2.2 Identification of Causal Effects

[Figure 2.7. Identification of the demand curve using movements of the supply curve. Shifts of the supply curve (S1, S2, S3) along a fixed demand curve D1 trace out the observed points (Q, P).]

If all we observe is price and quantity, then we cannot hope to identify either a demand or a supply function, even if we assume all shifts are linear shifts of the underlying curves; put simply, many potential shifting demand or supply curves could have produced the same set of market outcomes. For example, while we clearly need two equations to generate a single point as the prediction, the location of that point does not constrain the slope of either line. From price and quantity data alone it is impossible to empirically quantify the effect of an increase in prices on the quantity demanded and therefore to extract information such as the demand elasticity.

A major contribution associated with the work of authors such as Wright, Frisch, Koopmans, Wald, Mann, Tintner, and Haavelmo is to understand what is necessary to identify supply and demand curves (or indeed parameters in any set of linear simultaneous equations).[24] Between them they showed that in order to identify the demand function, we need to be able to exploit shifts in the supply function which leave the demand function unchanged. Figure 2.7 makes clear why: if we know that the observed equilibrium outcomes correspond to a particular demand function, we can simply use the shifts in supply to trace out the demand function. Supply shifts thus allow us to identify which parameter values (intercept, slope) describe the demand function. Supply shifters could be cost-changing variables such as input prices or exchange rates. Naturally, for such a variable to actually work to identify a demand curve we need it to experience sufficient variation in our data set. Too little data variation would give an estimate of the demand function over only a very small data range, and the extrapolation to other quantity or price levels would likely be inaccurate. Furthermore, in practice the demand curve will itself not usually stay constant, so that we are in fact trying to identify movements in supply that generate movement in price and quantity that we know are due to supply curve movement rather than demand curve movement (as distinct from a situation where all we know is that one of the curves must have moved to generate a different outcome).

24. For a history of the various contributions from these authors, see Bennion (1952).

[Figure 2.8. Movements in the demand curve can be used to help identify the supply curve. Shifts of demand (D1, D2, D3) along a fixed supply curve S1 generate the observed equilibria (Q0, P0) and (Q1, P1).]

If, on the other hand, demand is shifting and supply is constant, we cannot identify the demand function but we can potentially identify the supply function. This situation is represented in figure 2.8. A shifting demand will, for example, arise when effective but unobserved (by the econometrician) marketing campaigns shift demand outwards, increasing the amount that consumers are collectively willing to buy at any given price. As we described earlier, an OLS estimate of the coefficient on the price variable will in this case be biased: it will capture both the effect of the higher price and the effect of the advertising, because the higher price coincides with surges in demand that are unexplained by the regression.
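To make the simultaneity problem concrete, here is a minimal simulation sketch (all numbers are illustrative assumptions, not taken from the text). It generates equilibrium prices and quantities from linear demand and supply curves and shows that a regression of quantity on price recovers neither curve when both curves shift, but traces out the demand curve when only supply shifts, as in figure 2.7.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative linear curves (parameters are assumptions for this sketch):
#   Demand: Q = 100 - 2.0 * P + u_d   (true demand slope: -2.0)
#   Supply: Q = -20 + 3.0 * P + u_s   (true supply slope: +3.0)
u_d = rng.normal(0, 10, n)   # unobserved demand shifters (e.g., marketing)
u_s = rng.normal(0, 10, n)   # unobserved supply shifters (e.g., cost shocks)

# Market equilibrium: solve the two equations for price and quantity.
P = (120 + u_d - u_s) / 5.0
Q = 100 - 2.0 * P + u_d

# OLS slope of Q on P is Cov(Q, P) / Var(P): here roughly +0.5, which is
# neither the demand slope (-2.0) nor the supply slope (+3.0).
slope_ols = np.cov(Q, P)[0, 1] / np.var(P, ddof=1)
print(f"OLS slope, both curves shifting: {slope_ols:+.2f}")

# If only supply shifts (u_d = 0), the equilibria lie on the demand curve,
# so the same regression recovers the demand slope of -2.0.
P_s = (120 - u_s) / 5.0
Q_s = 100 - 2.0 * P_s
slope_supply_only = np.cov(Q_s, P_s)[0, 1] / np.var(P_s, ddof=1)
print(f"OLS slope, supply shifts only:   {slope_supply_only:+.2f}")
```

The first regression illustrates the endogeneity problem discussed next; the second illustrates why exogenous supply shifters can identify demand.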
The induced positive correlation between unobserved demand shifters and price generates "endogeneity" bias, and in essence our estimator faces an identification problem.

On occasion we will find genuinely upward-sloping demand curves, for example when analyzing extreme versions of the demand for "snob" goods, known as Veblen goods: expensive watches or handbags, where there may be negative network externalities so that consumers do not want lots of people to own them and actively value the fact that high prices drive others out of the market (Leibenstein 1950). Another example arises in extreme cases of inferior goods, where income effects dominate the direct effects of price rises and again we may believe demand curves actually slope upward. However, these are rare potential exceptions: even in the case of snob and inferior goods the indirect effects must be very strong indeed to dominate the direct effect (or the latter must be very weak). In contrast, it is extremely common to estimate apparently positive price coefficients during the early phases of a demand study. Ruling out such obviously wrong upward-sloping demand curves is, however, relatively easy. In many cases the effect of endogeneity is far more subtle, causing a bias in the coefficient that is not quite so obviously wrong: suppose we estimate a log-log demand curve and find a slope coefficient of -2. Is that because the actual own-price elasticity of demand is -2, or is it because the actual own-price elasticity of demand is -4 and our estimates are suffering from endogeneity bias? In practical settings, ruling out the obviously crazy is a good start and pushes us in the right direction. In this case, a good economic theory which clearly applies in a given practical context can tell us that the demand curve must (usually) slope down. This is not a very informative restriction, though it may suffice to rule out some estimates. Unfortunately, economic theory typically does not place very strong restrictions on what we should never (or even rarely) observe in a data set.[25] As a result, it may be of considerable help but will rarely provide a panacea.

The study of identification[26] establishes sets of theoretical conditions under which, given "enough" data, we can learn about particular parameters.[27] After such an "identification theorem" is proven, however, there remain very important practical questions, namely (i) how many data constitute "enough" and (ii) in any given empirical project do we have enough data? Even if we have theoretical identification and the mean independence restrictions between unobservables and exogenous variables hold, we may still not be able to identify the parameters of our model if there is insufficient real data variation in the exogenous variables. In a given data set, if our parameters are not being "well" identified because of a lack of data variation, we will find large estimated standard errors. Given enough data these may become small, but "enough" may sometimes require a huge amount of data. In practical competition agency decision making, where we can collect the best cost data that firms hold, such difficulties are regular occurrences when we try to use cost data from firms to identify their demand equations. Often the cost data are relatively infrequently collected or updated and hence do not contain a great deal of variation, and hence information.
Such data will in reality often have a hard time identifying demand curves, even if in theory the data should be very useful. In practical terms, the general advice is therefore the following:

(a) Consider whether the identification assumptions (e.g., conditional mean independence) that the estimator uses are likely to be valid assumptions.

(b) Put a substantial amount of thought into finding variables that industry experience and company documents indicate will significantly affect each of supply and demand conditions.

(c) Pay particular attention to finding variables which are known to affect either supply or demand but not both.

(d) Use estimates of standard errors to help evaluate whether parameters are actually being identified in a given data set. Large standard errors often indicate that you do not have enough information in the sample to actually achieve identification, even if in theory (given an infinite sample) your model is well identified. In an extreme case of a complete failure of identification, standard errors will be reported as either extremely large or missing values in the regression output.

25. Supply-side theory can be somewhat helpful as well. For instance, every industrial organization economist knows that no profit-maximizing firm should price at a point where demand is inelastic. Between them, the restrictions from profit maximization and utility theory (demand slopes down) tell us that own-price elasticities should usually be greater than 1 in magnitude. In relying on such theory it is important to keep in mind whether it fits the industry; for example, we know that when low prices today beget high demand today but also high demand tomorrow (as with experience goods), firms may have incentives to price at a point where static demand elasticities are below 1 in magnitude.

26. For a further discussion of the formalities of identification, see the annex to this chapter (section 2.5).

27. A discussion of identification of supply and demand in structural equations can be found in chapter 6.

Even if we cannot account for all relevant covariates, identification of demand (or supply) functions is often possible if we correctly use the methods that have been developed over the years to aid identification. We now turn to a presentation of the techniques most often used in empirical analysis to achieve identification. For example, we introduce fixed-effects estimators, which can account for unobserved shifts correlated with our variables. We also study the important technique of instrumental variables, which, instead of using the conditional mean restriction associated with OLS that the regressors be mean independent of the error term, $E[U \mid X] = 0$, relies upon the alternative moment restriction that another variable $Z$ be uncorrelated with the error, $E[U \mid Z] = 0$, but sufficiently related to $X$ to predict it, so that this prediction of $X$ by $Z$ is what is actually used in the regression. We will also describe the advantages and disadvantages of using "natural experiments" and event studies, which attempt to use exogenous shocks to an explanatory variable to identify its causal effect.

2.2.3 Methods Used to Achieve Identification

The study of identifying causal effects is an important one and, unsurprisingly, a variety of techniques have been developed, some crude, others very subtle.
At the end of the day we want to do our best to make sure that the estimate of a parameter captures no effect other than the one it is supposed to capture, namely the direct effect of that particular explanatory variable on the outcome. We first discuss the simplest of all the methods, the "fixed-effects" technique, before moving on to discuss the technique of "instrumental variables" and the technique commonly described as using "natural experiments." Finally, we also introduce event studies, which share the intuition of natural experiments.[28]

28. There is an active academic debate regarding the extent of similarity and difference between the instrumental variable and natural experiment approaches. We do not attempt to unify the approaches here, but those interested in the links should see, for example, Heckman and Vytlacil (2005).

2.2.3.1 Fixed Effects

We have said that one reason why identifying causal effects is difficult is that we must control for omitted variables which have a simultaneous effect on one or more explanatory variables and on the outcome.[29] One approach is to attempt to control for all the necessary variables, but that is sometimes impossible; the data may simply not be available, and in any case we may not even know exactly what we should be controlling for (what is potentially omitted) or how to measure it. In very special circumstances a fixed-effects estimator will help overcome such difficulties.

For example, in production function estimation it is common to want to measure the effect of inputs on outputs. One difficulty in doing so is that firms generally have quite different levels of productivity, perhaps because firms can have very good or fairly poor processes for transforming inputs into outputs. If processes do not change much over short time periods, then we can call $\alpha_i$ firm $i$'s productivity and propose a model for the way in which inputs are transformed into output of the form

$y_{it} = \alpha_i + w_{it}\beta + u_{it}$,

where $y_{it}$ is output from firm $i$ in period $t$ and $w_{it}$ is the vector of inputs. As a profession, economists have a very hard time finding data that directly measure "firm productivity," at least without sending people into individual factories to perform benchmarking studies. On the other hand, if the processes do not vary much in relation to the frequency of our data, we might think that productivity can be assumed constant over time. If so, then we can use the fact that we observe a factory's inputs and output on multiple occasions to estimate the factory's productivity $\alpha_i$.

To emphasize the distinction we might write (more formally but equivalently) the fixed-effects model as

$y_{it} = \sum_{g=1}^{n} d_{ig}\alpha_g + w_{it}\beta + u_{it}$,

where $d_{ig}$ is a dummy variable taking the value 1 if $i = g$ and zero otherwise. The advantage of this way of writing the model is that it makes entirely clear that $d_{ig}$ is "data" while the $\alpha_g$'s are parameters to be estimated (so that to construct, for example, an OLS estimator we would construct an $X$ matrix with rows $x_{it}' = (d_{i1}, \ldots, d_{in}, w_{it})$). The initial formulation is useful as shorthand, but some people find it so concise that it confuses.

If we ignored the role of productivity in our model, we would use the regression specification $y_{it} = w_{it}\beta + v_{it}$, so that if the DGP were $y_{it} = \alpha_i + w_{it}\beta_0 + u_{it}$, we would have an unobservable which consisted of $v_{it} = \alpha_i + u_{it}$.
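The following sketch illustrates the problem and the dummy-variable formulation with a purely hypothetical data-generating process (the numbers and the correlation between productivity and input use are assumptions made for the illustration, not estimates from the text). It compares pooled OLS, which ignores $\alpha_i$, with the least-squares dummy variable (fixed-effects) estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_periods = 200, 10
beta = 0.7                                  # assumed true causal effect of the input

alpha = rng.normal(0.0, 1.0, n_firms)       # unobserved firm productivity
firm = np.repeat(np.arange(n_firms), n_periods)
# Input choice is correlated with productivity: more productive firms use more input.
w = alpha[firm] + rng.normal(0.0, 1.0, n_firms * n_periods)
y = alpha[firm] + beta * w + rng.normal(0.0, 0.5, n_firms * n_periods)

# Pooled OLS, ignoring alpha_i: regress y on a constant and w.
X_pooled = np.column_stack([np.ones_like(w), w])
b_pooled = np.linalg.lstsq(X_pooled, y, rcond=None)[0]

# Fixed effects (least-squares dummy variables): one dummy column per firm, plus w.
D = (firm[:, None] == np.arange(n_firms)[None, :]).astype(float)
X_fe = np.column_stack([D, w])
b_fe = np.linalg.lstsq(X_fe, y, rcond=None)[0]

print(f"pooled OLS slope: {b_pooled[1]:.3f}")   # biased upwards (about 1.2 here)
print(f"LSDV slope:       {b_fe[-1]:.3f}")      # close to the true value 0.7
```

The upward bias in the pooled estimate is exactly the endogeneity problem described next.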
In that case OLS estimators will typically suffer from an endogeneity bias, since the error term $v_{it} = \alpha_i + u_{it}$ and the variables $w_{it}$ will be correlated because of the presence of firm $i$'s productivity $\alpha_i$ in the error term. The reason is that firms' productivity levels will typically also affect their input choices, i.e., the value of $w_{it}$. Indeed, while discussing the relationships between production functions, cost functions, and input demand equations in chapter 1, we showed that firms' input demands will depend on their level of productivity. In particular, to produce any given amount of output, high-productivity firms will tend to use fewer inputs. There is, however, at least one additional effect, namely that high-productivity firms will also tend to produce a lot and therefore use lots of inputs. As a result, we cannot theoretically predict the direction of the overall bias, although most authors find that the latter effect dominates empirically. Specifically, most authors find that OLS estimates of the parameters on inputs in production functions tend to lie above fixed-effects estimates, because of the positive bias induced by efficient firms producing a lot and hence using lots of inputs.[30]

29. Most econometrics texts have a discussion of fixed-effects estimators. One nice discussion in addition to those in standard textbooks is provided in chapter 3 of Hsiao (1986).

30. See, for example, the comparison of OLS and fixed-effects estimates reported in table VI of Olley and Pakes (1996).

To use a fixed-effects approach we must have a data set in which there are sufficient observations that share unobserved characteristics. For instance, in our example we assumed we had data from each firm on multiple occasions. More generally, we need to be able to "group" observations and still have enough data within the groups to use the "within-group" variation in the independent and dependent variables to identify the causal effects. Continuing our example, it is the fact that we observe each firm on multiple occasions that will allow us to estimate firm-specific fixed effects; the "group" of observations consists of those across time on a given firm.

The general approach to applying the fixed-effects technique is to add a group-specific dummy variable that controls for those omitted variables that are assumed constant across members of the same group but that may vary across groups. A group fixed effect is a dummy variable that takes the value 1 for all observations belonging to the group, perhaps a city or a firm, and 0 otherwise. The dummy variable controls for the effect of belonging to such a group, so that any group-specific unobserved characteristic that might otherwise have affected both the dependent and the explanatory variables is accounted for. In practice, a fixed-effects regression can be written as

$y_{it} = \sum_{g=1}^{G} d_{ig}\alpha_g + w_{it}\beta + u_{it}$,

where the $d_{ig}$ are dummy indicators that take the value 1 when observation $i$ belongs to group $g$, and $g$ indexes the $G$ groups, so $g = 1, \ldots, G$. The coefficient $\beta$ identifies the effect of the variables in $w_{it}$ on the outcome $y_{it}$ while controlling for the factors which are constant across members of group $g$, which are encapsulated in $\alpha_g$.

The parameters in this model are often described as being estimated using "within-group" data variation, although the term can sometimes be a misnomer since this
regression would in fact use both within- and between-group data variation to identify $\beta$. To see why, consider the more general model

$y_{it} = \sum_{g=1}^{G} d_{ig}\alpha_g + \sum_{g=1}^{G} (d_{ig}w_{it})\beta_g + u_{it}$,

in which there are group-specific intercepts and also group-specific slope parameters. Provided the groups of observations are mutually exclusive, the OLS estimates of this model can be shown to be

$\hat\beta_g = \Big(\sum_{(i,t)\in I_g} (w_{it} - \bar w_g)(w_{it} - \bar w_g)'\Big)^{-1} \Big(\sum_{(i,t)\in I_g} (w_{it} - \bar w_g)(y_{it} - \bar y_g)\Big)$ and $\hat\alpha_g = \bar y_g - \bar w_g \hat\beta_g$ for each $g = 1, \ldots, G$,

where $I_g$ denotes the set of $(i,t)$ observations in group $g$ and where $\bar w_g$ and $\bar y_g$ are respectively the averages across the $(i,t)$ observations in the group. To see this is true, write the model in matrix form, stack the sets of observations in their groups, and note that the resulting matrices $X_g$ and $X_h$ satisfy $X_g'X_h = 0$ for $g \neq h$ because $d_{ig}d_{ih} = 0$ (see also, for example, Hsiao 1986, p. 13). Recall that in a standard panel data context the group of data will mean all the observations for a given firm over time, so the within-group averages are just the averages over time for a given firm. Similarly, the summations in the expression for $\hat\beta_g$ are summations over observations in the group, i.e., over time for a given firm. Note that the estimates of both the intercept and slope parameters for each group $g$ depend only on data coming from within group $g$, and it is in that sense that estimates of this general model are truly dependent only on within-group data variation.

In contrast, when estimating the more specific fixed-effects model first introduced, which restricts the slope coefficients to be equal across groups so that $\beta_1 = \beta_2 = \cdots = \beta_G \equiv \beta$, the OLS estimates of the model become

$\hat\beta = \Big(\sum_{g=1}^{G}\sum_{(i,t)\in I_g} (w_{it} - \bar w_g)(w_{it} - \bar w_g)'\Big)^{-1} \Big(\sum_{g=1}^{G}\sum_{(i,t)\in I_g} (w_{it} - \bar w_g)(y_{it} - \bar y_g)\Big)$ and $\hat\alpha_g = \bar y_g - \bar w_g \hat\beta$ for each $g = 1, \ldots, G$,

which clearly, via the estimator $\hat\beta$, uses information from all of the groups of data.

Despite the fact that this latter estimator uses information from all groups, it is often known as the "within-group" estimator. The reason is that the estimator is numerically identical to the one obtained by estimating a model using variables expressed as differences from their group means, namely estimating the following model by OLS, where the group-specific fixed effects have been differenced out:

$(y_{it} - \bar y_g) = (w_{it} - \bar w_g)\beta + e_{it}$, where $e_{it} = u_{it} - \bar u_g$.

Thus, in this particular sense the estimator is a within-group estimator, namely it exploits only the variation in the data that remains once group-specific intercept terms have been controlled for. Note that this is not the same as using only within-a-single-group data variation; rather, the OLS estimator uses the variation within all of the groups to identify the slope parameters of interest. Since it involves the ratio of an averaged covariance to an averaged variance, the estimator $\hat\beta$ can perhaps be understood as an average of the actual "within-group" estimators $\hat\beta_g$ over all groups. In the case of the restricted model, where the DGP involves slope parameters that are the same across groups, the parameters $(\alpha_1, \ldots, \alpha_G)$ successfully account for all of the between-group variation in the observed outcomes $y_{it}$, and a fixed-effects regression will add efficiency compared with using only data variation within a single group, in the way that the more general model did.
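A quick numerical check of this equivalence, reusing the hypothetical firm-level simulation from above (regenerated here so the snippet runs on its own): the least-squares dummy variable slope and the slope from the regression on group-demeaned variables coincide up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, n_periods, beta = 200, 10, 0.7
alpha = rng.normal(0.0, 1.0, n_firms)
firm = np.repeat(np.arange(n_firms), n_periods)
w = alpha[firm] + rng.normal(0.0, 1.0, n_firms * n_periods)
y = alpha[firm] + beta * w + rng.normal(0.0, 0.5, n_firms * n_periods)

# Least-squares dummy variables: firm dummies plus w.
D = (firm[:, None] == np.arange(n_firms)[None, :]).astype(float)
b_lsdv = np.linalg.lstsq(np.column_stack([D, w]), y, rcond=None)[0][-1]

# "Within" estimator: demean y and w by firm, then regress without any dummies.
def group_demean(v):
    group_means = np.bincount(firm, weights=v) / np.bincount(firm)
    return v - group_means[firm]

y_dm, w_dm = group_demean(y), group_demean(w)
b_within = (w_dm @ y_dm) / (w_dm @ w_dm)

print(f"LSDV slope:   {b_lsdv:.12f}")
print(f"within slope: {b_within:.12f}")   # numerically identical to the LSDV slope
```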
However, when the true (DGP) slope coefficients actually differ across groups, such an estimator will not be consistent.

The econometric analysis above suggests that fixed effects can be an effective way to solve an endogeneity problem and hence can help identify causal relationships. In doing so, the various estimators use particular dimensions of the variation in our data in an attempt to identify true causal relationships. OLS without any group-specific parameters uses all the covariation between the outcome and the control variables. In contrast, introducing a full set of group-specific intercepts and slope coefficients allows us to use only within-group data variation to identify causal effects, while the more conventional fixed-effects estimator uses within-group data variation and some across-group data variation to identify the causal effects.

Fixed effects are particularly helpful if (i) we have limited data on the drivers of unobserved differences, (ii) we know that the true causal effects, those estimated by $\hat\beta$, are common across groups, and (iii) we know that unobserved factors common to a group of observations are likely to play an important role in determining the outcome $y$. Of course, the latter two assumptions are strong ones. The second requires that the various groups of data be sufficiently similar for the magnitude of causal effects to be the same, while the third requires that members of each group be sufficiently similar that the group-specific constant term in our regressions will solve the endogeneity problem. These assumptions are rarely absolutely true, and so we should rely on fixed-effects estimators only having taken a view on the reasonableness of these approximations. For example, in reality firms' processes and procedures both differ across firms and evolve over time. Even if adding labor to each firm causes it to produce the same amount of additional output, as would be required for the causal effect of labor on output to be the same for every firm, any factor affecting productivity which varies over time for a given firm (e.g., as a result of firms adopting new technology or adapting their production process) would be missed by a fixed effect. Such factors will prevent successful identification if the movement reintroduces a correlation between an explanatory variable and the error in the regression.[31]

Because fixed-effects regression uses within-group data variation, there must be enough variation in the variables $x$ and $y$ within each group (or at least some groups) to produce an effect that can be measured with accuracy. When the variation in the explanatory variables is mostly across groups, a fixed-effects approach is unlikely to produce useful results. In such circumstances the estimated standard errors of the fixed-effects estimator will tend to be very large and the estimate of the slope parameters will be "close" to zero. In the limit, if in our data set $w_{it} \approx \bar w_g$ for all $i$ in each group $g$, so that there is little within-group data variation, the reported estimate of $\beta$ would be either approximately zero, very large, or ill-defined; the latter two possibilities occur when the matrix to be inverted is close to or actually singular, so that we are effectively dividing by numbers very close to zero.
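A tiny illustration of this failure mode with made-up data: the regressor below varies almost entirely across groups, so demeaning leaves essentially nothing to work with and the within estimate and its standard error blow up.

```python
import numpy as np

rng = np.random.default_rng(2)
n_groups, n_per = 50, 10
group = np.repeat(np.arange(n_groups), n_per)
n = n_groups * n_per

# Regressor that is essentially constant within each group (tiny within-group noise):
w = np.repeat(rng.normal(0.0, 1.0, n_groups), n_per) + rng.normal(0.0, 1e-6, n)
y = 2.0 * w + rng.normal(0.0, 1.0, n)             # true slope is 2.0

counts = np.bincount(group)
w_dm = w - (np.bincount(group, weights=w) / counts)[group]
y_dm = y - (np.bincount(group, weights=y) / counts)[group]

b_within = (w_dm @ y_dm) / (w_dm @ w_dm)          # w_dm @ w_dm is nearly zero
resid = y_dm - b_within * w_dm
se_within = np.sqrt(resid @ resid / (n - n_groups - 1) / (w_dm @ w_dm))
print(f"within estimate: {b_within:,.0f}  (true value 2.0)")
print(f"standard error:  {se_within:,.0f}")       # enormous: the data cannot identify beta
```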
In such cases the fixed-effects estimator is not being well identified by the available data set, even though with enough, or better, data we would perhaps be able to identify the parameters of the model successfully.

Another related and frequently used technique is the random-effects regression. Random-effects regression treats the common factor within a group, $\alpha_g$, as a modeled part of the error term: a common but random shock from a known distribution rather than a fixed parameter to be estimated. The advantage of this technique is that it does not result in a very large number of regressors, as can be the case in fixed-effects regression, and this can ease computational burdens. On the other hand, it makes the nontrivial assumption that the common characteristics shared by the group are random and uncorrelated with any of the explanatory variables included in the regression (see, for example, the discussion and potential solution in Mundlak (1978)).[32] The fixed-effects disadvantage of computational constraints is far less important now than it once was, and as a result fixed-effects estimators have tended to be preferred in recent years.

31. For a proposal for dealing with time-varying situations, see Olley and Pakes (1996) and also the important discussion in Ackerberg et al. (2005). Ensuring that production functions are estimated using data from firms with similar "enough" underlying production technologies will help mitigate concerns that causal effects differ across firms. For example, the same production function is unlikely to be appropriate for both power stations generating hydroelectricity and those using natural gas as a fuel.

32. If we do have data on measures or causes of firm productivity, collected in a vector $x_i$, we might consider a model in which $\alpha_i$ is specified as a linear function of $x_i$ plus an error $e_i$, which also has the advantage that the resulting $\alpha_i$ can be correlated with the included $w_{it}$ variables (see Mundlak 1978; Chamberlain 1982, 1984).

For further discussion and examples, see chapter 3, where we examine a fixed-effects approach to production function estimation, and chapter 5, where we examine a fixed-effects approach to estimating the effect of market structure on the prices charged in a market.

2.2.3.2 Instrumental Variables

Instrumental variables are used frequently in the empirical analysis of competition issues.[33] For example, they are the most common solution to endogeneity and identification problems in the estimation of demand functions. Formally, suppose we have the following single-equation regression model:

$y_i = x_{1i}\beta_1 + x_{2i}'\beta_2 + \varepsilon_i$,

where $\beta = (\beta_1, \beta_2)$, $x_i' = (x_{1i}, x_{2i}')$, the vector of variables $x_{2i}$ is exogenous, and $x_{1i}$ is endogenous. That is, the variable $x_{1i}$ is correlated with the error term $\varepsilon_i$, so that the identification restriction used by an OLS estimator is not valid.[34] Instrumental variable techniques propose an alternative identifying assumption, namely that we have a set of variables $z_i = (z_{1i}, x_{2i})$ which are correlated with $x_i$ but uncorrelated with the error term. For example, in a demand equation where $y_i$ denotes sales and $x_{1i}$ denotes price, we may believe that the DGP does not satisfy the identification assumption used for OLS estimators, that unobserved determinants of sales are uncorrelated with prices, so that $E[\varepsilon_i \mid x_i] \neq 0$.
Instead, we make the alternative identification assumption needed to apply instrumental variable techniques: that there is a variable $z_i$ which is correlated with price but which does not affect sales in an independent way, so that $E[\varepsilon_i \mid z_i] = 0$ and $E[x_i \mid z_i] \neq 0$. It turns out that these assumptions allow us to write down a number of consistent estimators for our parameters of interest $\beta$, including (i) a first instrumental variable (IV) estimator and (ii) the two-stage least-squares (2SLS) estimator.[35]

To define a first IV estimator, stack up the equation $y_i = x_i'\beta + \varepsilon_i$ over the $i = 1, \ldots, n$ observations so that we can write the matrix form $y = X\beta + \varepsilon$, where $y$ is $(n \times 1)$ and $X$ is $(n \times k)$ for our data set, and define the $(n \times p)$ matrix of instrumental variables $Z$ analogously. Define a first instrumental variable estimator

$\hat\beta_{\mathrm{IV}} = [Z'X]^{-1}Z'y = \big[\tfrac{1}{n}Z'X\big]^{-1}\tfrac{1}{n}Z'y$.

33. Instrumental variables as a technique are usually attributed jointly to Reiersol (1945) and Geary (1949). See also Sargan (1958). For more recent literature see, for example, Newey and Powell (2003). For formal econometric results, see White (2001).

34. For simplicity we present the case where there is one endogenous variable. If we have more than one endogenous variable in $x_{1i}$, little of substance changes beyond the fact that we will need at least one variable in $z_i$, i.e., one instrument, for each endogenous variable in $x_{1i}$, and in the 2SLS regression approach we will have one set of first-stage regression output for each endogenous variable.

35. We call the former estimator "a" first IV estimator deliberately, since 2SLS is also an IV estimator and, as we shall see, generally a more efficient one.
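A minimal numerical sketch of the estimator $\hat\beta_{\mathrm{IV}} = [Z'X]^{-1}Z'y$ (hypothetical data in the spirit of the supply-and-demand example above; the cost shifter and all parameter values are assumptions made for the illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Assumed market: demand Q = 100 - 2.0*P + u_d, supply Q = -20 + 3.0*P + u_s,
# where supply shocks are partly driven by an observed cost shifter.
u_d = rng.normal(0, 10, n)                # unobserved demand shifter
cost = rng.normal(0, 5, n)                # observed cost shifter: the instrument
u_s = 4.0 * cost + rng.normal(0, 2, n)
P = (120 + u_d - u_s) / 5.0               # equilibrium price
Q = 100 - 2.0 * P + u_d                   # equilibrium quantity

X = np.column_stack([np.ones(n), P])      # regressors in the demand equation
Z = np.column_stack([np.ones(n), cost])   # instruments: exogenous columns plus cost

b_ols = np.linalg.solve(X.T @ X, X.T @ Q)
b_iv = np.linalg.solve(Z.T @ X, Z.T @ Q)  # beta_IV = (Z'X)^{-1} Z'Q

print(f"OLS price coefficient: {b_ols[1]:+.2f}")  # biased (about -1.0 here)
print(f"IV  price coefficient: {b_iv[1]:+.2f}")   # close to the true -2.0
```

With more instruments than endogenous regressors, $Z'X$ is no longer square, and 2SLS generalizes this idea by first projecting $X$ onto $Z$.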
questions and the more usual experience is to iterate back and forth between them On the other hand, in the context of antitrust investigations, the question and laboratory may be very well defined For example, we may need to evaluate the impact of a merger on prices in a particular industry Even so we will need to think hard about the environment in which our firms operate, the strategic and nonstrategic... frontier analysis (SFA), and data envelopment analysis (DEA) 3.1 Accounting and Economic Revenue, Costs, and Profits 125 3.1 Accounting and Economic Revenue, Costs, and Profits To econometrically estimate cost functions an analyst will of course typically need access to cost data (alternatives using production and input demand data were explored in chapter 1) Unfortunately, analysts must tread carefully... Reconciling Accounting and Economic Costs There are some important differences between the definition of costs used by economists and those used in practice in managerial and even more particularly in financial accounting While such differences are quite generally and regularly stressed by industrial organization economists, following in particular an influential article by Fisher and McGowan (1983),3 doing... markets, for instance, regularly attempt to extract useful information, even from published financial accounts Such information is used, for example, to build or at least inform firm valuation models to price equities It is therefore possible, indeed probable, that academic industrial organization has somewhat “thrown out the baby with the bathwater” in almost entirely discarding accounting information... great deal of very good data and careful analysis For that reason, and in the face of budgetary constraints, many agencies prefer to make qualitative judgements about whether—or not—prices are justified by Ramsey pricing style arguments 3.1 Accounting and Economic Revenue, Costs, and Profits 127 used if it had been invested in the next best alternative, appropriately adjusted for risk profile Similarly,... products A and B are demand substitutes This experiment uses only time series variation but closing and reopening periods mean we might have multiple relevant events which could help 39 The diversion ratio (DR) between two products A and B is the proportion of sales that are captured by product B when product A increases its price by some amount The DR tells us about substitutability between products and . market. 2.2 .3. 2 Instrumental Variables Instrumental variables are used frequently in the empirical analysis of competition issues. 33 For example, they are the most common solution to endogeneity and. caused by movement in costs (which is in addition to any variation in prices explained by movements in the demand curve caused by the exogenous demand shifting variables X 2 ). Standard errors and. demanded and therefore to extract information such as the demand elasticity. A major contribution associated with the work of authors such as Wright, Frisch, Koopmans, Wald, Mann, Tintner, and