Heckman, James J., Econometric Causality (April 2008). NBER Working Paper No. w13934. Available at SSRN: https://ssrn.com/abstract=1121726
- The concept of causality...is based on the notion of controlled variation - variation in treatment holding other factors constant. It is distinct from other notions of causality based on prediction (e.g. Granger, 1969, and Sims, 1972). Holland (1986) makes useful distinctions among commonly invoked definitions of causality.
- The econometric approach to policy evaluation...emphasizes the provisional nature of causal knowledge. Some statisticians reject the notion of the provisional nature of causal knowledge and seek an assumption-free approach to causal inference (see, e.g., Tukey, 1986). However, human knowledge advances by developing theoretical models and testing them against data. The models used are inevitably provisional and depend on a priori assumptions. Even randomization, properly executed, cannot answer all of the relevant causal questions.
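The claim that even a perfect randomized experiment cannot answer every causal question can be made concrete with a small simulation (my sketch, not the paper's): randomization identifies the marginal distributions of the two potential outcomes, and hence the mean effect, but not their joint distribution, so the fraction of agents who benefit is not identified. The two hypothetical "worlds" below share the same marginals yet differ in who benefits.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

y0 = rng.normal(0.0, 1.0, n)        # untreated outcome, N(0, 1)

# World A: every agent gains exactly 0.5, so Y(1) ~ N(0.5, 1).
y1_a = y0 + 0.5
# World B: gains move against the baseline; by symmetry of the normal,
# Y(1) = -Y(0) + 0.5 is also distributed N(0.5, 1).
y1_b = -y0 + 0.5

# An experiment reveals the marginals, so both worlds look identical to it:
ate_a = (y1_a - y0).mean()          # mean effect, ~0.5
ate_b = (y1_b - y0).mean()          # mean effect, ~0.5 as well

# But the causal question "what share of agents benefit?" differs sharply:
frac_a = (y1_a > y0).mean()         # World A: everyone benefits
frac_b = (y1_b > y0).mean()         # World B: only agents with Y(0) < 0.25, ~60%

print(f"ATE: {ate_a:.3f} vs {ate_b:.3f}; "
      f"share benefiting: {frac_a:.2f} vs {frac_b:.2f}")
```

Distinguishing the two worlds requires assumptions or a model of the dependence between Y(0) and Y(1), which is exactly the kind of a priori structure the experiment alone cannot supply.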
- Many "causal models" in statietics are incomplete guides to interpreting data or for sugeesting answers to particular policy questions. They are motivated by the experiment as an ideal.... The construction of counterfactual outcomes is based on appeals to intuition and not on formal models.
- Because the mechanisms determining outcome selection are not modeled in the statistical approach, the metaphor of "random assignment" is often adopted. This emphasis on randomization or its surrogates, like matching or instrumental variables, rules out a variety of alternative channels of identification of counterfactuals from population or sample data.
- One reason why many statistical models are incomplete is that they do not specify the sources of randomness generating variability among agents, i.e., they do not specify why otherwise observationally identical people make different choices and have different outcomes given the same choice. They do not distinguish what is in the agent's information set from what is in the observing statistician's information set, although the distinction is fundamental in justifying the properties of any estimator for solving selection and evaluation problems.
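The gap between the agent's and the statistician's information sets can be illustrated with a Roy-style selection sketch (my construction, not the paper's): each agent privately knows their idiosyncratic gain from treatment and acts on it, while the statistician observes only choices and realized outcomes. The naive comparison of self-selected groups is then biased for the population mean gain.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Observationally identical agents differ through unobservables:
y0 = rng.normal(1.0, 1.0, n)          # untreated outcome
gain = rng.normal(0.5, 1.0, n)        # agent-specific gain; true mean gain 0.5
y1 = y0 + gain                        # treated outcome

# Selection rule: agents take the treatment exactly when their privately
# known gain is positive -- information the statistician never observes.
d = gain > 0

# The statistician's naive comparison of self-selected groups:
naive = y1[d].mean() - y0[~d].mean()  # ~1.0, inflated by selection on gains
true_mean_gain = gain.mean()          # ~0.5, the population mean gain

print(f"naive comparison: {naive:.3f}, true mean gain: {true_mean_gain:.3f}")
```

The bias arises precisely because the selection rule depends on a variable inside the agent's information set but outside the statistician's; modeling that rule explicitly is what the selection-model tradition described here does.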
- The goal of the econometric literature, like the goal of all science, is to understand the causes producing effects so that one can use empirical versions of the models to forecast the effects of interventions never previously experienced, to calculate a variety of policy counterfactuals and to use scientific theory to guide the choice of estimators and the interpretation of the evidence. These activities require development of a more elaborate theory than is envisioned in the current literature on causal inference in statistics.
- Many causal models in statistics are black-box devices designed to investigate the impact of "treatments" (often complex packages of interventions) on observed outcomes in a given environment. Unbundling the components of complex treatments is rarely done. ... In the terminology of Holland (1986), the distinction is between understanding the "effects of causes" (the goal of the treatment effect literature as a large group of statisticians define it) and understanding the "causes of effects" (the goal of the econometric literature building explicit models).
- It produces parameters that do not lend themselves to extrapolation out of sample or to accurate forecasts of impacts of other policies besides the ones being empirically investigated.... It lacks the ability to provide explanations for estimated "effects" grounded in theory. When the components of treatments vary across studies, knowledge does not accumulate across treatment effect studies, whereas it does accumulate across studies estimating models generated from common parameters that are featured in the econometric approach.
- Forecasting the Impacts of Interventions Never Historically Experienced to Various Environments... This problem requires that one use past history to forecast the consequences of new policies. It is a fundamental problem in knowledge.
- A central feature of the econometric approach to program evaluation is the evaluation of subjective valuations as perceived by decision makers and not just objective valuations.
- The required invariance assumptions state, for example, that utilities are not affected by randomization or the mechanism of assignment of constraints.
- Another invariance assumption rules out social interactions in both subjective and objective outcomes. It is useful to distinguish invariance of objective outcomes from invariance of subjective outcomes. Randomization may affect subjective evaluations through the uncertainty it adds to the decision process, but it may not affect objective valuations. The econometric approach models how assignment mechanisms and social interactions affect choice and outcome equations rather than postulating a priori that invariance postulates are always satisfied for outcomes.
The Evaluation Problem
- In the absence of a theory, there are no well defined rules for constructing counterfactual or hypothetical states or constructing the rules for assignment to treatment.
- Articulated scientific theories provide algorithms for generating the universe of internally consistent, theory-consistent counterfactual states.
- structural econometric analysis... Empirical models explicitly based on scientific theory pursue this avenue of investigation. Some statisticians call this the "scientific approach" and are surprisingly hostile to it. See Holland (1986).
- The approach in the recent treatment effect literature redirects attention away from estimating the determinants of Y(s,w) (the outcome in state (policy/treatment) s for agent w) toward estimating some population version of Y(s,w)-Y(s',w), most often a mean, without modeling what factors give rise to the outcome or the relationship between the outcomes and the mechanism selecting outcomes. The statistical treatment effect literature ... ignores the problem of forecasting a new policy in a new environment or a policy never previously experienced. Forecasting the effects of new policies is a central task of science.
Counterfactuals, Causality and Structural Econometric Models
- ...structural models and use them as devices for generating counterfactuals.
The Econometric Model vs. the Neyman-Rubin Model
- The statistical treatment effect literature originates in the statistical literature on the design of experiments. It draws on hypothetical experiments to define causality and thereby creates the impression in the minds of many of its users that random assignment is the most convincing way to identify causal models. Some would say it is the only way to identify causal models... Rubin and Neyman offer no model of the choice of which outcome is selected.
- Unlike the Neyman-Rubin model, these (selection) models do not start with the experiment as an ideal; rather, they start with well-posed, clearly articulated models for outcomes and treatment choice in which the unobservables that underlie the selection and evaluation problem are made explicit... Randomization is a metaphor and not an ideal or "gold standard".
- In contrast to the econometric model, the Holland (1986)-Rubin (1978) definition of causal effects is based on randomization.... a dichotomy between randomization and non-randomization, and not an explicit treatment of particular selection mechanisms in the non-randomized case as developed in the econometrics literature.
- The econometric approach to causal inference is more comprehensive than the Neyman-Rubin model of counterfactuals. It analyzes models of the choice of counterfactuals and the relationship between choice equations and the counterfactuals.