Further, without the ability to simulate counterfactuals, and more generally to make claims of external validity, the role of empirical analysis is limited to analyzing past events, without any way to use this accumulated knowledge in a constructive and organized fashion.
Solving structural models, especially dynamic stochastic models, involves numerical methods. These numerical methods are used to simulate outcomes and counterfactuals as well as to generate moments for use in estimation.
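As an illustration of what "solving the model" involves, here is a minimal sketch of a dynamic stochastic consumption-saving problem solved by value function iteration. Every functional form, grid, and parameter value below is invented for the example; it is a sketch of the technique, not any particular model from the literature.

```python
import numpy as np

# Illustrative consumption-saving problem solved by value function iteration.
# All functional forms and parameter values are assumptions for this sketch.
beta, r, sigma = 0.95, 0.04, 2.0           # discount factor, interest rate, CRRA
assets = np.linspace(0.1, 10.0, 100)       # asset grid (choice and state)
shocks = np.array([0.8, 1.0, 1.2])         # income shock values
probs = np.array([0.25, 0.5, 0.25])        # shock probabilities

def u(c):
    return c ** (1 - sigma) / (1 - sigma)  # CRRA period utility

V = np.zeros((len(assets), len(shocks)))
for _ in range(400):                       # iterate the Bellman operator
    EV = V @ probs                         # expected continuation value
    V_new = np.empty_like(V)
    for i, a in enumerate(assets):
        for j, y in enumerate(shocks):
            c = (1 + r) * a + y - assets   # consumption for each asset choice
            val = np.where(c > 0, u(np.maximum(c, 1e-12)) + beta * EV, -np.inf)
            V_new[i, j] = val.max()        # optimal choice given (a, y)
    if np.max(np.abs(V_new - V)) < 1e-7:   # sup-norm convergence check
        V = V_new
        break
    V = V_new
```

Once the value function (and the implied policy function) is in hand, simulated outcomes and counterfactuals follow by drawing shocks and re-solving under alternative parameters, for example a different interest rate.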
We call these models fully specified because they allow a complete solution to the individual's optimization problem as a function of the current information set.
Rust (1992) and Magnac and Thesmar (2002) ... conclude that dynamic discrete choice models require strong assumptions for identification.
A structural model that fully specifies behavior can go much further than simply estimating a parameter of interest or testing a particular theoretical hypothesis. To achieve this, a number of simplifying assumptions have to be made to maintain feasibility and some level of transparency.
rely on a sufficient statistic that summarizes the choices not being modeled explicitly... the sufficient statistic is the amount of consumption allocated to the period. The econometric model defines a relationship between labor supply and wages, conditional on consumption, and looks like a traditional labor supply model. The model is partially specified, in the sense that there is not enough information to solve for the optimal choice as a function of the information set:... but we cannot simulate counterfactuals.
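A stylized simulation of this idea, with an entirely invented data-generating process: conditional on (log) consumption, (log) hours are regressed on (log) wages, as in a traditional labor supply equation. The elasticity of 0.5 and all other values are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
log_w = rng.normal(2.5, 0.4, n)              # log wages
log_c = 0.8 * log_w + rng.normal(0, 0.3, n)  # log consumption
# Invented "partially specified" labor supply: conditional on consumption,
# log hours depend on the log wage with an (assumed) elasticity of 0.5.
log_h = 1.0 + 0.5 * log_w - 0.3 * log_c + rng.normal(0, 0.1, n)

X = np.column_stack([np.ones(n), log_w, log_c])
beta_hat, *_ = np.linalg.lstsq(X, log_h, rcond=None)
# beta_hat[1] recovers the wage elasticity, conditional on consumption
```

The regression identifies the within-period response of hours to wages without taking a stand on the rest of the intertemporal problem, which is exactly why the same estimates cannot be used to simulate counterfactuals.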
This idea builds on the concept of separability .... Gorman 1995 ... separability is a restriction on preferences. More generally, separability is a way of specifying conditions on preferences and technologies that allow us to focus on some aspect of economic behavior without having to deal explicitly with the broader complications of understanding all aspects of behavior at once.
Partially specified structural models are an important empirical tool... in a way that is robust to different specifications of the parts of the model that remain unspecified.
One of the most analyzed partially specified models is the Euler equation for consumption... It does not require explicit information on the budget constraint because the level of consumption is used as a sufficient statistic for the marginal utility of wealth.
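In its standard form, with period utility $u$, discount factor $\beta$, and return $r_{t+1}$, the consumption Euler equation can be written as:

```latex
u'(c_t) = \beta \, \mathbb{E}_t\!\left[(1 + r_{t+1})\, u'(c_{t+1})\right]
```

Current consumption $c_t$ appears on the left-hand side as the sufficient statistic for the marginal utility of wealth, so the condition can be estimated without specifying the full budget constraint.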
A treatment effect model focuses on identifying a specific causal effect of a policy or intervention while saying as little as possible about the broader theoretical context... Heckman and Robb (1985), Heckman, LaLonde, and Smith (1999),...
To get anything more than that out of the experiment, broadening its external validity, will typically require an explicit model, incorporating behavioral and often functional form assumptions... generalizing the results to a scaled-up version of the policy is impossible without a model.
Feldstein (1995)... the external validity of the exercise is limited: the overall effect of reducing the top tax rate depends on how the entire tax schedule was changed and on how people are distributed across it, which ties the result to its specific context.
Not all treatment effect models are created equal... The point of randomized experiments is that the results do not depend on strong assumptions about individual behavior, ... However, this clarity is lost with quasi-experimental approaches such as difference-in-differences, where the validity of the results typically depends on assumptions about the underlying economic behavior that are left unspecified.
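The canonical two-group, two-period difference-in-differences estimator is easy to state. The sketch below simulates an invented data-generating process in which a parallel common trend holds by construction and the true effect is 2.0; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
treated = rng.integers(0, 2, n)            # group indicator (0/1)
# Invented DGP: group fixed effect, common time trend of 1.5, true effect 2.0
base = 1.0 + 0.5 * treated
y_pre = base + rng.normal(0, 1, n)
y_post = base + 1.5 + 2.0 * treated + rng.normal(0, 1, n)

# Difference-in-differences: treated change minus control change
did = ((y_post[treated == 1].mean() - y_pre[treated == 1].mean())
       - (y_post[treated == 0].mean() - y_pre[treated == 0].mean()))
```

The estimator recovers the effect here only because the parallel-trends assumption is built into the simulated data; whether real-world behavior satisfies that restriction is exactly the unstated behavioral assumption at issue.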
Athey and Imbens (2006) ... the assumption underlying difference-in-differences is that the outcome variable in the absence of treatment is generated by a model that is (strictly) monotonic in the unobservables, and the distribution of such unobservables must be invariant over time. These assumptions restrict the class of behavioral models that are consistent with the causal interpretation of difference-in-differences.
Quasi-experimental approaches, on the other hand, while not focusing on structural parameters, rely on underlying assumptions about behavior that potentially limit the interpretability of the results as causal. The attraction of these approaches is their simplicity. However, their usefulness is limited by the lack of a framework that can justify external validity, which in general requires a more complete specification of the economic model that will allow the mechanisms to be analyzed and the conclusions to be transferred to a different set of circumstances.... structural models... provide the framework for understanding how a particular policy may translate into different environments.
in most cases, randomized experiments only offer discrete sources of variation--the policy is either on or off--which falls far short of the requirements for identification in dynamic models, which typically require continuous variation (Heckman and Navarro 2007).
The basic gain from using structural models is that they allow a better understanding of the mechanisms at work and the analysis of counterfactuals... These rich behavioral models offer deeper insight into just how an intervention can affect the final outcome. Understanding the mechanisms is central to designing policies and avoiding unintended effects, as well as to building a better understanding of whether a policy can reasonably be expected to work at all.
Combining randomized experiments and credible quasi-experimental variation with structural models seems to bring together the best of both approaches to empirical economics:.... This approach is growing in influence.
computational constraints remain and to some extent will always be with us.
structural models, and particularly dynamic stochastic models, involve nonlinear estimation and require numerical methods to solve the model and to generate moments for estimation.
Estimation of dynamic structural models involves nonlinear optimization with respect to the unknown parameters. However, the key difficulty with this estimation is that we cannot express analytically the functional relationship between the dependent variables and the unknown parameters.
constructing the likelihood function is impossible or computationally intractable for many models.
fitting moments.... The downside of this approach is that it does not use all the information in the data, and we do not have an easily implementable way of defining which moments need to be used to ensure identification. One must carefully define the key features of the data that will identify the parameters.
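A stylized sketch of moment-based estimation: simulated moments from the model are matched to data moments by minimizing a quadratic criterion. The "model" here (a lognormal wage process with one unknown parameter) and every value in it are invented for illustration; the identity weight matrix corresponds to equal weighting of the moments.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Observed" data, generated at true parameter mu = 0.5
data = rng.lognormal(mean=0.5, sigma=0.3, size=10_000)
data_moments = np.array([data.mean(), data.var()])

def simulate_moments(mu, n=10_000, seed=2):
    # Simulate the (invented) lognormal model at candidate parameter mu,
    # holding sigma fixed to keep the example one-dimensional.
    sim = np.random.default_rng(seed).lognormal(mean=mu, sigma=0.3, size=n)
    return np.array([sim.mean(), sim.var()])

def criterion(mu, W=np.eye(2)):            # identity weighting of the moments
    g = simulate_moments(mu) - data_moments
    return g @ W @ g

# Minimize the criterion by grid search (kept simple on purpose)
grid = np.linspace(0.0, 1.0, 201)
mu_hat = grid[np.argmin([criterion(m) for m in grid])]
```

Here the mean and variance happen to pin down the single parameter; in realistic applications choosing which moments carry identifying information is precisely the difficult step described above.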
with small samples, Altonji and Segal (1996) emphasize that the identity matrix (that is, equal weighting) may be the best choice because using hard-to-estimate higher-order moments of the data that constitute the weight matrix may actually introduce substantial bias.
Chernozhukov and Hong (2003) have shown how Markov Chain Monte Carlo can provide estimators that are asymptotically equivalent to minimizing the method-of-moments criterion.
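The idea can be sketched in a few lines: treat the exponentiated negative criterion as a quasi-posterior and sample from it with a random-walk Metropolis chain, using the quasi-posterior mean as the estimator. The one-moment example and all tuning values below are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(loc=2.0, scale=1.0, size=2000)
m_data = data.mean()                       # a single moment: the sample mean

def Q(theta):
    # Method-of-moments criterion, scaled by n as in Laplace-type estimators
    g = m_data - theta
    return 0.5 * len(data) * g * g

# Random-walk Metropolis targeting the quasi-posterior exp(-Q(theta))
theta, draws = 0.0, []
for t in range(20_000):
    prop = theta + rng.normal(0, 0.1)      # symmetric proposal
    if np.log(rng.uniform()) < Q(theta) - Q(prop):
        theta = prop                       # accept
    if t >= 5_000:                         # discard burn-in
        draws.append(theta)
theta_hat = np.mean(draws)                 # quasi-posterior mean
```

The chain never needs derivatives of the criterion, which is the practical appeal when the mapping from parameters to moments is only available through simulation.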
the idea of combining randomized experiments or plausible quasi-experimental variation with structural economic models can strengthen the value of empirical work substantially. Indeed, researchers should think more ambitiously and use theory to define the experiments that need to be run to test and estimate important models.
The trade-off between providing the necessary complexity to be economically meaningful and maintaining transparency is at the heart of good structural modeling.