
The Empirical Foundations of Calibration


Lars Peter Hansen and James J. Heckman
The Journal of Economic Perspectives, Vol. 10, No. 1 (Winter 1996), pp. 87-104
Published by: American Economic Association
Stable URL: http://www.jstor.org/stable/2138285

* Lars Peter Hansen is Homer J. Livingston Professor of Economics, and James Heckman is Henry Schultz Distinguished Service Professor of Economics and Director of the Center for Social Program Evaluation at the Irving B. Harris School of Public Policy Studies, all at the University of Chicago, Chicago, Illinois.

General equilibrium theory provides the intellectual underpinnings for modern macroeconomics, finance, urban economics, public finance and numerous other fields. However, as a paradigm for organizing and synthesizing economic data, it poses some arduous challenges. A widely accepted empirical counterpart to general equilibrium theory remains to be developed. There are few testable implications of general equilibrium theory for a time series of aggregate quantities and prices. There are a variety of ways to formalize this claim. Sonnenschein (1973) and Mantel (1974) show that excess aggregate demand functions can have "arbitrary shapes" as long as Walras' Law is satisfied. Similarly, Harrison and Kreps (1979) show that a competitive equilibrium can always be constructed to rationalize any arbitrage-free specification of prices. Observational equivalence results are pervasive in economics. There are two responses to this state of affairs. One can view the flexibility of the general equilibrium paradigm as its virtue. Since it is hard to reject, it provides a rich apparatus for interpreting and processing data.[1] Alternatively, general equilibrium theory can be dismissed as being empirically irrelevant because it imposes no testable restrictions on market data.

[1] Lucas and Sargent (1988) make this point in arguing that early Keynesian critiques of classical economics were misguided by their failure to recognize this flexibility.
Even if we view the "flexibility" of the general equilibrium paradigm as a virtue, identification of preferences and technology is problematic. Concern about the lack of identification of aggregate models has long troubled econometricians (for example, Liu, 1960; Sims, 1980). The tenuousness of identification of many models makes policy analysis and the evaluation of the welfare costs of programs a difficult task and leads to distrust of aggregate models. Different models that "fit the facts" may produce conflicting estimates of welfare costs and dissimilar predictions about the response of the economy to changes in resource constraints.

Numerous attempts have been made to circumvent this lack of identification, either by imposing restrictions directly on aggregate preferences and technologies, or by limiting the assumed degree of heterogeneity in preferences and technologies. For instance, the constant elasticity of substitution specification for preferences over consumption in different time periods is one of the workhorses of dynamic stochastic equilibrium theory. When asset markets are sufficiently rich, it is known from Gorman (1953) that these preferences can be aggregated into the preferences of a representative consumer (Rubinstein, 1974). Similarly, Cobb-Douglas aggregate production functions can be obtained from Leontief micro technologies aggregated by a Pareto distribution for micro productivity parameters (Houthakker, 1956). These results give examples of when simple aggregate relations can be deduced from relations underlying the micro behavior of the individual agents, but they do not justify using the constructed aggregate relations to evaluate fully the welfare costs and benefits of a policy.[2]

[2] Gorman's (1953) results provide a partial justification for using aggregate preferences to compare alternative aggregate paths of the economy. Even if one aggregate consumption-investment profile is preferred to another via this aggregate preference ordering, to convert this into a Pareto ranking for the original heterogeneous agent economy requires computing individual allocations for the path - a daunting task.

Micro data offer one potential avenue for resolving the identification problem, but there is no clear formal statement that demonstrates how access to such data fully resolves it. At an abstract level, Brown and Matzkin (1995) show how to use information on individual endowments to obtain testable implications in exchange economies. As long as individual income from endowments can be decomposed into its component sources, they show that the testability of general equilibrium theory extends to production economies. Additional restrictions and considerable price variation are needed to identify microeconomic preference relations for data sets that pass the Brown-Matzkin test.

Current econometric practice in microeconomics is still far from the nonparametric ideal envisioned by Brown and Matzkin (1995). As shown by Gorman (1953), Wilson (1968), Aigner and Simon (1970) and Simon and Aigner (1970), it is only under very special circumstances that a micro parameter such as the intertemporal elasticity of substitution or even a marginal propensity to consume out of income can be "plugged into" a representative consumer model to produce an empirically concordant aggregate model. As illustrated by Houthakker's (1956) result, microeconomic technologies can look quite different from their aggregate counterparts. In practice, microeconomic elasticities are often estimated by reverting to a partial equilibrium econometric model. Cross-market price elasticities are either assumed to be zero or are collapsed into constant terms or time dummies as a matter of convenience. General equilibrium, multimarket price variation is typically ignored in most microeconomic studies.
Battle lines are drawn over the issue of whether the microeconometric simplifications commonly employed are quantitatively important in evaluating social welfare and assessing policy reforms. Shoven and Whalley (1972, 1992) attacked Harberger's use of partial equilibrium analysis in assessing the effects of taxes on outputs and welfare. Armed with Scarf's algorithm (Scarf and Hansen, 1973), they computed fundamentally larger welfare losses from taxation using a general equilibrium framework than Harberger computed using partial equilibrium analysis. However, these and other applications of general equilibrium theory are often greeted with skepticism by applied economists who claim that the computations rest on weak empirical foundations. The results of many simulation experiments are held to be fundamentally implausible because the empirical foundations of the exercises are not secure.

Kydland and Prescott are to be praised for taking the general equilibrium analysis of Shoven and Whalley one step further by using stochastic general equilibrium as a framework for understanding macroeconomics.[3] Their vision is bold and imaginative, and their research program has produced many creative analyses. In implementing the real business cycle program, researchers deliberately choose to use simple stylized models both to minimize the number of parameters to be "calibrated" and to facilitate computations.[4] This decision forces them to embrace a rather different notion of "testability" than used by the other general equilibrium theorists, such as Sonnenschein, Mantel, Brown and Matzkin. At the same time, the real business cycle community dismisses conventional econometric testing of parametric models as being irrelevant for their purposes. While Kydland and Prescott advocate the use of "well-tested theories" in their essay, they never move beyond this slogan, and they do not justify their claim of fulfilling this criterion in their own research. "Well tested" must mean more than "familiar" or "widely accepted" or "agreed on by convention," if it is to mean anything. Their suggestion that we "calibrate the model" is similarly vague.

On one hand, it is hard to fault their advocacy of tightly parameterized models, because such models are convenient to analyze and easy to understand. Aggregate growth coupled with uncertainty makes nonparametric identification of preferences and technology extremely difficult, if not impossible. Separability and homogeneity restrictions on preferences and technologies have considerable appeal as identifying assumptions. On the other hand, Kydland and Prescott never provide a coherent framework for extracting parameters from microeconomic data. The same charge of having a weak empirical foundation that plagued the application of deterministic general equilibrium theory can be lodged against the real business cycle research program. Such models are often elegant, and the discussions produced from using them are frequently stimulating and provocative, but their empirical foundations are not secure. What credibility should we attach to numbers produced from their "computational experiments," and why should we use their "calibrated models" as a basis for serious quantitative policy evaluation? The essay by Kydland and Prescott begs these fundamental questions.

[3] The earlier work by Lucas and Prescott (1971) took an initial step in this direction by providing a dynamic stochastic equilibrium framework for evaluating empirical models of investment.
[4] The term "real business cycle" originates from an emphasis on technology shocks as a source of business cycle fluctuations. Thus, real, as opposed to nominal, variables drive the process. In some of the recent work, both real and nominal shocks are used in the models.

The remainder of our essay is organized as follows. We begin by discussing simulation as a method for understanding models. This method is old, and the problems in using it recur in current applications. We then argue that model calibration and verification can be fruitfully posed as econometric estimation and testing problems. In particular, we delineate the gains from using an explicit econometric framework. Following this discussion, we investigate current calibration practice with an eye toward suggesting improvements that will make the outputs of computational experiments more credible. The deliberately limited use of available information in such computational experiments runs the danger of making many economic models with very different welfare implications compatible with the evidence. We suggest that Kydland and Prescott's account of the availability and value of micro estimates for macro models is dramatically overstated. There is no filing cabinet full of robust micro estimates ready to use in calibrating dynamic stochastic general equilibrium models. We outline an alternative paradigm that, while continuing to stress the synergy between microeconometrics and macro simulation, will provide more credible inputs into the computational experiments and more accurate assessments of the quality of the outputs.

Simulation as a Method for Understanding Models

In a simple linear regression model, the effect of an independent variable on the dependent variable is measured by its associated regression coefficient. In the dynamic nonlinear models used in the Kydland-Prescott real business cycle research program, this is no longer true. The dynamic nature of such models means that the dependent variable is generated in part from its own past values. Characterizing the dynamic mechanisms by which exogenous impulses are transmitted into endogenous time series is important to understanding how these models induce fluctuations in economic aggregates. Although there is a reliance on linearization techniques in much of the current literature, for large impulses or shocks, the nonlinear nature of such models is potentially important. To capture the richness of a model, the analyst must examine various complicated features of it. One way to do this is to simulate the model at a variety of levels of the forcing processes, impulses and parameters.
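To give a concrete sense of this kind of exercise, the sketch below is our own stylized illustration, not a model taken from Kydland and Prescott: a textbook stochastic growth model with log utility, Cobb-Douglas production and full depreciation, which has the exact decision rule k' = alpha*beta*z*k^alpha. The function name, the parameter values and the grid of settings swept over are all illustrative assumptions.

```python
# Illustrative only: a stylized stochastic growth model (log utility,
# Cobb-Douglas production, full depreciation) with the exact decision rule
# k' = alpha * beta * z * k**alpha. Parameter values are assumptions chosen
# for demonstration, not calibrated estimates.
import numpy as np

def simulate(alpha, beta, rho, sigma, periods=2000, seed=0):
    """Simulate log output and return its standard deviation and
    first-order autocorrelation."""
    rng = np.random.default_rng(seed)
    log_z = 0.0
    k = (alpha * beta) ** (1.0 / (1.0 - alpha))   # deterministic steady-state capital
    log_y = np.empty(periods)
    for t in range(periods):
        log_z = rho * log_z + sigma * rng.standard_normal()   # AR(1) technology shock
        y = np.exp(log_z) * k ** alpha                        # production
        log_y[t] = np.log(y)
        k = alpha * beta * y                                  # closed-form policy: saving rate alpha*beta
    dev = log_y - log_y.mean()
    sd = dev.std()
    ac1 = np.corrcoef(dev[1:], dev[:-1])[0, 1]
    return sd, ac1

# Sweep over the shock persistence and the capital share to see how the
# model's implied volatility and persistence of output respond.
for rho in (0.5, 0.9, 0.95):
    for alpha in (0.3, 0.4):
        sd, ac1 = simulate(alpha=alpha, beta=0.96, rho=rho, sigma=0.01)
        print(f"rho={rho:4.2f} alpha={alpha:3.1f}  sd(log y)={sd:.4f}  AR(1)={ac1:.3f}")
```

Even in this deliberately trivial example, the persistence of the technology shock largely determines the persistence of simulated output; exposing such dependencies on parameters and forcing processes is precisely what simulation is meant to do.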
The idea of simulating a complex model to understand its properties is not a new principle in macroeconomics. Tinbergen's (1939) simulation of his League of Nations model and the influential simulations of Klein and Goldberger (1955) and Goldberger (1959) are but three of a legion of simulation exercises performed by previous generations of economists.[5] Fair (1994) and Taylor (1993) are recent examples of important studies that rely on simulation to elucidate the properties of estimated models. However, the quality of any simulation is only as good as the input on which it is based. The controversial part of the real business cycle simulation program is the method by which the input parameters are chosen.

[5] Simulation is also widely used in physical science. For example, it is customary in the studies of fractal dynamics to simulate models in order to gain understanding of the properties of models with various parameter configurations (Peitgen and Richter, 1986).

Pioneers of simulation and of economic dynamics like Tinbergen (1939) and Frisch (1933) often guessed at the parameters they used in their models, either because the data needed to identify the parameters were not available, or because the econometric methods were not yet developed to fully exploit the available data (Morgan, 1990). At issue is whether the state of the art for picking the parameters to be used for simulations has improved since their time.

Calibration versus Estimation

A novel feature of the real business cycle research program is its endorsement of "calibration" as an alternative to "estimation." However, the distinction drawn between calibrating and estimating the parameters of a model is artificial at best.[6] Moreover, the justification for what is called "calibration" is vague and confusing. In a profession that is already too segmented, the construction of such artificial distinctions is counterproductive. It can only close off a potentially valuable dialogue between real business cycle research and other research in modern econometrics.

[6] As best we can tell from their essay, Kydland and Prescott want to preserve the term "estimation" to apply to the outputs of their computational experiments.

Since the Kydland-Prescott essay is vague about the operating principles of calibration, we turn elsewhere for specificity. For instance, in a recent description of the use of numerical models in the earth sciences, Oreskes, Shrader-Frechette and Belitz (1994, pp. 642, 643) describe calibration as follows:

    In earth sciences, the modeler is commonly faced with the inverse problem: The distribution of the dependent variable (for example, the hydraulic head) is the most well known aspect of the system; the distribution of the independent variable is the least well known. The process of tuning the model - that is, the manipulation of the independent variables to obtain a match between the observed and simulated distribution or distributions of a dependent variable or variables - is known as calibration.

    Some hydrologists have suggested a two-step calibration scheme in which the available dependent data set is divided into two parts. In the first step, the independent parameters of the model are adjusted to reproduce the first part of the data. Then in the second step the model is run and the results are compared with the second part of the data. In this scheme, the first step is labeled "calibration" and the second step is labeled "verification."

This appears to be an accurate description of the general features of the "calibration" method advocated by Kydland and Prescott. For them, data for the first step come from micro observations and from secular growth observations (see also Prescott, 1986a). Correlations over time and across variables are to be used in the second step of verification. Econometricians refer to the first stage as estimation and the second stage as testing. As a consequence, the two-stage procedure described by Oreskes, Shrader-Frechette and Belitz (1994) has a straightforward econometric counterpart.[7]

[7] See Christiano and Eichenbaum (1992) for one possible econometric implementation of this two-step approach. They use a generalized method of moments formulation (for ...
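As a concrete illustration of the two-step scheme just quoted, the sketch below is our own toy example rather than a procedure from Kydland and Prescott or from Oreskes, Shrader-Frechette and Belitz. It splits a data set in two, tunes a single parameter so the model reproduces a first-stage statistic, and then "verifies" by comparing a statistic the tuning did not target against the held-out half. The AR(1) "economy," the function names and all numerical settings are assumptions made for the demonstration.

```python
# Illustrative two-step "calibration then verification" on a toy model.
# The "economy" is an AR(1): x_t = phi * x_{t-1} + eps_t, eps_t ~ N(0, sigma^2).
# Step 1 (calibration): choose phi to match the lag-1 autocorrelation of the
#   first half of the data (for an AR(1) this autocorrelation equals phi).
# Step 2 (verification): simulate the calibrated model and compare its
#   variance with the variance of the held-out second half.
import numpy as np

rng = np.random.default_rng(1)

def simulate_ar1(phi, sigma, periods, rng):
    x = np.zeros(periods)
    for t in range(1, periods):
        x[t] = phi * x[t - 1] + sigma * rng.standard_normal()
    return x

# Pretend these are the observed data (generated here so the script is self-contained).
data = simulate_ar1(phi=0.8, sigma=1.0, periods=400, rng=rng)
first, second = data[:200], data[200:]

# Step 1: calibration against the first-stage target.
phi_hat = np.corrcoef(first[1:], first[:-1])[0, 1]
sigma_hat = np.std(first[1:] - phi_hat * first[:-1])

# Step 2: verification against a statistic the calibration did not target.
simulated = simulate_ar1(phi_hat, sigma_hat, periods=20_000, rng=rng)
print(f"calibrated phi            : {phi_hat:.3f}")
print(f"model-implied variance    : {simulated.var():.3f}")
print(f"held-out sample variance  : {second.var():.3f}")
```

An econometrician would describe step 1 as method-of-moments estimation and step 2 as an informal specification test, which is exactly the correspondence drawn in the text.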
From this perspective, the Kydland-Prescott objection to mainstream econometrics is simply a complaint about the use of certain loss functions for describing the fit of a model to the data or for producing parameter estimates. Their objection does not rule out econometric estimation based on other loss functions.

Econometric estimation metrics like least squares, weighted least squares or more general method-of-moments metrics are traditional measures of fit. Differences among these methods lie in how they weight various features of the data; for example, one method might give a great deal of weight to distant outliers or to certain variables, causing them to pull estimated trend lines in their direction; another might give less weight to such outliers or variables. Each method of estimation can be justified by describing the particular loss function that summarizes the weights put on deviations of a model's predictions from the data.

There is nothing sacred about the traditional loss functions in econometrics associated with standard methods, like ordinary least squares. Although traditional approaches do have rigorous justifications, a variety of alternative loss functions could be explored that weight particular features of a model more than others. For example, one could estimate with a loss function that rewarded models that are more successful in predicting turning points. Alternatively, particular time series frequencies could be deemphasized in adopting an estimation criterion because misspecification of a model is likely to contaminate some frequencies more than others (Dunsmuir and Hannan, 1978; Hansen and Sargent, 1993; Sims, 1993).
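To make concrete how the choice of loss function shapes the resulting estimate, the sketch below is again a stylized example of ours rather than anything in the essay. It fits a deliberately misspecified AR(1) to data generated by an AR(2), once by ordinary least squares and once by a criterion that matches sample autocorrelations at several lags while putting more weight on the longer lags (loosely, the lower-frequency features). The data-generating process, lag choices and weights are all assumptions for illustration; the two loss functions deliver noticeably different parameter values.

```python
# Illustrative comparison of loss functions. The data come from an AR(2),
# but a (misspecified) AR(1) is fitted. Loss A is ordinary least squares on
# the one-step-ahead prediction error. Loss B is a weighted match of sample
# autocorrelations at lags 1-8, with more weight on longer lags.
import numpy as np

rng = np.random.default_rng(2)

# Generate data from an AR(2): x_t = 1.2 x_{t-1} - 0.4 x_{t-2} + eps_t.
n = 5000
x = np.zeros(n)
for t in range(2, n):
    x[t] = 1.2 * x[t - 1] - 0.4 * x[t - 2] + rng.standard_normal()

def sample_autocorr(series, lag):
    dev = series - series.mean()
    return np.dot(dev[lag:], dev[:-lag]) / np.dot(dev, dev)

# Loss A: ordinary least squares estimate of the AR(1) coefficient.
phi_ols = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])

# Loss B: weighted autocorrelation matching. For an AR(1), the model-implied
# autocorrelation at lag k is phi**k; longer lags get more weight.
lags = np.arange(1, 9)
weights = lags / lags.sum()
targets = np.array([sample_autocorr(x, k) for k in lags])

def loss_b(phi):
    return np.sum(weights * (phi ** lags - targets) ** 2)

grid = np.linspace(0.01, 0.99, 981)          # crude grid search keeps the example dependency-free
phi_match = grid[np.argmin([loss_b(p) for p in grid])]

print(f"AR(1) coefficient under least squares        : {phi_ols:.3f}")
print(f"AR(1) coefficient under weighted moment match: {phi_match:.3f}")
```

Neither estimate is "the" right one; which is preferable depends on which features of the data the analyst cares about, which is the point of the paragraph above.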