Articles for ECO
https://www.nowpublishers.com/feed/ECO

http://www.nowpublishers.com/article/Details/ECO-048
Non-Experimental Data, Hypothesis Testing, and the Likelihood Principle: A Social Science Perspective
<h3>Abstract</h3>We argue that frequentist hypothesis testing – the dominant statistical evaluation paradigm in empirical research – is fundamentally unsuited for the analysis of the non-experimental data prevalent in economics and other social sciences. Frequentist tests combine incompatible repeated sampling frameworks that do not obey the Likelihood Principle (LP). For probabilistic inference, methods that are guided by the LP, that do not rely on repeated sampling, and that focus on model comparison instead of testing (e.g., subjectivist Bayesian methods) are better suited for passively observed social science data and are better able to accommodate the huge model uncertainty and highly approximative nature of structural models in the social sciences. In addition to formal probabilistic inference, informal model evaluation along relevant substantive and practical dimensions should play a leading role. We sketch the ideas of an alternative paradigm containing these elements.<h3>Suggested Citation</h3>Tom Engsted and Jesper W. Schneider (2024), "Non-Experimental Data, Hypothesis Testing, and the Likelihood Principle: A Social Science Perspective", Foundations and Trends® in Econometrics: Vol. 13: No. 1, pp 1-66. http://dx.doi.org/10.1561/0800000048
Mon, 12 Feb 2024 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-047
A Simple Method for Predicting Covariance Matrices of Financial Returns
<h3>Abstract</h3>We consider the well-studied problem of predicting the time-varying
covariance matrix of a vector of financial returns.
Popular methods range from simple predictors like rolling
window or exponentially weighted moving average (EWMA)
to more sophisticated predictors such as generalized autoregressive
conditional heteroscedastic (GARCH) type methods.
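For readers who want the flavor of the simple end of this spectrum, the EWMA predictor just mentioned admits a few-line implementation. This sketch is ours, not code from the article, and the decay parameter `beta` is an assumed illustrative value:

```python
# Minimal EWMA covariance predictor sketch (illustrative only).
def ewma_covariance(returns, beta=0.94):
    """Recursively update Sigma_t = beta * Sigma_{t-1} + (1 - beta) * r_t r_t'.

    `returns` is a list of return vectors (lists of equal length n);
    the forecast is initialized from the first return's outer product.
    """
    n = len(returns[0])
    sigma = [[returns[0][i] * returns[0][j] for j in range(n)] for i in range(n)]
    for r in returns[1:]:
        # blend the previous forecast with the outer product of the new return
        sigma = [[beta * sigma[i][j] + (1 - beta) * r[i] * r[j]
                  for j in range(n)] for i in range(n)]
    return sigma
```

Each update blends the previous forecast with the outer product of the latest return vector; a larger `beta` means slower adaptation to changing market conditions.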
Building on a specific covariance estimator suggested by Engle
in 2002, we propose a relatively simple extension that
requires little or no tuning or fitting, is interpretable, and
produces results at least as good as MGARCH, a popular
extension of GARCH that handles multiple assets. To evaluate
predictors we introduce a novel approach, evaluating
the regret of the log-likelihood over a time period such as
a quarter. This metric allows us to see not only how well
a covariance predictor does overall, but also how quickly it
reacts to changes in market conditions. Our simple predictor
outperforms MGARCH in terms of regret. We also test
covariance predictors on downstream applications such as
portfolio optimization methods that depend on the covariance
matrix. For these applications our simple covariance
predictor and MGARCH perform similarly.<h3>Suggested Citation</h3>Kasper Johansson, Mehmet G. Ogut, Markus Pelger, Thomas Schmelzer and Stephen Boyd (2023), "A Simple Method for Predicting Covariance Matrices of Financial Returns", Foundations and Trends® in Econometrics: Vol. 12: No. 4, pp 324-407. http://dx.doi.org/10.1561/0800000047
Tue, 21 Nov 2023 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-046
A Complete Framework for Model-Free Difference-in-Differences Estimation
<h3>Abstract</h3>We propose a complete framework for data-driven
difference-in-differences analysis with covariates,
in particular nonparametric
estimation and testing. We begin by simultaneously
choosing confounders and a scale of the outcome in line with the identification
conditions. We first estimate heterogeneous treatment
effects stratified along the covariates, then the average
effect(s) for the treated. We provide the asymptotic and
finite sample behavior of our estimators and tests, bootstrap
procedures for their standard errors and p-values, and an
automatic bandwidth choice. The pertinence of our methods
is shown with a study of the impact of the Deferred Action
for Childhood Arrivals program on educational outcomes
for non-citizen immigrants in the US.<h3>Suggested Citation</h3>Daniel J. Henderson and Stefan Sperlich (2023), "A Complete Framework for Model-Free Difference-in-Differences Estimation", Foundations and Trends® in Econometrics: Vol. 12: No. 3, pp 232-323. http://dx.doi.org/10.1561/0800000046
Thu, 12 Oct 2023 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-039
Factor Extraction in Dynamic Factor Models: Kalman Filter Versus Principal Components
<h3>Abstract</h3>This survey looks at the literature on factor extraction in the context of Dynamic Factor Models (DFMs) fitted to multivariate systems of economic and financial variables. Many of the most popular factor extraction procedures used in empirical applications are based on either Principal Components (PC) or Kalman filter and smoothing (KFS) techniques. First, we show that the KFS factors are a weighted average of the contemporaneous information (PC factors) and the past information, and that the weights of the latter are negligible unless the factors are close to the non-stationarity boundary and/or their loadings are small relative to the variance-covariance matrix of the idiosyncratic components. Note that the weight of the past can be large either because the cross-sectional dimension is small or because the magnitude of the factor loadings is small. Consequently, we are able to explain why, in practice, there is general consensus that PC and KFS factors are rather similar when extracted from stationary systems of large dimensions. Second, we survey how PC and KFS deal with several issues often faced when extracting factors from real data systems. In particular, we describe PC and KFS procedures to deal with mixed frequencies and missing observations, structural breaks, non-stationarity, Markov-switching parameters, and multi-level factor structures.
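As a concrete anchor for the PC side of this comparison (our illustrative sketch, not the survey's code), the first PC factor can be computed by power iteration on the sample covariance matrix of the data panel:

```python
def first_pc(cov, iters=200):
    """Leading eigenvector of a symmetric PSD matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def pc_factor(panel):
    """Extract the first PC factor f_t = v' x_t from a T x N data panel."""
    T, n = len(panel), len(panel[0])
    # demean each series, then form the sample covariance matrix
    means = [sum(row[i] for row in panel) / T for i in range(n)]
    x = [[row[i] - means[i] for i in range(n)] for row in panel]
    cov = [[sum(x[t][i] * x[t][j] for t in range(T)) / T
            for j in range(n)] for i in range(n)]
    v = first_pc(cov)
    return [sum(v[i] * row[i] for i in range(n)) for row in x]
```

Unlike this contemporaneous projection, the KFS factors discussed above also carry weighted past information through the filter recursions.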
In general, we see that KFS is very flexible in dealing with these issues.<h3>Suggested Citation</h3>Esther Ruiz and Pilar Poncela (2022), "Factor Extraction in Dynamic Factor Models: Kalman Filter Versus Principal Components", Foundations and Trends® in Econometrics: Vol. 12: No. 2, pp 121-231. http://dx.doi.org/10.1561/0800000039
Wed, 30 Nov 2022 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-042
Quantile Methods for Stochastic Frontier Analysis
<h3>Abstract</h3>Quantile regression has become one of the standard tools of econometrics. We examine its compatibility with the special goals of stochastic frontier analysis. We document several conflicts between quantile regression and stochastic frontier analysis. From there we review what has been done up to now, propose ways to overcome the conflicts that exist, and develop new tools for applied efficiency analysis using quantile methods in the context of stochastic frontier models. The work includes an empirical illustration to reify the issues and methods discussed, and catalogs the many open issues and topics for future research.<h3>Suggested Citation</h3>Alecos Papadopoulos and Christopher F. Parmeter (2022), "Quantile Methods for Stochastic Frontier Analysis", Foundations and Trends® in Econometrics: Vol. 12: No. 1, pp 1-120. http://dx.doi.org/10.1561/0800000042
Mon, 07 Nov 2022 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-041
Bayesian Approaches to Shrinkage and Sparse Estimation
<h3>Abstract</h3>In all areas of human knowledge, datasets are increasing in both size and complexity, creating the need for richer statistical models. This trend also holds for economic data, where high-dimensional and nonlinear/nonparametric inference is the norm in several fields of applied econometric work. The purpose of this monograph is to introduce the reader to the world of Bayesian model determination by surveying modern shrinkage and variable selection algorithms and methodologies.
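A minimal illustration of the shrinkage idea surveyed here (our sketch; the parameter names `sigma2` and `tau2` are assumed notation, not the monograph's): for a single regression coefficient, a Gaussian prior centered at zero yields a posterior mean equal to the ridge estimator with penalty sigma2/tau2.

```python
def ridge_posterior_mean(x, y, sigma2=1.0, tau2=1.0):
    """Posterior mean of beta in y = x*beta + e, e ~ N(0, sigma2),
    under the prior beta ~ N(0, tau2).

    This equals the ridge estimator with penalty lambda = sigma2 / tau2:
    beta_hat = (x'y) / (x'x + lambda).
    """
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    lam = sigma2 / tau2
    return sxy / (sxx + lam)
```

As `tau2` grows the prior flattens and the estimate approaches OLS; a small `tau2` shrinks the coefficient toward zero, the basic mechanism that the hierarchical priors in this monograph generalize.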
Bayesian inference is a natural probabilistic framework for quantifying uncertainty and learning about model parameters, and this feature is particularly important for inference in modern models of high dimensions and increased complexity.<p>We begin with a linear regression setting in order to introduce various classes of priors that lead to shrinkage/sparse estimators of comparable value to popular penalized likelihood estimators (e.g., ridge, LASSO). We explore various methods of exact and approximate inference, and discuss their pros and cons. Finally, we explore how priors developed for the simple regression setting can be extended in a straightforward way to various classes of interesting econometric models. In particular, the following case studies are considered, demonstrating the application of Bayesian shrinkage and variable selection strategies in popular econometric contexts: (i) vector autoregressive models; (ii) factor models; (iii) time-varying parameter regressions; (iv) confounder selection in treatment effects models; and (v) quantile regression models. A MATLAB package and an accompanying technical manual allow the reader to replicate many of the algorithms described in this monograph.</p><h3>Suggested Citation</h3>Dimitris Korobilis and Kenichi Shimizu (2022), "Bayesian Approaches to Shrinkage and Sparse Estimation", Foundations and Trends® in Econometrics: Vol. 11: No. 4, pp 230-354. http://dx.doi.org/10.1561/0800000041
Wed, 29 Jun 2022 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-034
Performance Analysis: Economic Foundations and Trends
<h3>Abstract</h3>The goal of this monograph is to very concisely outline the economic theory foundations and trends of the field of Efficiency and Productivity Analysis, also sometimes referred to as Performance Analysis.
I start with the profit maximization paradigm of mainstream economics, use it to derive a general profit efficiency measure, and then present its special cases: revenue maximization and revenue efficiency, cost minimization and cost efficiency. I then consider various types of technical and allocative efficiencies (directional and Shephard’s distance functions and related Debreu–Farrell measures as well as non-directional measures of technical efficiency), showing how they fit into or decompose the profit maximization paradigm. I then cast the efficiency and productivity concepts in a dynamic perspective that is frequently used to analyze the productivity changes of economic systems (firms, hospitals, banks, countries, etc.) over time. I conclude this monograph with an overview of major results on aggregation in productivity and efficiency analysis, where the aggregate productivity and efficiency measures are theoretically connected to their individual analogues.<h3>Suggested Citation</h3>Valentin Zelenyuk (2021), "Performance Analysis: Economic Foundations and Trends", Foundations and Trends® in Econometrics: Vol. 11: No. 3, pp 153-229. http://dx.doi.org/10.1561/0800000034
Mon, 20 Sep 2021 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-035
Experimetrics: A Survey
<h3>Abstract</h3>This monograph aims to survey a range of econometric techniques that are currently being used by experimental economists. It is likely to be of interest both to experimental economists who are keen to expand their skill sets, and to the wider econometrics community who may be interested to learn about the sort of econometric techniques currently being used by experimentalists. Techniques covered range from the simple to the fairly advanced. The monograph starts with an overview of treatment testing. A range of treatment tests will be illustrated using the example of a dictator-game giving experiment in which there is a communication treatment.
Standard parametric and non-parametric treatment tests, tests comparing entire distributions, and bootstrap tests will all be covered. It will then be demonstrated that treatment tests can be performed in a regression framework, and the important concept of clustering will be explained. The multilevel modelling framework will also be covered, as a means of dealing with more than one level of clustering. Power analysis will be covered from both theoretical and practical perspectives, as a means of determining the sample size required to attain a given power, and also as a means of computing ex-post power for a reported test. We then progress to a discussion of different data types arising in Experimental Economics (binary, ordinal, interval, etc.), and how to deal with them. We then consider the estimation of fully structural models, with particular attention paid to the estimation of social preference parameters from dictator game data, and risky choice models with between-subject heterogeneity in risk aversion. The method of maximum simulated likelihood (MSL) is promoted as the most suitable method for estimating models with continuous heterogeneity. We then consider finite mixture models as a way of capturing discrete heterogeneity; that is, when the population of subjects divides into a small number of distinct types. The application used as an example will be the level-<em>k</em> model, in which subject types are defined by their levels of reasoning. We then consider other models of behaviour in games, including the Quantal Response Equilibrium (QRE) Model. The final area covered is models of learning in games.<h3>Suggested Citation</h3>Peter G. Moffatt (2021), "Experimetrics: A Survey", Foundations and Trends® in Econometrics: Vol. 11: No. 1–2, pp 1-152.
http://dx.doi.org/10.1561/0800000035
Mon, 15 Feb 2021 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-037
Climate Econometrics: An Overview
<h3>Abstract</h3>Climate econometrics is a new sub-discipline that has grown rapidly over the last few years. As greenhouse gas emissions like carbon dioxide (CO<sub>2</sub>), nitrous oxide (N<sub>2</sub>O) and methane (CH<sub>4</sub>) are a major cause of climate change, and are generated by human activity, it is not surprising that the tool set designed to empirically investigate economic outcomes should be applicable to studying many empirical aspects of climate change.<p>Economic and climate time series exhibit many commonalities. Both kinds of data are subject to non-stationarities in the form of evolving stochastic trends and sudden distributional shifts. Consequently, the well-developed machinery for modeling economic time series can be fruitfully applied to climate data. In both disciplines, we have imperfect and incomplete knowledge of the processes actually generating the data. As we don’t know the data generating process (DGP), we must search for what we hope is a close approximation to it. The data modeling approach adopted at Climate Econometrics (<a href="http://www.climateeconometrics.org/">http://www.climateeconometrics.org/</a>) is based on a model selection methodology that has excellent properties for locating an unknown DGP nested within a large set of possible explanations, including dynamics, outliers, shifts, and non-linearities. The software we use is a variant of machine learning which implements multi-path block searches commencing from very general specifications to discover a well-specified and undominated model of the processes under analysis.
To do so requires implementing indicator saturation estimators designed to match the problem faced, such as impulse indicators for outliers, step indicators for location shifts, trend indicators for trend breaks, multiplicative indicators for parameter changes, and indicators specifically designed for more complex phenomena that have a common reaction ‘shape’ like the impacts of volcanic eruptions on temperature reconstructions. We also use combinations of these, inevitably entailing settings with more candidate variables than observations.</p><p>Having described these econometric tools, we take a brief excursion into climate science to provide the background to the later applications. By noting the Earth’s available atmosphere and water resources, we establish that humanity really can alter the climate, and is doing so in myriad ways. Then we relate past climate changes to the ‘great extinctions’ seen in the geological record. Following the Industrial Revolution in the mid-18th century, building on earlier advances in scientific, technological and medical knowledge, real income levels per capita have risen dramatically globally, many killer diseases have been tamed, and human longevity has approximately doubled. However, such beneficial developments have led to a global explosion in anthropogenic emissions of greenhouse gases. These are also subject to many relatively sudden shifts from major wars, crises, resource discoveries, technology and policy interventions. Consequently, stochastic trends, large shifts and numerous outliers must all be handled in practice to develop viable empirical models of climate phenomena. Additional advantages of our econometric methods for doing so are detecting the impacts of important policy interventions as well as improved forecasts. The econometric approach we outline can handle all these jointly, which is essential to accurately characterize non-stationary observational data. 
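As a toy illustration of the step-indicator idea described above (our sketch, not the authors' software, which also performs the multi-path model selection over these candidates):

```python
def step_indicators(T):
    """Step-indicator saturation candidates for a sample of length T.

    One step dummy per possible break date t0: S_t0[t] = 1 if t >= t0,
    else 0.  Together with impulse, trend, and multiplicative indicators,
    this quickly yields more candidate variables than observations.
    """
    return {t0: [1 if t >= t0 else 0 for t in range(T)] for t0 in range(1, T)}
```

Selecting among these dummies (rather than including them all) is what detects the location shifts and policy interventions discussed in the text.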
Few approaches in either climate or economic modeling consider all such effects jointly, but a failure to do so leads to mis-specified models and hence incorrect theory evaluation and policy analyses. We discuss the hazards of modeling wide-sense non-stationary data (namely data not just with stochastic trends but also distributional shifts), which also serves to describe our notation.</p><p>The application of the methods is illustrated by two detailed modeling exercises. The first investigates the causal role of CO<sub>2</sub> in Ice Ages, where a simultaneous-equations system is developed to characterize land ice volume, temperature and atmospheric CO<sub>2</sub> levels as non-linear functions of measures of the Earth’s orbital path round the Sun. The second analyzes the United Kingdom’s highly non-stationary annual CO<sub>2</sub> emissions over the last 150 years, walking through all the key modeling stages. As the first country into the Industrial Revolution, the UK is one of the first countries out, with per capita annual CO<sub>2</sub> emissions now below the levels of 1860, when our data series begin, a reduction achieved with little aggregate cost. However, very large decreases in all greenhouse gas emissions are still required to meet the UK’s 2050 target, set by its Climate Change Act in 2008 as an 80% reduction from 1990 levels and since tightened to a net zero target by that date, as required globally to stabilize temperatures. The rapidly decreasing costs of renewable energy technologies offer hope of further rapid emission reductions in that area, illustrated by a dynamic scenario analysis.</p><h3>Suggested Citation</h3>Jennifer L. Castle and David F. Hendry (2020), "Climate Econometrics: An Overview", Foundations and Trends® in Econometrics: Vol. 10: No. 3-4, pp 145-322.
http://dx.doi.org/10.1561/0800000037
Tue, 18 Aug 2020 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-036
Foundations of Stated Preference Elicitation: Consumer Behavior and Choice-based Conjoint Analysis
<h3>Abstract</h3>Stated preference elicitation methods collect data on consumers by "just asking" about tastes, perceptions, valuations, attitudes, motivations, life satisfactions, and/or intended choices. Choice-Based Conjoint (CBC) analysis asks subjects to make choices from hypothetical menus in experiments that are designed to mimic market experiences. Stated preference methods are controversial in economics, particularly for valuation of non-market goods, but CBC analysis is accepted and used widely in marketing and policy analysis. The promise of stated preference experiments is that they can provide deeper and broader data on the structure of consumer preferences than is obtainable from revealed market observations, with experimental control of the choice environment that circumvents the feedback found in real market equilibria. The risk is that they give pictures of consumers that do not predict real market behavior. It is important for both economists and non-economists to learn about the performance of stated preference elicitations and the conditions under which they can contribute to understanding consumer behavior and forecasting market demand. This monograph re-examines the discrete choice methods and stated preference elicitation procedures that are commonly used in CBC, and provides a guide to techniques for CBC data collection, model specification, estimation, and policy analysis.
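The discrete choice models used to analyze CBC menu choices are typically of the multinomial logit family; as a minimal illustration (our sketch; the utilities in the example are assumed values, not material from the monograph):

```python
import math

def logit_choice_probs(utilities):
    """Multinomial logit choice probabilities: P(i) = exp(V_i) / sum_j exp(V_j)."""
    m = max(utilities)  # subtract the max for numerical stability
    expv = [math.exp(v - m) for v in utilities]
    s = sum(expv)
    return [e / s for e in expv]
```

Estimation then chooses the utility parameters that make the observed menu choices most probable; the monograph's concern is whether utilities recovered from hypothetical menus transfer to real markets.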
The aim is to clarify the domain of applicability and delineate the circumstances under which stated preference elicitations can provide reliable information on preferences.<h3>Suggested Citation</h3>Moshe Ben-Akiva, Daniel McFadden and Kenneth Train (2019), "Foundations of Stated Preference Elicitation: Consumer Behavior and Choice-based Conjoint Analysis", Foundations and Trends® in Econometrics: Vol. 10: No. 1-2, pp 1-144. http://dx.doi.org/10.1561/0800000036
Mon, 28 Jan 2019 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-031
Structural Econometrics of Auctions: A Review
<h3>Abstract</h3>We review the literature concerned with the structural econometrics
of observational data from auctions, discussing the problems that have
been solved and highlighting those that remain unsolved as well as suggesting
areas for future research. Where appropriate, we discuss different
modeling choices as well as the fragility or robustness of different
methods.<h3>Suggested Citation</h3>Matthew L. Gentry, Timothy P. Hubbard, Denis Nekipelov and Harry J. Paarsch (2018), "Structural Econometrics of Auctions: A Review", Foundations and Trends® in Econometrics: Vol. 9: No. 2-4, pp 79-302. http://dx.doi.org/10.1561/0800000031
Thu, 26 Apr 2018 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-033
Data Visualization and Health Econometrics
<h3>Abstract</h3>This article reviews econometric methods for health outcomes and health care costs that are used for prediction and forecasting, risk adjustment, resource allocation, technology assessment, and policy evaluation. It focuses on the principles and practical application of data visualization and statistical graphics and how these can enhance applied econometric analysis. Particular attention is devoted to methods for skewed and heavy-tailed distributions. Practical examples show how these methods can be applied to data on individual healthcare costs and health outcomes. Topics include: an introduction to data visualization; data description and regression; generalized linear models; flexible parametric models; semiparametric models; and an application to biomarkers.<h3>Suggested Citation</h3>Andrew M. Jones (2017), "Data Visualization and Health Econometrics", Foundations and Trends® in Econometrics: Vol. 9: No. 1, pp 1-78. http://dx.doi.org/10.1561/0800000033
Thu, 31 Aug 2017 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-030
Spatial Econometrics: A Broad View
<h3>Abstract</h3>Spatial econometrics can be defined in a narrow and in a broader sense. In a narrow sense it refers to methods and techniques for the analysis of regression models using data observed within discrete portions of space such as countries or regions.
In a broader sense it includes the models and theoretical instruments of spatial statistics and spatial data analysis used to analyze various economic effects such as externalities, interactions, spatial concentration and many others. Indeed, the reference methodology for spatial econometrics lies in the advances in spatial statistics, where it is customary to distinguish between different typologies of data that can be encountered in empirical cases and that require different modelling strategies. A first distinction is between continuous spatial data and data observed on a discrete space. Continuous spatial data are very common in many scientific disciplines (such as physics and environmental sciences), but are still not currently considered in the spatial econometrics literature. Discrete spatial data can take the form of points, lines and polygons. Point data refer to the position of the single economic agent observed at an individual level. Lines in space take the form of interactions between two spatial locations such as flows of goods, individuals and information. Finally, data observed within polygons can take the form of predefined irregular portions of space, usually administrative partitions such as countries, regions or counties within one country.<p>In this monograph we will adopt a broader view of spatial econometrics and we will introduce some of the basic concepts and the fundamental distinctions needed to properly analyze economic datasets observed as points, regions or lines over space. It cannot be overlooked that the mainstream spatial econometric literature has recently been the subject of harsh and radical criticism in a number of papers. The purpose of this monograph is to show that many of these criticisms are in fact well grounded, but that they lose relevance if we abandon the narrow paradigm of a discipline centered on the regression analysis of regional data, and we embrace the broader view adopted here.
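Most of the regional models discussed in this monograph are built on a spatial weights matrix W; as a minimal illustration (our sketch, with a toy neighbor list), a row-normalized contiguity matrix can be constructed as follows:

```python
def row_normalized_weights(neighbors):
    """Build a row-normalized contiguity matrix W from a neighbor list.

    `neighbors` maps each region to the list of its neighbors; the
    weight is w_ij = 1/|N(i)| if j neighbors i, else 0, so each row
    with at least one neighbor sums to one.
    """
    regions = sorted(neighbors)
    idx = {r: k for k, r in enumerate(regions)}
    n = len(regions)
    W = [[0.0] * n for _ in range(n)]
    for r, ns in neighbors.items():
        for j in ns:
            W[idx[r]][idx[j]] = 1.0 / len(ns)
    return regions, W
```

The spatial lag Wy then averages each region's neighbors' outcomes, which is the basic building block of the SARAR-type specifications surveyed here.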
In Section 2 we will introduce methods for the spatial econometric analysis of regional data that, so far, have been the workhorse of most theoretical and empirical work in the literature. We will consider modelling strategies falling within the general structure of the SARAR paradigm and its particularizations, presenting the various estimation and hypothesis testing procedures based on Maximum Likelihood (ML), Generalized Method of Moments (GMM) and Two-Stage Least Squares (2SLS) that were proposed in the literature to remove the inefficiencies and inconsistencies arising from the presence of various forms of spatial dependence. Section 3 is devoted to the new emerging field of spatial econometric analysis of individual granular spatial data, sometimes referred to as <em>spatial microeconometrics</em>. We present modelling strategies that use information about the actual position of each economic agent to explain both individuals' location decisions and the economic actions observed in the chosen locations. We will discuss the peculiarities of the general spatial autoregressive model in this setting and the use of models where distances are used as predictors in a regression framework. We will also present some point pattern methods to model individuals' locational choices, as well as phenomena of co-localization and joint-localization. Finally, in Section 4 the general SARAR paradigm is applied to the case of spatial interaction models estimated using data in the form of origin–destination variables and specified following models based on the analogy with the Newtonian law of universal gravitation. The discussion in this monograph is intentionally limited to the analysis of spatial data observed at a single moment in time, leaving out the case of dynamic spatial data such as those observed in spatial panel data.</p><h3>Suggested Citation</h3>Giuseppe Arbia (2016), "Spatial Econometrics: A Broad View", Foundations and Trends® in Econometrics: Vol.
8: No. 3–4, pp 145-265. http://dx.doi.org/10.1561/0800000030
Wed, 09 Nov 2016 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-026
Bifurcation of Macroeconometric Models and Robustness of Dynamical Inferences
<h3>Abstract</h3>In systems theory, it is well known that the parameter spaces of dynamical systems are stratified into bifurcation regions, with each supporting a different dynamical solution regime. Some can be stable, with different characteristics, such as monotonic stability, periodic damped stability, or multiperiodic damped stability, and some can be unstable, with different characteristics, such as periodic, multiperiodic, or chaotic unstable dynamics. But in general the existence of bifurcation boundaries is normal and should be expected from most dynamical systems, whether linear or nonlinear. Bifurcation boundaries in parameter space are not evidence of model defect. While the existence of such bifurcation boundaries is well known in economic theory, econometricians using macroeconometric models rarely take bifurcation into consideration when producing policy simulations from macroeconometric models. Such models are routinely simulated only at the point estimates of the models' parameters.<p>Barnett and He [1999] explored bifurcation stratification of Bergstrom and Wymer's [1976] continuous time UK macroeconometric model. Bifurcation boundaries intersected the confidence region of the model's parameter estimates. Since then, Barnett and his coauthors have been conducting similar studies of many other newer macroeconometric models spanning all basic categories of those models. So far, they have not found a single case in which the model's parameter space was not subject to bifurcation stratification. In most cases, the confidence region of the parameter estimates was intersected by some of those bifurcation boundaries.
The most fundamental implication of this research is that policy simulations with macroeconometric models should be conducted at multiple settings of the parameters within the confidence region. While systems theorists would expect this result, it contradicts the normal procedure in macroeconometrics of conducting policy simulations solely at the point estimates of the parameters.</p><p>This survey provides an overview of the classes of macroeconometric models for which these experiments have so far been run and emphasizes the implications for lack of robustness of conventional dynamical inferences from macroeconometric policy simulations. By making this detailed survey of past bifurcation experiments available, we hope to encourage and facilitate further research on this problem with other models and to emphasize the need for simulations at various points within the confidence regions of macroeconometric models, rather than at only point estimates.</p><h3>Suggested Citation</h3>William A. Barnett and Guo Chen (2015), "Bifurcation of Macroeconometric Models and Robustness of Dynamical Inferences", Foundations and Trends® in Econometrics: Vol. 8: No. 1–2, pp 1-144. http://dx.doi.org/10.1561/0800000026
Wed, 30 Sep 2015 00:00:00 +0200

http://www.nowpublishers.com/article/Details/ECO-023
Efficiency Analysis: A Primer on Recent Advances
<h3>Abstract</h3>This monograph reviews the econometric literature on the estimation of stochastic frontiers and technical efficiency. Special attention is devoted to
current research.<h3>Suggested Citation</h3>Christopher F. Parmeter and Subal C. Kumbhakar (2014), "Efficiency Analysis: A Primer on Recent Advances", Foundations and Trends® in Econometrics: Vol. 7: No. 3–4, pp 191-385. http://dx.doi.org/10.1561/0800000023
Thu, 18 Dec 2014 00:00:00 +0100

http://www.nowpublishers.com/article/Details/ECO-028
Choosing the More Likely Hypothesis
<h3>Abstract</h3>Much of economists' statistical work centers on testing hypotheses in which parameter values are partitioned between a null hypothesis and an alternative hypothesis in order to distinguish two views about the world. Our traditional procedures are based on the probabilities of a test statistic under the null but ignore what the statistics say about the probability of the test statistic under the alternative. Traditional procedures are not intended to provide evidence for the relative probabilities of the null versus alternative hypotheses, but are regularly treated as if they do. Unfortunately, when used to distinguish two views of the world, traditional procedures can lead to wildly misleading inference. In order to correctly distinguish between two views of the world, one needs to report the probabilities of the hypotheses given parameter estimates rather than the probability of the parameter estimates given the hypotheses. This monograph shows why failing to consider the alternative hypothesis often leads to incorrect conclusions. I show that for most standard econometric estimators, it is not difficult to compute the proper probabilities using Bayes theorem. Simple formulas that require only information already available in standard estimation reports are provided. I emphasize that frequentist approaches for deciding between the null and alternative hypothesis are not free of priors.
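For intuition about the posterior probabilities advocated here, consider the simplest case of two point hypotheses about a coefficient with a normally distributed estimator (our sketch with assumed priors and hypothesized values; the monograph's own formulas handle the general case):

```python
import math

def normal_pdf(x, mean, sd):
    """Density of N(mean, sd^2) at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def posterior_prob_null(beta_hat, se, beta_null=0.0, beta_alt=1.0, prior_null=0.5):
    """P(H0 | beta_hat) by Bayes theorem, for point hypotheses
    H0: beta = beta_null versus H1: beta = beta_alt, assuming the
    estimator satisfies beta_hat ~ N(beta, se^2)."""
    l0 = normal_pdf(beta_hat, beta_null, se)
    l1 = normal_pdf(beta_hat, beta_alt, se)
    return prior_null * l0 / (prior_null * l0 + (1 - prior_null) * l1)
```

Unlike a p-value, this quantity depends explicitly on the likelihood under the alternative and on a stated prior, the two ingredients that standard test reports leave implicit.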
Rather, the usual procedures involve an implicit, unstated prior that is likely to be far from scientifically neutral.<h3>Suggested Citation</h3>Richard Startz (2014), "Choosing the More Likely Hypothesis", Foundations and Trends® in Econometrics: Vol. 7: No. 2, pp 119-189. http://dx.doi.org/10.1561/0800000028Thu, 20 Nov 2014 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-022The Composite Marginal Likelihood (CML) Inference Approach with Applications to Discrete and Mixed Dependent Variable Models<h3>Abstract</h3>This monograph presents the basics of the composite marginal likelihood (CML) inference approach, discussing the asymptotic properties of the CML estimator and the advantages and limitations of the approach. The CML approach is relatively simple and can be used when the full likelihood function is practically infeasible to evaluate due to underlying complex dependencies. The approach may be traced back to the pseudo-likelihood approach of Besag (1974) for modeling spatial data, and it has since found traction in a variety of fields, including genetics, spatial statistics, longitudinal analyses, and multivariate modeling. However, the CML method has found little coverage in the econometrics field, especially in discrete choice modeling. This monograph fills this gap by identifying the value and potential applications of the method in discrete dependent variable modeling as well as mixed discrete and continuous dependent variable model systems. In particular, it develops a blueprint (complete with matrix notation) to apply the CML estimation technique to a wide variety of discrete and mixed dependent variable models.<h3>Suggested Citation</h3>Chandra R. Bhat (2014), "The Composite Marginal Likelihood (CML) Inference Approach with Applications to Discrete and Mixed Dependent Variable Models", Foundations and Trends® in Econometrics: Vol. 7: No. 1, pp 1-117. 
http://dx.doi.org/10.1561/0800000022Thu, 17 Jul 2014 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-019Semiparametric Efficiency Bounds for Microeconometric Models: A Survey<h3>Abstract</h3>In this survey, we evaluate estimators by comparing their asymptotic variances. The role of the efficiency bound, in this context, is to give a lower bound to the asymptotic variance of an estimator. An estimator with asymptotic variance equal to the efficiency bound can therefore be said to be asymptotically efficient. These bounds are also useful for understanding how the features of a given model affect the accuracy of parameter estimation.<h3>Suggested Citation</h3>Thomas A. Severini and Gautam Tripathi (2013), "Semiparametric Efficiency Bounds for Microeconometric Models: A Survey", Foundations and Trends® in Econometrics: Vol. 6: No. 3–4, pp 163-397. http://dx.doi.org/10.1561/0800000019Mon, 30 Dec 2013 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-018Short-term Forecasting for Empirical Economists: A Survey of the Recently Proposed Algorithms<h3>Abstract</h3>Practitioners do not always use research findings, sometimes because the research is not always conducted in a manner relevant to real-world practice. This survey seeks to close the gap between research and practice on short-term forecasting in real time. Towards this end, we review the most relevant recent contributions to the literature, examine their pros and cons, and we take the liberty of proposing some lines of future research. We include bridge equations, MIDAS, VARs, factor models and Markov-switching factor models, all allowing for mixed-frequency and ragged ends. Using the four constituent monthly series of the Stock–Watson coincident index, industrial production, employment, income and sales, we evaluate their empirical performance to forecast quarterly US GDP growth rates in real time. 
Finally, we review the main results regarding the number of predictors in factor based forecasts and how the selection of the more informative or representative variables can be made.<h3>Suggested Citation</h3>Maximo Camacho, Gabriel Perez-Quiros and Pilar Poncela (2013), "Short-term Forecasting for Empirical Economists: A Survey of the Recently Proposed Algorithms", Foundations and Trends® in Econometrics: Vol. 6: No. 2, pp 101-161. http://dx.doi.org/10.1561/0800000018Thu, 28 Nov 2013 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-017Inference in the Presence of Weak Instruments: A Selected Survey<h3>Abstract</h3>Here we present a selected survey in which we attempt to break down the ever burgeoning literature on inference in the presence of weak instruments into issues of estimation, hypothesis testing and confidence interval construction. Within this literature a variety of different approaches have been adopted and one of the contributions of this survey is to examine some of the links between them. The vehicle that we will use to establish these links will be the small concentration results of Poskitt and Skeels (2007), which can be used to characterize various special cases when instruments are weak. We make no attempt to provide an exhaustive survey of all of the literature related to weak instruments. Contributions along these lines can be found in, <em>inter alia</em>, Stock et al. (2002), Dufour (2003), Hahn and Hausman (2003), and Andrews and Stock (2007), and we view this survey as complementary to those earlier works.<h3>Suggested Citation</h3>D. S. Poskitt and C. L. Skeels (2013), "Inference in the Presence of Weak Instruments: A Selected Survey", Foundations and Trends® in Econometrics: Vol. 6: No. 1, pp 1-99. 
http://dx.doi.org/10.1561/0800000017Thu, 29 Aug 2013 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-020Estimation and Inference in Nonparametric Frontier Models: Recent Developments and Perspectives<h3>Abstract</h3>Nonparametric estimators are widely used to estimate the productive efficiency of firms and other organizations, but often without any attempt to make statistical inference. Recent work has provided statistical properties of these estimators as well as methods for making statistical inference, and a link between frontier estimation and extreme value theory has been established. New estimators that avoid many of the problems inherent in traditional efficiency estimators have also been developed; these new estimators are robust with respect to outliers and avoid the well-known curse of dimensionality. Statistical properties, including asymptotic distributions, of the new estimators have been uncovered. Finally, several approaches exist for introducing environmental variables into production models; both two-stage approaches, in which estimated efficiencies are regressed on environmental variables, and conditional efficiency measures, as well as the underlying assumptions required for either approach, are examined.<h3>Suggested Citation</h3>Léopold Simar and Paul W. Wilson (2013), "Estimation and Inference in Nonparametric Frontier Models: Recent Developments and Perspectives", Foundations and Trends® in Econometrics: Vol. 5: No. 3–4, pp 183-337. http://dx.doi.org/10.1561/0800000020Thu, 06 Jun 2013 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-011Monte Carlo Simulation for Econometricians<h3>Abstract</h3>Many studies in econometric theory are supplemented by Monte Carlo simulation investigations. These illustrate the properties of alternative inference techniques when applied to samples drawn from data generating processes that are usually entirely synthetic. 
They should provide information on how techniques, which may be sound asymptotically, perform in finite samples, and they can unveil the effects of model characteristics too complex to analyze analytically. The interpretation of applied studies should also often benefit from being supplemented by a dedicated simulation study, based on a design inspired by the postulated actual empirical data generating process, which would come close to bootstrapping. This review presents and illustrates the fundamentals of conceiving and executing such simulation studies, especially synthetic ones but also more dedicated designs, focussing on controlling their accuracy, increasing their efficiency, recognizing their limitations, presenting their results in a coherent and palatable way, and on the appropriate interpretation of their actual findings, especially when the simulation study is used to rank the qualities of alternative inference techniques.<h3>Suggested Citation</h3>Jan F. Kiviet (2012), "Monte Carlo Simulation for Econometricians", Foundations and Trends® in Econometrics: Vol. 5: No. 1–2, pp 1-181. http://dx.doi.org/10.1561/0800000011Fri, 23 Mar 2012 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-016Collective Household Consumption Behavior: Revealed Preference Analysis<h3>Abstract</h3>We review a nonparametric "revealed preference" methodology for analyzing collective consumption behavior in practical applications. The methodology allows one to account for externalities, public consumption, and the use of assignable quantity information in the consumption analysis. This provides a framework for empirically assessing welfare-related questions that are specific to the collective model of household consumption. 
As a first step, we discuss the testable necessary and sufficient conditions for data consistency with special cases of the collective model (e.g., the case with all goods publicly consumed, and the case with all goods privately consumed without externalities); these conditions can be checked by means of mixed integer (linear) programming (MIP) solution algorithms. Next, we focus on a testable necessary condition for the most general model in our setting (i.e., the case in which any good can be publicly consumed as well as privately consumed, possibly with externalities); again, this condition can be checked by means of MIP solution algorithms. Even though this general model imposes minimal structure <em>a priori</em>, we show that the MIP characterization allows for deriving bounds on the feasible income shares. Finally, we illustrate our methods by some empirical applications to data drawn from the Russian Longitudinal Monitoring Survey.<h3>Suggested Citation</h3>Laurens Cherchye, Bram De Rock and Frederic Vermeulen (2012), "Collective Household Consumption Behavior: Revealed Preference Analysis", Foundations and Trends® in Econometrics: Vol. 4: No. 4, pp 225-312. http://dx.doi.org/10.1561/0800000016Thu, 22 Mar 2012 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-014The Estimation of Causal Effects by Difference-in-Difference Methods<h3>Abstract</h3>This survey gives a brief overview of the literature on the difference-in-difference (DiD) estimation strategy and discusses major issues using a treatment effects perspective. In this sense, this survey gives a somewhat different view on DiD than the standard textbook discussion of the DiD model, but it will not be as complete as the latter. 
It contains some extensions of the literature, for example, a discussion of, and suggestions for nonlinear DiD estimators as well as DiD estimators based on propensity-score type matching methods.<h3>Suggested Citation</h3>Michael Lechner (2011), "The Estimation of Causal Effects by Difference-in-Difference Methods", Foundations and Trends® in Econometrics: Vol. 4: No. 3, pp 165-224. http://dx.doi.org/10.1561/0800000014Tue, 15 Nov 2011 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-015Estimation of Spatial Panels<h3>Abstract</h3>Spatial panel models have panel data structures to capture spatial interactions across spatial units and over time. There are static as well as dynamic models. This text provides some recent developments on the specification and estimation of such models. The first part will consider estimation for static models. The second part is devoted to the estimation for spatial dynamic panels, where both stable and unstable dynamic models with fixed effects will be considered.<p>For the estimation of a spatial panel model with individual fixed effects, in order to avoid the incidental parameter problem due to the presence of many individual fixed effects, a conditional likelihood or partial likelihood approach is desirable. For the model with both fixed individual and time effects with a large and long panel, a conditional likelihood might not exist, but a partial likelihood can be constructed. The partial likelihood approach can be generalized to spatial panel models with fixed effects and a space–time filter. If individual effects are independent of exogenous regressors, one may consider the random effects specification and its estimation. The likelihood function of a random effects model can be decomposed into the product of a partial likelihood function and that of a between equation. The underlying equation for the partial likelihood function can be regarded as a within equation. 
As a result, the random effects estimate is a pooling of the within and between estimates. A Hausman type specification test can be used for testing the random components specification vs. the fixed effects one. The between equation highlights distinctive specifications on random components in the literature.</p><p>For spatial dynamic panels, we focus on the estimation for models with fixed effects, when both the number of spatial units <em>n</em> and the number of time periods <em>T</em> are large. We consider both quasi-maximum likelihood (QML) and generalized method of moments (GMM) estimations. Asymptotic behavior of the estimators depends on the ratio of <em>T</em> relative to <em>n</em>. For the stable case, when <em>n</em> is asymptotically proportional to <em>T</em>, the QML estimator is $\sqrt{nT}$-consistent and asymptotically normal, but its limiting distribution is not properly centered. When <em>n</em> is large relative to <em>T</em>, the QML estimator is <em>T</em>-consistent and has a degenerate limiting distribution. Bias correction for the estimator is possible. When <em>T</em> grows faster than <em>n</em><sup>1/3</sup>, the bias corrected estimator yields a centered confidence interval. The <em>n</em> and <em>T</em> ratio requirement can be relaxed if individual effects are first eliminated by differencing and the resulting equation is then estimated by the GMM, where exogenous and predetermined variables can be used as instruments. We consider the use of linear and quadratic moment conditions, where the latter is specific for spatial dependence. A finite number of moment conditions with some optimum properties can be constructed. An alternative approach is to use separate moment conditions for each period, which gives rise to many moments estimation.</p><p>The remaining text considers estimation of spatial dynamic models with the presence of unit roots. 
The QML estimate of the dynamic coefficient is $\sqrt{nT^{3}}$-consistent and estimates of all other parameters are $\sqrt{nT}$-consistent, and all of them are asymptotically normal. There are cases in which unit roots are generated by combined temporal and spatial correlations and the outcomes of spatial units are cointegrated. The asymptotics of the QML estimator under this spatial cointegration case can be analyzed by reparameterization. In the last part, we propose a data transformation resulting in a unified estimation approach, which can be applied regardless of whether the model is stable. A bias correction procedure is also available.</p><p>The estimation methods are illustrated with two relevant empirical studies, one on regional growth and the other on market integration.</p><h3>Suggested Citation</h3>Lung-fei Lee and Jihai Yu (2011), "Estimation of Spatial Panels", Foundations and Trends® in Econometrics: Vol. 4: No. 1–2, pp 1-164. http://dx.doi.org/10.1561/0800000015Fri, 15 Apr 2011 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-013Bayesian Multivariate Time Series Methods for Empirical Macroeconomics<h3>Abstract</h3>Macroeconomic practitioners frequently work with multivariate time series models such as VARs and factor augmented VARs, as well as time-varying parameter versions of these models (including variants with multivariate stochastic volatility). These models have a large number of parameters and, thus, over-parameterization problems may arise. Bayesian methods have become increasingly popular as a way of overcoming these problems. In this monograph, we discuss VARs, factor augmented VARs and time-varying parameter extensions and show how Bayesian inference proceeds. Apart from the simplest of VARs, Bayesian inference requires the use of Markov chain Monte Carlo methods developed for state space models and we describe these algorithms. 
The focus is on the empirical macroeconomist: we offer advice on how to use these models and methods in practice and include empirical illustrations. A website provides Matlab code for carrying out Bayesian inference in these models.<h3>Suggested Citation</h3>Gary Koop and Dimitris Korobilis (2010), "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics", Foundations and Trends® in Econometrics: Vol. 3: No. 4, pp 267-358. http://dx.doi.org/10.1561/0800000013Tue, 20 Jul 2010 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-010Dealing with Endogeneity in Regression Models with Dynamic Coefficients<h3>Abstract</h3>The purpose of this monograph is to present a unified econometric framework for dealing with the issues of endogeneity in Markov-switching models and time-varying parameter models, as developed by Kim (2004, 2006, 2009), Kim and Nelson (2006), Kim et al. (2008), and Kim and Kim (2009). While Cogley and Sargent (2002), Primiceri (2005), Sims and Zha (2006), and Sims et al. (2008) consider estimation of simultaneous equations models with stochastic coefficients as a system, we deal with the LIML (limited information maximum likelihood) estimation of a single equation of interest out of a simultaneous equations model. Our main focus is on the two-step estimation procedures based on the control function approach, and we show how the problem of generated regressors can be addressed in second-step regressions.<h3>Suggested Citation</h3>Chang-Jin Kim (2010), "Dealing with Endogeneity in Regression Models with Dynamic Coefficients", Foundations and Trends® in Econometrics: Vol. 3: No. 3, pp 165-266. http://dx.doi.org/10.1561/0800000010Mon, 07 Jun 2010 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-002Large Dimensional Factor Analysis<h3>Abstract</h3>Econometric analysis of large dimensional factor models has been a heavily researched topic in recent years. 
This review surveys the main theoretical results that relate to static factor models or dynamic factor models that can be cast in a static framework. Among the topics covered are how to determine the number of factors, how to conduct inference when estimated factors are used in regressions, how to assess the adequacy of observed variables as proxies for latent factors, how to exploit the estimated factors to test for unit roots and common trends, and how to estimate panel cointegration models. The fundamental result that justifies these analyses is that the method of asymptotic principal components consistently estimates the true factor space. We use simulations to better understand the conditions that can affect the precision of the factor estimates.<h3>Suggested Citation</h3>Jushan Bai and Serena Ng (2008), "Large Dimensional Factor Analysis", Foundations and Trends® in Econometrics: Vol. 3: No. 2, pp 89-163. http://dx.doi.org/10.1561/0800000002Thu, 05 Jun 2008 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-009Nonparametric Econometrics: A Primer<h3>Abstract</h3>This review is a primer for those who wish to familiarize themselves with nonparametric econometrics. Though the underlying theory for many of these methods can be daunting for some practitioners, this article will demonstrate how a range of nonparametric methods can in fact be deployed in a fairly straightforward manner. Rather than aiming for encyclopedic coverage of the field, we shall restrict attention to a set of touchstone topics while making liberal use of examples for illustrative purposes. We will emphasize settings in which the user may wish to model a dataset comprised of continuous, discrete, or categorical data (nominal or ordinal), or any combination thereof. 
We shall also consider recent developments in which some of the variables involved may in fact be irrelevant, which alters the behavior of the estimators and optimal bandwidths in a manner that deviates substantially from conventional approaches.<h3>Suggested Citation</h3>Jeffrey S. Racine (2008), "Nonparametric Econometrics: A Primer", Foundations and Trends® in Econometrics: Vol. 3: No. 1, pp 1-88. http://dx.doi.org/10.1561/0800000009Sat, 01 Mar 2008 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-004Information and Entropy Econometrics — A Review and Synthesis<h3>Abstract</h3>The overall objectives of this review and synthesis are to study the basics of information-theoretic methods in econometrics, to examine the connecting theme among these methods, and to provide a more detailed summary and synthesis of the sub-class of methods that treat the observed sample moments as stochastic. Within the above objectives, this review focuses on studying the inter-connection between information theory, estimation, and inference. To achieve these objectives, it provides a detailed survey of information-theoretic concepts and quantities used within econometrics. It also illustrates the use of these concepts and quantities within the subfield of information and entropy econometrics while paying special attention to the interpretation of these quantities. The relationships between information-theoretic estimators and traditional estimators are discussed throughout the survey. This synthesis shows that in many cases information-theoretic concepts can be incorporated within the traditional likelihood approach and provide additional insights into the data processing and the resulting inference.<h3>Suggested Citation</h3>Amos Golan (2008), "Information and Entropy Econometrics — A Review and Synthesis", Foundations and Trends® in Econometrics: Vol. 2: No. 1–2, pp 1-145. 
http://dx.doi.org/10.1561/0800000004Tue, 26 Feb 2008 00:00:00 +0100http://www.nowpublishers.com/article/Details/ECO-008Functional Form and Heterogeneity in Models for Count Data<h3>Abstract</h3>This study presents several extensions of the most familiar models for count data, the Poisson and negative binomial models. We develop an encompassing model for two well-known variants of the negative binomial model (the NB1 and NB2 forms). We then analyze some alternative approaches to the standard log gamma model for introducing heterogeneity into the loglinear conditional means for these models. The lognormal model provides a versatile alternative specification that is more flexible (and more natural) than the log gamma form, and provides a platform for several "two part" extensions, including zero inflation, hurdle, and sample selection models. (We briefly present some alternative approaches to modeling heterogeneity.) We also resolve some features in the widely used panel data treatments of Hausman, Hall and Griliches (1984, Economic models for count data with an application to the patents–R&D relationship, <em>Econometrica</em> <strong>52</strong>, 909–938) for the Poisson and negative binomial models that appear to conflict with more familiar models of fixed and random effects. Finally, we consider a bivariate Poisson model that is also based on the lognormal heterogeneity model. Two recent applications have used this model. We suggest that the correlation estimated in their model frameworks is an ambiguous measure of the correlation of the variables of interest, and may substantially overstate it. We conclude with a detailed application of the proposed methods using the data employed in one of the two aforementioned bivariate Poisson studies.<h3>Suggested Citation</h3>William Greene (2007), "Functional Form and Heterogeneity in Models for Count Data", Foundations and Trends® in Econometrics: Vol. 1: No. 2, pp 113-218. 
http://dx.doi.org/10.1561/0800000008Wed, 08 Aug 2007 00:00:00 +0200http://www.nowpublishers.com/article/Details/ECO-005Copula Modeling: An Introduction for Practitioners<h3>Abstract</h3>This article explores the copula approach for econometric modeling of joint parametric distributions. Although theoretical foundations of copulas are complex, this paper demonstrates that practical implementation and estimation are relatively straightforward. An attractive feature of parametrically specified copulas is that estimation and inference are based on standard maximum likelihood procedures, and thus copulas can be estimated using desktop econometric software. This represents a substantial advantage of copulas over recently proposed simulation-based approaches to joint modeling.<h3>Suggested Citation</h3>Pravin K. Trivedi and David M. Zimmer (2007), "Copula Modeling: An Introduction for Practitioners", Foundations and Trends® in Econometrics: Vol. 1: No. 1, pp 1-111. http://dx.doi.org/10.1561/0800000005Wed, 25 Apr 2007 00:00:00 +0200
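The closing abstract notes that parametrically specified copulas can be estimated with standard maximum likelihood procedures. The following sketch illustrates that point for the simplest case, a bivariate Gaussian copula fit to rank-based pseudo-observations; it is our own minimal illustration under those assumptions, not code from the monograph, and all names and choices in it (the simulated data, the grid search) are ours.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate correlated bivariate normals, then reduce them to rank-based
# pseudo-observations, a common first step in copula estimation.
rho_true = 0.6
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho_true], [rho_true, 1.0]], size=2000)
u = stats.rankdata(z, axis=0) / (len(z) + 1)  # pseudo-uniforms in (0, 1)

def gaussian_copula_loglik(rho, u):
    """Log-likelihood of a bivariate Gaussian copula with correlation rho."""
    x = stats.norm.ppf(u)  # map uniform margins back to standard normal
    cov = [[1.0, rho], [rho, 1.0]]
    joint = stats.multivariate_normal(mean=[0.0, 0.0], cov=cov).logpdf(x)
    margins = stats.norm.logpdf(x).sum(axis=1)
    return np.sum(joint - margins)  # copula density = joint / product of margins

# One-parameter maximum likelihood via a coarse grid; a numerical optimizer
# from scipy.optimize would serve equally well.
grid = np.linspace(-0.95, 0.95, 191)
rho_hat = grid[np.argmax([gaussian_copula_loglik(r, u) for r in grid])]
```

Other copula families (Clayton, Gumbel, Frank) slot into the same recipe by replacing the log-density inside the objective, which is what makes the "standard maximum likelihood on desktop software" claim plausible in practice.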