Abstract: This article reviews econometric methods for health outcomes and health care costs that are used for prediction and forecasting, risk adjustment, resource allocation, technology assessment, and policy evaluation. It focuses on the principles and practical application of data visualization and statistical graphics and how these can enhance applied econometric analysis. Particular attention is devoted to methods for skewed and heavy-tailed distributions. Practical examples show how these methods can be applied to data on individual healthcare costs and health outcomes. Topics include: an introduction to data visualization; data description and regression; generalized linear models; flexible parametric models; semiparametric models; and an application to biomarkers.
Suggested Citation: Andrew M. Jones (2017), "Data Visualization and Health Econometrics", Foundations and Trends® in Econometrics: Vol. 9: No. 1, pp 1-78. http://dx.doi.org/10.1561/0800000033
PubDate: Thu, 31 Aug 2017 00:00:00 +020

Abstract: Spatial econometrics can be defined in a narrow and in a broader sense. In a narrow sense it refers to methods and techniques for the analysis of regression models using data observed within discrete portions of space such as countries or regions. In a broader sense it includes the models and theoretical instruments of spatial statistics and spatial data analysis used to analyze various economic effects such as externalities, interactions, spatial concentration and many others. Indeed, the reference methodology for spatial econometrics rests on advances in spatial statistics, where it is customary to distinguish between different typologies of data that can be encountered in empirical cases and that require different modelling strategies. A first distinction is between continuous spatial data and data observed on a discrete space. Continuous spatial data are very common in many scientific disciplines (such as physics and environmental sciences), but are not yet considered in the spatial econometrics literature. Discrete spatial data can take the form of points, lines and polygons. Point data refer to the position of the single economic agent observed at an individual level. Lines in space take the form of interactions between two spatial locations, such as flows of goods, individuals and information. Finally, data observed within polygons can take the form of predefined irregular portions of space, usually administrative partitions such as countries, regions or counties within one country. In this monograph we adopt a broader view of spatial econometrics and introduce some of the basic concepts and fundamental distinctions needed to properly analyze economic datasets observed as points, regions or lines over space. It cannot be overlooked that the mainstream spatial econometric literature has recently been the subject of harsh and radical criticism in a number of papers.
The purpose of this monograph is to show that much of this criticism is in fact well grounded, but that it loses relevance if we abandon the narrow paradigm of a discipline centered on the regression analysis of regional data and embrace the wider interpretation adopted here. In Section 2 we introduce methods for the spatial econometric analysis of regional data that, so far, have been the workhorse of most theoretical and empirical work in the literature. We consider modelling strategies falling within the general structure of the SARAR paradigm and its particularizations, presenting the various estimation and hypothesis testing procedures based on Maximum Likelihood (ML), Generalized Method of Moments (GMM) and Two-Stage Least Squares (2SLS) that were proposed in the literature to remove the inefficiencies and inconsistencies arising from the presence of various forms of spatial dependence. Section 3 is devoted to the new emerging field of spatial econometric analysis of individual granular spatial data, sometimes referred to as spatial microeconometrics. We present modelling strategies that use information about the actual position of each economic agent to explain both individuals' location decisions and the economic actions observed in the chosen locations. We discuss the peculiarities of the general spatial autoregressive model in this setting and the use of models where distances are used as predictors in a regression framework. We also present some point pattern methods to model individuals' locational choices, as well as phenomena of co-localization and joint-localization. Finally, in Section 4 the general SARAR paradigm is applied to the case of spatial interaction models estimated using data in the form of origin–destination variables and specified following models based on the analogy with the Newtonian law of universal gravitation.
The discussion in this monograph is intentionally limited to the analysis of spatial data observed at a single moment in time, leaving out the case of dynamic spatial data such as spatial panel data.
Suggested Citation: Giuseppe Arbia (2016), "Spatial Econometrics: A Broad View", Foundations and Trends® in Econometrics: Vol. 8: No. 3–4, pp 145-265. http://dx.doi.org/10.1561/0800000030
PubDate: Wed, 09 Nov 2016 00:00:00 +010
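The regional-data side of the field leans on a spatial weight matrix W encoding which units are neighbours. A minimal sketch in Python of that building block (the four-region line layout and the values of y are invented for illustration): construct a binary contiguity matrix, row-standardise it, and form the spatial lag Wy that appears in SAR-type models.

```python
# A minimal sketch: row-standardised spatial weight matrix W for four
# regions arranged on a line (rook contiguity), and the spatial lag Wy.
# Layout and values are illustrative, not taken from the monograph.

def row_standardise(contiguity):
    """Divide each row of a 0/1 contiguity matrix by its row sum."""
    W = []
    for row in contiguity:
        s = sum(row)
        W.append([x / s if s else 0.0 for x in row])
    return W

def spatial_lag(W, y):
    """Wy: for each region, the weighted average of its neighbours' y."""
    return [sum(w_ij * y_j for w_ij, y_j in zip(row, y)) for row in W]

# Four regions in a row, 1-2-3-4; adjacent regions share a border.
C = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
W = row_standardise(C)
y = [10.0, 20.0, 30.0, 40.0]
Wy = spatial_lag(W, y)
```

With row-standardisation each row of W sums to one, so Wy is a neighbour average; this is the term that enters a SAR specification y = rho*Wy + X*beta + e.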

Abstract: In systems theory, it is well known that the parameter spaces of dynamical systems are stratified into bifurcation regions, each supporting a different dynamical solution regime. Some can be stable, with different characteristics, such as monotonic stability, periodic damped stability, or multiperiodic damped stability, and some can be unstable, with different characteristics, such as periodic, multiperiodic, or chaotic unstable dynamics. In general the existence of bifurcation boundaries is normal and should be expected from most dynamical systems, whether linear or nonlinear. Bifurcation boundaries in parameter space are not evidence of model defect. While the existence of such bifurcation boundaries is well known in economic theory, econometricians using macroeconometric models rarely take bifurcation into consideration when producing policy simulations from those models. Such models are routinely simulated only at the point estimates of the models' parameters. Barnett and He [1999] explored bifurcation stratification of Bergstrom and Wymer's [1976] continuous-time UK macroeconometric model. Bifurcation boundaries intersected the confidence region of the model's parameter estimates. Since then, Barnett and his coauthors have been conducting similar studies of many other, newer macroeconometric models spanning all basic categories of those models. So far, they have not found a single case in which the model's parameter space was not subject to bifurcation stratification. In most cases, the confidence region of the parameter estimates was intersected by some of those bifurcation boundaries. The most fundamental implication of this research is that policy simulations with macroeconometric models should be conducted at multiple settings of the parameters within the confidence region.
While this result would be expected by systems theorists, it contradicts the normal procedure in macroeconometrics of conducting policy simulations solely at the point estimates of the parameters. This survey provides an overview of the classes of macroeconometric models for which these experiments have so far been run and emphasizes the implications for the lack of robustness of conventional dynamical inferences from macroeconometric policy simulations. By making this detailed survey of past bifurcation experiments available, we hope to encourage and facilitate further research on this problem with other models, and to emphasize the need for simulations at various points within the confidence regions of macroeconometric models, rather than at point estimates alone.
Suggested Citation: William A. Barnett and Guo Chen (2015), "Bifurcation of Macroeconometric Models and Robustness of Dynamical Inferences", Foundations and Trends® in Econometrics: Vol. 8: No. 1–2, pp 1-144. http://dx.doi.org/10.1561/0800000026
PubDate: Wed, 30 Sep 2015 00:00:00 +020
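The qualitative point, that crossing a bifurcation boundary changes the solution regime, can be seen in miniature with the logistic map x_{t+1} = r x_t (1 - x_t), a standard toy dynamical system that stands in here for the macroeconometric models discussed above (it is not one of them): moving the parameter r across 3 switches the attractor from a stable fixed point to a period-2 cycle.

```python
# Toy illustration of bifurcation: the logistic map's attractor changes
# qualitatively as the parameter r crosses a bifurcation boundary at r = 3.

def iterate_logistic(r, x0=0.4, burn_in=500, keep=8):
    """Iterate the map, discard transients, return the attractor's orbit."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(keep):
        x = r * x * (1 - x)
        orbit.append(round(x, 6))
    return orbit

stable = iterate_logistic(2.8)   # settles on a single fixed point
cycle = iterate_logistic(3.2)    # settles on a two-value cycle
```

Simulating only at one parameter value (the analogue of a point estimate) would report just one of these regimes; sampling parameters across the confidence region is what reveals the boundary.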

Abstract: This monograph reviews the econometric literature on the estimation of stochastic frontiers and technical efficiency. Special attention is devoted to current research.
Suggested Citation: Christopher F. Parmeter and Subal C. Kumbhakar (2014), "Efficiency Analysis: A Primer on Recent Advances", Foundations and Trends® in Econometrics: Vol. 7: No. 3–4, pp 191-385. http://dx.doi.org/10.1561/0800000023
PubDate: Thu, 18 Dec 2014 00:00:00 +010

Abstract: Much of economists' statistical work centers on testing hypotheses in which parameter values are partitioned between a null hypothesis and an alternative hypothesis in order to distinguish two views about the world. Our traditional procedures are based on the probabilities of a test statistic under the null but ignore what the statistics say about the probability of the test statistic under the alternative. Traditional procedures are not intended to provide evidence for the relative probabilities of the null versus the alternative hypothesis, but are regularly treated as if they do. Unfortunately, when used to distinguish two views of the world, traditional procedures can lead to wildly misleading inference. To correctly distinguish between two views of the world, one needs to report the probabilities of the hypotheses given the parameter estimates rather than the probability of the parameter estimates given the hypotheses. This monograph shows why failing to consider the alternative hypothesis often leads to incorrect conclusions. I show that for most standard econometric estimators, it is not difficult to compute the proper probabilities using Bayes' theorem. Simple formulas that require only information already available in standard estimation reports are provided. I emphasize that frequentist approaches for deciding between the null and alternative hypothesis are not free of priors. Rather, the usual procedures involve an implicit, unstated prior that is likely to be far from scientifically neutral.
Suggested Citation: Richard Startz (2014), "Choosing the More Likely Hypothesis", Foundations and Trends® in Econometrics: Vol. 7: No. 2, pp 119-189. http://dx.doi.org/10.1561/0800000028
PubDate: Thu, 20 Nov 2014 00:00:00 +010
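The core calculation this monograph advocates can be sketched for the simplest case: a point null against a point alternative, equal prior odds, and a normally distributed estimator, using only the coefficient estimate and standard error from a standard regression report. The numbers below are invented; this is an illustration of the Bayes' theorem step, not the monograph's own code.

```python
# Sketch: P(H0 | estimate) via Bayes' theorem for a point null H0: beta = 0
# against a point alternative H1: beta = alt_beta, with a normal estimator.

from statistics import NormalDist

def posterior_prob_null(beta_hat, se, alt_beta, prior_null=0.5):
    """Posterior probability of H0 given the estimate and standard error."""
    like_null = NormalDist(0.0, se).pdf(beta_hat)      # density under H0
    like_alt = NormalDist(alt_beta, se).pdf(beta_hat)  # density under H1
    num = prior_null * like_null
    return num / (num + (1 - prior_null) * like_alt)

# An estimate two standard errors from zero "rejects" at the 5% level,
# yet against a distant alternative H0 remains overwhelmingly more likely.
p0_far = posterior_prob_null(beta_hat=2.0, se=1.0, alt_beta=10.0)
p0_near = posterior_prob_null(beta_hat=2.0, se=1.0, alt_beta=2.0)
```

The contrast between p0_far and p0_near is exactly the monograph's warning: the p-value is identical in both cases, but the relative probability of the hypotheses is not.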

Abstract: This monograph presents the basics of the composite marginal likelihood (CML) inference approach, discussing the asymptotic properties of the CML estimator and the advantages and limitations of the approach. CML inference is a relatively simple method that can be used when the full likelihood function is practically infeasible to evaluate due to underlying complex dependencies. The history of the approach may be traced back to the pseudo-likelihood approach of Besag (1974) for modeling spatial data, and it has since found traction in a variety of fields, including genetics, spatial statistics, longitudinal analysis, and multivariate modeling. However, the CML method has found little coverage in econometrics, especially in discrete choice modeling. This monograph fills this gap by identifying the value and potential applications of the method in discrete dependent variable modeling as well as in mixed discrete and continuous dependent variable model systems. In particular, it develops a blueprint (complete with matrix notation) for applying the CML estimation technique to a wide variety of discrete and mixed dependent variable models.
Suggested Citation: Chandra R. Bhat (2014), "The Composite Marginal Likelihood (CML) Inference Approach with Applications to Discrete and Mixed Dependent Variable Models", Foundations and Trends® in Econometrics: Vol. 7: No. 1, pp 1-117. http://dx.doi.org/10.1561/0800000022
PubDate: Thu, 17 Jul 2014 00:00:00 +020

Abstract: In this survey, we evaluate estimators by comparing their asymptotic variances. The role of the efficiency bound, in this context, is to give a lower bound to the asymptotic variance of an estimator. An estimator with asymptotic variance equal to the efficiency bound can therefore be said to be asymptotically efficient. These bounds are also useful for understanding how the features of a given model affect the accuracy of parameter estimation.
Suggested Citation: Thomas A. Severini and Gautam Tripathi (2013), "Semiparametric Efficiency Bounds for Microeconometric Models: A Survey", Foundations and Trends® in Econometrics: Vol. 6: No. 3–4, pp 163-397. http://dx.doi.org/10.1561/0800000019
PubDate: Mon, 30 Dec 2013 00:00:00 +010

Abstract: Practitioners do not always use research findings, sometimes because the research is not conducted in a manner relevant to real-world practice. This survey seeks to close the gap between research and practice on short-term forecasting in real time. Towards this end, we review the most relevant recent contributions to the literature, examine their pros and cons, and take the liberty of proposing some lines of future research. We include bridge equations, MIDAS, VARs, factor models and Markov-switching factor models, all allowing for mixed frequencies and ragged ends. Using the four constituent monthly series of the Stock–Watson coincident index (industrial production, employment, income and sales), we evaluate their empirical performance in forecasting quarterly US GDP growth rates in real time. Finally, we review the main results regarding the number of predictors in factor-based forecasts and how to select the most informative or representative variables.
Suggested Citation: Maximo Camacho, Gabriel Perez-Quiros and Pilar Poncela (2013), "Short-term Forecasting for Empirical Economists: A Survey of the Recently Proposed Algorithms", Foundations and Trends® in Econometrics: Vol. 6: No. 2, pp 101-161. http://dx.doi.org/10.1561/0800000018
PubDate: Thu, 28 Nov 2013 00:00:00 +010
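A bridge equation, the simplest device on the list above, links a quarterly target to quarterly aggregates of monthly indicators. A stylised sketch (all series are made up; real applications use monthly indicators such as industrial production or employment, and handle ragged ends):

```python
# Stylised bridge equation: aggregate a monthly indicator to quarterly
# frequency, then regress the quarterly target on the aggregate.

def to_quarterly(monthly):
    """Average non-overlapping blocks of three monthly observations."""
    return [sum(monthly[i:i + 3]) / 3 for i in range(0, len(monthly), 3)]

def ols_slope_intercept(x, y):
    """One-regressor OLS via the normal equations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return b, my - b * mx

monthly_indicator = [1, 2, 3, 2, 3, 4, 3, 4, 5, 4, 5, 6]  # 12 months
gdp_growth = [1.0, 1.5, 2.0, 2.5]                          # 4 quarters
xq = to_quarterly(monthly_indicator)
slope, intercept = ols_slope_intercept(xq, gdp_growth)
```

In real time the current quarter's aggregate is only partially observed, which is where the mixed-frequency machinery (MIDAS, factor models) surveyed above takes over.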

Abstract: Here we present a selected survey in which we attempt to break down the ever-burgeoning literature on inference in the presence of weak instruments into issues of estimation, hypothesis testing and confidence interval construction. Within this literature a variety of different approaches have been adopted, and one of the contributions of this survey is to examine some of the links between them. The vehicle that we use to establish these links is the small concentration results of Poskitt and Skeels (2007), which can be used to characterize various special cases when instruments are weak. We make no attempt to provide an exhaustive survey of all of the literature related to weak instruments. Contributions along these lines can be found in, inter alia, Stock et al. (2002), Dufour (2003), Hahn and Hausman (2003), and Andrews and Stock (2007), and we view this survey as complementary to those earlier works.
Suggested Citation: D. S. Poskitt and C. L. Skeels (2013), "Inference in the Presence of Weak Instruments: A Selected Survey", Foundations and Trends® in Econometrics: Vol. 6: No. 1, pp 1-99. http://dx.doi.org/10.1561/0800000017
PubDate: Thu, 29 Aug 2013 00:00:00 +020

Abstract: Nonparametric estimators are widely used to estimate the productive efficiency of firms and other organizations, but often without any attempt to make statistical inference. Recent work has provided statistical properties of these estimators as well as methods for making statistical inference, and a link between frontier estimation and extreme value theory has been established. New estimators that avoid many of the problems inherent with traditional efficiency estimators have also been developed; these new estimators are robust with respect to outliers and avoid the well-known curse of dimensionality. Statistical properties, including asymptotic distributions, of the new estimators have been uncovered. Finally, several approaches exist for introducing environmental variables into production models; both two-stage approaches, in which estimated efficiencies are regressed on environmental variables, and conditional efficiency measures, as well as the underlying assumptions required for either approach, are examined.
Suggested Citation: Léopold Simar and Paul W. Wilson (2013), "Estimation and Inference in Nonparametric Frontier Models: Recent Developments and Perspectives", Foundations and Trends® in Econometrics: Vol. 5: No. 3–4, pp 183-337. http://dx.doi.org/10.1561/0800000020
PubDate: Thu, 06 Jun 2013 00:00:00 +020
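One of the simplest nonparametric frontier estimators in this literature is the free disposal hull (FDH). A minimal single-input, single-output sketch with invented firm data:

```python
# FDH input-efficiency score for one input and one output: the smallest
# input, among firms producing at least firm i's output, relative to
# firm i's own input. A score of 1 puts the firm on the FDH frontier.

def fdh_input_efficiency(inputs, outputs, i):
    """theta_i = min over {j : y_j >= y_i} of x_j / x_i."""
    candidates = [inputs[j] for j in range(len(inputs))
                  if outputs[j] >= outputs[i]]
    return min(candidates) / inputs[i]

x = [2.0, 4.0, 6.0, 8.0]   # inputs (invented)
y = [1.0, 3.0, 3.0, 4.0]   # outputs (invented)
scores = [fdh_input_efficiency(x, y, i) for i in range(len(x))]
```

Firm 2 here uses 6 units of input while firm 1 produces the same output with 4, so its score is 2/3: the same output is attainable with two-thirds of its input. The inference and robustness questions discussed above concern exactly such scores.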

Abstract: Many studies in econometric theory are supplemented by Monte Carlo simulation investigations. These illustrate the properties of alternative inference techniques when applied to samples drawn from mostly synthetic data generating processes. They should provide information on how techniques, which may be sound asymptotically, perform in finite samples, and thereby unveil the effects of model characteristics too complex to analyze analytically. The interpretation of applied studies should also often benefit from a dedicated simulation study, based on a design inspired by the postulated actual empirical data generating process, which would come close to bootstrapping. This review presents and illustrates the fundamentals of conceiving and executing such simulation studies, especially synthetic ones but also more dedicated designs, focusing on controlling their accuracy, increasing their efficiency, recognizing their limitations, presenting their results in a coherent and palatable way, and on appropriately interpreting their actual findings, especially when the simulation study is used to rank the qualities of alternative inference techniques.
Suggested Citation: Jan F. Kiviet (2012), "Monte Carlo Simulation for Econometricians", Foundations and Trends® in Econometrics: Vol. 5: No. 1–2, pp 1-181. http://dx.doi.org/10.1561/0800000011
PubDate: Fri, 23 Mar 2012 00:00:00 +010
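A bare-bones example of the kind of experiment the review is about: draw many samples from a known data generating process, compute a nominal 95% confidence interval for the mean in each, and record the empirical coverage rate. The DGP, seed, and replication count are arbitrary illustrative choices.

```python
# Minimal Monte Carlo study: empirical coverage of the textbook
# normal-approximation confidence interval for a mean.

import math
import random

def coverage_experiment(reps=2000, n=50, mu=1.0, sigma=2.0, seed=42):
    rng = random.Random(seed)  # fixed seed so the experiment is replicable
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(mu, sigma) for _ in range(n)]
        mean = sum(sample) / n
        var = sum((s - mean) ** 2 for s in sample) / (n - 1)
        half = 1.96 * math.sqrt(var / n)   # normal-approximation interval
        if mean - half <= mu <= mean + half:
            hits += 1
    return hits / reps

coverage = coverage_experiment()   # should land near the nominal 0.95
```

Controlling accuracy here means choosing reps so that the Monte Carlo standard error of the coverage estimate, roughly sqrt(0.95 * 0.05 / reps), is small relative to the deviations one wants to detect.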

Abstract: We review a nonparametric "revealed preference" methodology for analyzing collective consumption behavior in practical applications. The methodology allows one to account for externalities, public consumption, and the use of assignable quantity information in the consumption analysis. This provides a framework for empirically assessing welfare-related questions that are specific to the collective model of household consumption. As a first step, we discuss the testable necessary and sufficient conditions for data consistency with special cases of the collective model (e.g., the case with all goods publicly consumed, and the case with all goods privately consumed without externalities); these conditions can be checked by means of mixed integer (linear) programming (MIP) solution algorithms. Next, we focus on a testable necessary condition for the most general model in our setting (i.e., the case in which any good can be publicly as well as privately consumed, possibly with externalities); again, this condition can be checked by means of MIP solution algorithms. Even though this general model imposes minimal structure a priori, we show that the MIP characterization allows us to derive bounds on the feasible income shares. Finally, we illustrate our methods with some empirical applications to data drawn from the Russian Longitudinal Monitoring Survey.
Suggested Citation: Laurens Cherchye, Bram De Rock and Frederic Vermeulen (2012), "Collective Household Consumption Behavior: Revealed Preference Analysis", Foundations and Trends® in Econometrics: Vol. 4: No. 4, pp 225-312. http://dx.doi.org/10.1561/0800000016
PubDate: Thu, 22 Mar 2012 00:00:00 +010

Abstract: This survey gives a brief overview of the literature on the difference-in-difference (DiD) estimation strategy and discusses major issues from a treatment effects perspective. In this sense, it gives a somewhat different view of DiD than the standard textbook discussion of the DiD model, though it is not as complete as the latter. It contains some extensions of the literature, for example a discussion of, and suggestions for, nonlinear DiD estimators as well as DiD estimators based on propensity-score-type matching methods.
Suggested Citation: Michael Lechner (2011), "The Estimation of Causal Effects by Difference-in-Difference Methods", Foundations and Trends® in Econometrics: Vol. 4: No. 3, pp 165-224. http://dx.doi.org/10.1561/0800000014
PubDate: Tue, 15 Nov 2011 00:00:00 +010
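The canonical two-group, two-period DiD estimator reduces to a difference of differences of four group-period means. A minimal sketch with invented numbers:

```python
# The 2x2 difference-in-difference estimator: the treated group's change
# over time, net of the control group's change (the common-trend baseline).

def did(treated_pre, treated_post, control_pre, control_post):
    """DiD = (treated change) - (control change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

effect = did(treated_pre=10.0, treated_post=16.0,
             control_pre=8.0, control_post=10.0)
```

The treated group rises by 6 and the control by 2, so the DiD estimate of the treatment effect is 4; everything beyond this 2x2 case (covariates, nonlinearity, matching) is what the survey above is about.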

Abstract: Spatial panel models have panel data structures that capture spatial interactions across spatial units and over time. There are static as well as dynamic models. This text presents some recent developments in the specification and estimation of such models. The first part considers estimation of static models. The second part is devoted to estimation of spatial dynamic panels, where both stable and unstable dynamic models with fixed effects are considered.
For the estimation of a spatial panel model with individual fixed effects, a conditional likelihood or partial likelihood approach is desirable in order to avoid the incidental parameter problem due to the presence of many individual fixed effects. For the model with both fixed individual and time effects in a large and long panel, a conditional likelihood might not exist, but a partial likelihood can be constructed. The partial likelihood approach can be generalized to spatial panel models with fixed effects and a space–time filter. If individual effects are independent of the exogenous regressors, one may consider the random effects specification and its estimation. The likelihood function of a random effects model can be decomposed into the product of a partial likelihood function and that of a between equation. The underlying equation for the partial likelihood function can be regarded as a within equation. As a result, the random effects estimate is a pooling of the within and between estimates. A Hausman-type specification test can be used to test the random components specification against the fixed effects one. The between equation highlights distinctive specifications of random components in the literature.
For spatial dynamic panels, we focus on estimation of models with fixed effects when both the number of spatial units n and the number of time periods T are large. We consider both quasi-maximum likelihood (QML) and generalized method of moments (GMM) estimation.
Asymptotic behavior of the estimators depends on the ratio of T relative to n. For the stable case, when n is asymptotically proportional to T, the QML estimator is $\sqrt{nT}$-consistent and asymptotically normal, but its limiting distribution is not properly centered. When n is large relative to T, the QML estimator is T-consistent and has a degenerate limiting distribution. Bias correction for the estimator is possible. When T grows faster than $n^{1/3}$, the bias-corrected estimator yields a centered confidence interval. The requirement on the ratio of n and T can be relaxed if individual effects are first eliminated by differencing and the resulting equation is then estimated by GMM, where exogenous and predetermined variables can be used as instruments. We consider the use of linear and quadratic moment conditions, where the latter are specific to spatial dependence. A finite number of moment conditions with some optimum properties can be constructed. An alternative approach is to use separate moment conditions for each period, which gives rise to many-moments estimation.
The remaining text considers estimation of spatial dynamic models in the presence of unit roots. The QML estimate of the dynamic coefficient is $\sqrt{nT^{3}}$-consistent and the estimates of all other parameters are $\sqrt{nT}$-consistent, and all of them are asymptotically normal. There are cases in which unit roots are generated by combined temporal and spatial correlations, and the outcomes of spatial units are cointegrated. The asymptotics of the QML estimator under this spatial cointegration case can be analyzed by reparameterization. In the last part, we propose a data transformation that results in a unified estimation approach, applicable regardless of whether the model is stable or not. A bias correction procedure is also available.
The estimation methods are illustrated with two relevant empirical studies, one on regional growth and the other on market integration.
Suggested Citation: Lung-fei Lee and Jihai Yu (2011), "Estimation of Spatial Panels", Foundations and Trends® in Econometrics: Vol. 4: No. 1–2, pp 1-164. http://dx.doi.org/10.1561/0800000015
PubDate: Fri, 15 Apr 2011 00:00:00 +020
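The within transformation behind the conditional and partial likelihood ideas above can be shown in stripped-down form: subtracting each unit's time mean removes the individual fixed effect. Spatial lag terms are deliberately ignored in this toy illustration, and the data are invented.

```python
# The within (fixed effects) transformation for panel data: demeaning each
# unit's series over time eliminates the unit-specific intercept alpha_i.

def within_transform(panel):
    """panel: dict unit -> list of T observations; returns demeaned panel."""
    out = {}
    for unit, series in panel.items():
        m = sum(series) / len(series)
        out[unit] = [v - m for v in series]
    return out

# y_it = alpha_i + x_it with alpha_A = 100, alpha_B = 200, same x pattern.
panel = {"A": [101.0, 102.0, 103.0], "B": [201.0, 202.0, 203.0]}
demeaned = within_transform(panel)   # the fixed effects are gone
```

After demeaning, the two units' series coincide, which is precisely why estimators built on the within equation do not suffer from the incidental parameter problem of the alphas.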

Abstract: Macroeconomic practitioners frequently work with multivariate time series models such as VARs, factor-augmented VARs, and time-varying parameter versions of these models (including variants with multivariate stochastic volatility). These models have a large number of parameters and, thus, over-parameterization problems may arise. Bayesian methods have become increasingly popular as a way of overcoming these problems. In this monograph, we discuss VARs, factor-augmented VARs and time-varying parameter extensions and show how Bayesian inference proceeds. Apart from the simplest of VARs, Bayesian inference requires the use of Markov chain Monte Carlo methods developed for state space models, and we describe these algorithms. The focus is on the empirical macroeconomist, and we offer advice on how to use these models and methods in practice and include empirical illustrations. A website provides Matlab code for carrying out Bayesian inference in these models.
Suggested Citation: Gary Koop and Dimitris Korobilis (2010), "Bayesian Multivariate Time Series Methods for Empirical Macroeconomics", Foundations and Trends® in Econometrics: Vol. 3: No. 4, pp 267-358. http://dx.doi.org/10.1561/0800000013
PubDate: Tue, 20 Jul 2010 00:00:00 +020
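The Bayesian shrinkage logic that tames over-parameterized VARs can be seen in scalar form: with a conjugate normal prior and a normal likelihood with known variance, the posterior mean is a precision-weighted average of the prior mean and the data estimate. This is only an illustrative scalar analogue of how a shrinkage prior acts on a single coefficient, with invented numbers; it is not the monograph's algorithms.

```python
# Conjugate normal prior x normal likelihood: the posterior mean pools the
# prior mean and the estimate, weighted by their precisions (1/variance).

def normal_posterior(prior_mean, prior_var, estimate, sampling_var):
    """Posterior mean and variance for the scalar conjugate-normal case."""
    prec = 1.0 / prior_var + 1.0 / sampling_var
    post_var = 1.0 / prec
    post_mean = post_var * (prior_mean / prior_var + estimate / sampling_var)
    return post_mean, post_var

# A tight prior centered at 1.0 pulls a noisy estimate of 0.5 toward it;
# with equal prior and sampling variances the posterior mean is halfway.
post_mean, post_var = normal_posterior(1.0, 0.04, 0.5, 0.04)
```

In a high-dimensional VAR, applying this kind of shrinkage coefficient by coefficient is what keeps the number of effective parameters manageable.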

Abstract: The purpose of this monograph is to present a unified econometric framework for dealing with the issues of endogeneity in Markov-switching models and time-varying parameter models, as developed by Kim (2004, 2006, 2009), Kim and Nelson (2006), Kim et al. (2008), and Kim and Kim (2009). While Cogley and Sargent (2002), Primiceri (2005), Sims and Zha (2006), and Sims et al. (2008) consider estimation of simultaneous equations models with stochastic coefficients as a system, we deal with the LIML (limited information maximum likelihood) estimation of a single equation of interest out of a simultaneous equations model. Our main focus is on two-step estimation procedures based on the control function approach, and we show how the problem of generated regressors can be addressed in second-step regressions.
Suggested Citation: Kim Chang-Jin (2010), "Dealing with Endogeneity in Regression Models with Dynamic Coefficients", Foundations and Trends® in Econometrics: Vol. 3: No. 3, pp 165-266. http://dx.doi.org/10.1561/0800000010
PubDate: Mon, 07 Jun 2010 00:00:00 +020

Abstract: Econometric analysis of large dimensional factor models has been a heavily researched topic in recent years. This review surveys the main theoretical results that relate to static factor models or dynamic factor models that can be cast in a static framework. Among the topics covered are how to determine the number of factors, how to conduct inference when estimated factors are used in regressions, how to assess the adequacy of observed variables as proxies for latent factors, how to exploit the estimated factors to test for unit roots and common trends, and how to estimate panel cointegration models. The fundamental result that justifies these analyses is that the method of asymptotic principal components consistently estimates the true factor space. We use simulations to better understand the conditions that can affect the precision of the factor estimates.
Suggested Citation: Jushan Bai and Serena Ng (2008), "Large Dimensional Factor Analysis", Foundations and Trends® in Econometrics: Vol. 3: No. 2, pp 89-163. http://dx.doi.org/10.1561/0800000002
PubDate: Thu, 05 Jun 2008 00:00:00 +020

Abstract: This review is a primer for those who wish to familiarize themselves with nonparametric econometrics. Though the underlying theory for many of these methods can be daunting for some practitioners, this article will demonstrate how a range of nonparametric methods can in fact be deployed in a fairly straightforward manner. Rather than aiming for encyclopedic coverage of the field, we shall restrict attention to a set of touchstone topics while making liberal use of examples for illustrative purposes. We will emphasize settings in which the user may wish to model a dataset comprised of continuous, discrete, or categorical data (nominal or ordinal), or any combination thereof. We shall also consider recent developments in which some of the variables involved may in fact be irrelevant, which alters the behavior of the estimators and optimal bandwidths in a manner that deviates substantially from conventional approaches.
Suggested Citation: Jeffrey S. Racine (2008), "Nonparametric Econometrics: A Primer", Foundations and Trends® in Econometrics: Vol. 3: No. 1, pp 1-88. http://dx.doi.org/10.1561/0800000009
PubDate: Sat, 01 Mar 2008 00:00:00 +010
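The touchstone estimator for a continuous regressor is the Nadaraya-Watson local constant estimator. A minimal sketch with a Gaussian kernel (the data and bandwidth below are invented; real applications would choose the bandwidth by cross-validation):

```python
# Nadaraya-Watson kernel regression: a locally weighted average of the
# responses, with weights decaying in the distance from the evaluation
# point according to a Gaussian kernel with bandwidth h.

import math

def nw_estimate(x0, xs, ys, h):
    """Estimate E[y | x = x0] as a kernel-weighted average of ys."""
    weights = [math.exp(-0.5 * ((x0 - xi) / h) ** 2) for xi in xs]
    return sum(w * yi for w, yi in zip(weights, ys)) / sum(weights)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 1.0, 2.0, 3.0, 4.0]      # underlying function is y = x
fitted = nw_estimate(2.0, xs, ys, h=0.5)
```

The bandwidth h governs the bias-variance tradeoff: small h tracks the data closely, large h smooths toward the global mean, and the irrelevant-variable results mentioned above concern what data-driven bandwidths do when a regressor carries no signal.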

Abstract: The overall objectives of this review and synthesis are to study the basics of information-theoretic methods in econometrics, to examine the connecting theme among these methods, and to provide a more detailed summary and synthesis of the sub-class of methods that treat the observed sample moments as stochastic. Within the above objectives, this review focuses on studying the inter-connection between information theory, estimation, and inference. To achieve these objectives, it provides a detailed survey of information-theoretic concepts and quantities used within econometrics. It also illustrates the use of these concepts and quantities within the subfield of information and entropy econometrics while paying special attention to the interpretation of these quantities. The relationships between information-theoretic estimators and traditional estimators are discussed throughout the survey. This synthesis shows that in many cases information-theoretic concepts can be incorporated within the traditional likelihood approach and provide additional insights into the data processing and the resulting inference.
Suggested Citation: Amos Golan (2008), "Information and Entropy Econometrics — A Review and Synthesis", Foundations and Trends® in Econometrics: Vol. 2: No. 1–2, pp 1-145. http://dx.doi.org/10.1561/0800000004
PubDate: Tue, 26 Feb 2008 00:00:00 +010
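Two of the information-theoretic quantities at the heart of this literature are Shannon entropy and the Kullback-Leibler divergence. A minimal sketch with invented discrete distributions:

```python
# Shannon entropy (in nats) and Kullback-Leibler divergence for discrete
# distributions: entropy is maximised by the uniform distribution, and
# KL divergence is the discrepancy measure behind many information-
# theoretic estimators.

import math

def entropy(p):
    """H(p) = -sum p_i log p_i; terms with p_i = 0 contribute nothing."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """D(p || q); assumes q_i > 0 wherever p_i > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
h_u = entropy(uniform)          # log(4), the maximum over 4 outcomes
h_s = entropy(skewed)           # strictly smaller
d = kl_divergence(skewed, uniform)
```

Maximum entropy estimation, one thread of the survey, picks the distribution with the largest H(p) subject to the observed moment constraints, which is equivalent to minimising KL divergence from the uniform distribution.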

Abstract: This study presents several extensions of the most familiar models for count data, the Poisson and negative binomial models. We develop an encompassing model for two well-known variants of the negative binomial model (the NB1 and NB2 forms). We then analyze some alternative approaches to the standard log gamma model for introducing heterogeneity into the loglinear conditional means of these models. The lognormal model provides a versatile alternative specification that is more flexible (and more natural) than the log gamma form, and provides a platform for several "two part" extensions, including zero inflation, hurdle, and sample selection models. (We briefly present some alternative approaches to modeling heterogeneity.) We also resolve some features of the widely used panel data treatments of the Poisson and negative binomial models in Hausman, Hall and Griliches (1984, "Economic models for count data with an application to the patents–R&D relationship", Econometrica, 52, 909–938) that appear to conflict with more familiar models of fixed and random effects. Finally, we consider a bivariate Poisson model that is also based on the lognormal heterogeneity model. Two recent applications have used this model. We suggest that the correlation estimated in their model frameworks is an ambiguous measure of the correlation of the variables of interest, and may substantially overstate it. We conclude with a detailed application of the proposed methods using the data employed in one of the two aforementioned bivariate Poisson studies.
Suggested Citation: William Greene (2007), "Functional Form and Heterogeneity in Models for Count Data", Foundations and Trends® in Econometrics: Vol. 1: No. 2, pp 113-218. http://dx.doi.org/10.1561/0800000008
PubDate: Wed, 08 Aug 2007 00:00:00 +020
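The NB1 and NB2 forms the study encompasses differ only in their conditional variance functions: var = mu(1 + alpha) for NB1 versus var = mu(1 + alpha * mu) for NB2, the standard parameterisations. A quick numeric comparison (the values of alpha and mu are invented):

```python
# NB1 vs NB2 overdispersion: NB1 variance grows linearly in the mean,
# NB2 variance grows quadratically, so they diverge at high means.

def nb1_variance(mu, alpha):
    """NB1: var = mu + alpha * mu = mu * (1 + alpha)."""
    return mu * (1 + alpha)

def nb2_variance(mu, alpha):
    """NB2: var = mu + alpha * mu**2 = mu * (1 + alpha * mu)."""
    return mu * (1 + alpha * mu)

alpha = 0.5
low, high = 2.0, 10.0
v1_low, v2_low = nb1_variance(low, alpha), nb2_variance(low, alpha)
v1_high, v2_high = nb1_variance(high, alpha), nb2_variance(high, alpha)
```

At a mean of 2 the two variances are close (3 vs 4), but at a mean of 10 they are far apart (15 vs 60), which is why the choice between the forms, and the encompassing model nesting them, matters empirically.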

Abstract: This article explores the copula approach for econometric modeling of joint parametric distributions. Although theoretical foundations of copulas are complex, this paper demonstrates that practical implementation and estimation are relatively straightforward. An attractive feature of parametrically specified copulas is that estimation and inference are based on standard maximum likelihood procedures, and thus copulas can be estimated using desktop econometric software. This represents a substantial advantage of copulas over recently proposed simulation-based approaches to joint modeling.
Suggested Citation: Pravin K. Trivedi and David M. Zimmer (2007), "Copula Modeling: An Introduction for Practitioners", Foundations and Trends® in Econometrics: Vol. 1: No. 1, pp 1-111. http://dx.doi.org/10.1561/0800000005
PubDate: Wed, 25 Apr 2007 00:00:00 +020
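A sketch of the practical point: a Gaussian copula, one standard parametric copula family, can be simulated with nothing beyond the standard library by drawing correlated normals and mapping each margin through the normal CDF, which yields dependent uniforms ready to push through any inverse marginal CDF. The correlation value, sample size, and seed below are arbitrary.

```python
# Simulating a bivariate Gaussian copula: correlated standard normals are
# transformed marginally by the normal CDF into dependent U(0,1) draws.

import math
import random
from statistics import NormalDist

def gaussian_copula_sample(rho, n, seed=0):
    rng = random.Random(seed)
    std = NormalDist()
    samples = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = rho * z1 + math.sqrt(1 - rho ** 2) * rng.gauss(0, 1)
        samples.append((std.cdf(z1), std.cdf(z2)))  # dependent uniform pair
    return samples

us = gaussian_copula_sample(rho=0.8, n=1000)
```

Each uniform margin can then be inverted through any marginal distribution of interest, which is the separation of margins from dependence that makes the copula approach attractive for joint modeling.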