Results 1–10 of 166
Stochastic volatility: likelihood inference and comparison with ARCH models
 Review of Economic Studies
, 1998
Abstract

Cited by 592 (40 self)
In this paper, Markov chain Monte Carlo sampling methods are exploited to provide a unified, practical likelihood-based framework for the analysis of stochastic volatility models. A highly effective method is developed that samples all the unobserved volatilities at once using an approximating offset mixture model, followed by an importance reweighting procedure. This approach is compared with several alternative methods using real data. The paper also develops simulation-based methods for filtering, likelihood evaluation and model failure diagnostics. The issue of model choice using non-nested likelihood ratios and Bayes factors is also investigated. These methods are used to compare the fit of stochastic volatility and GARCH models. All the procedures are illustrated in detail.
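As a rough illustration of the model this abstract refers to, the sketch below simulates a basic stochastic volatility process and forms the log-squared observations on which the offset mixture approximation operates. The parameter values and the small offset constant are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basic stochastic volatility model:
#   h_t = mu + phi*(h_{t-1} - mu) + sigma_eta * eta_t,   y_t = exp(h_t/2) * eps_t
mu, phi, sigma_eta, T = -1.0, 0.95, 0.2, 1000
h = np.empty(T)
h[0] = mu + sigma_eta / np.sqrt(1 - phi**2) * rng.standard_normal()
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.standard_normal()
y = np.exp(h / 2) * rng.standard_normal(T)

# Linearize: log y_t^2 = h_t + log eps_t^2, where log eps_t^2 ~ log chi^2_1.
# The paper approximates log chi^2_1 with a normal mixture so the whole
# volatility path can be sampled in one block via Kalman-filter recursions.
ystar = np.log(y**2 + 1e-10)   # small offset guards against log(0)
```

The offset mixture itself (component weights, means, variances) is what makes the linear state-space machinery applicable; this sketch stops at the linearization step.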
Analysis of multivariate probit models
 BIOMETRIKA
, 1998
Abstract

Cited by 183 (13 self)
This paper provides a practical simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods and maximum likelihood estimates are obtained by a Monte Carlo version of the EM algorithm. A practical approach for the computation of Bayes factors from the simulation output is also developed. The methods are applied to a dataset with a bivariate binary response, to a four-year longitudinal dataset from the Six Cities study of the health effects of air pollution and to a seven-variate binary response dataset on the labour supply of married women from the Panel Study of Income Dynamics.
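The data-augmentation idea behind this abstract can be sketched in the simplest univariate probit case using the well-known Albert–Chib scheme: latent utilities are drawn from truncated normals given the binary outcomes, then the regression coefficients are drawn given the latent utilities. This is a toy version under a flat prior; the paper's multivariate algorithm is considerably more involved.

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Simulate probit data: y_i = 1{ x_i' beta + e_i > 0 },  e_i ~ N(0, 1)
n, beta_true = 500, np.array([0.5, -1.0])
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(int)

def gibbs_probit(X, y, n_iter=500):
    """Albert-Chib data augmentation: draw z | beta, y, then beta | z."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)      # flat prior on beta for simplicity
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    for it in range(n_iter):
        m = X @ beta
        # z_i ~ N(m_i, 1) truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0
        lo = np.where(y == 1, -m, -np.inf)
        hi = np.where(y == 1, np.inf, -m)
        z = m + truncnorm.rvs(lo, hi, size=n, random_state=rng)
        bhat = XtX_inv @ (X.T @ z)
        beta = rng.multivariate_normal(bhat, XtX_inv)
        draws[it] = beta
    return draws

draws = gibbs_probit(X, y)
```

After discarding burn-in draws, the posterior mean should land near the data-generating coefficients.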
Bayesian Treatment of the Independent Student-t Linear Model
 JOURNAL OF APPLIED ECONOMETRICS
, 1993
Abstract

Cited by 128 (2 self)
This article takes up methods for Bayesian inference in a linear model in which the disturbances are independent and have identical Student-t distributions. It exploits the equivalence of the Student-t distribution and an appropriate scale mixture of normals, and uses a Gibbs sampler to perform the computations. The new method is applied to some well-known macroeconomic time series. It is found that posterior odds ratios favor the independent Student-t linear model over the normal linear model, and that the posterior odds ratio in favor of difference stationarity over trend stationarity is often substantially less in the favored Student-t models.
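The scale-mixture equivalence this abstract exploits is easy to sketch: a t disturbance with nu degrees of freedom is a normal with a latent Gamma-distributed precision, so the Gibbs sampler alternates between the latent precisions and a weighted least-squares draw of the coefficients. The flat prior and fixed scale below are simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# y = X beta + e, e_i ~ t_nu.  Equivalently e_i | lam_i ~ N(0, 1/lam_i)
# with lam_i ~ Gamma(nu/2, rate nu/2): a scale mixture of normals.
n, nu = 400, 5.0
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.standard_t(nu, size=n)

beta = np.zeros(2)
keep = []
for it in range(1000):
    resid = y - X @ beta
    # lam_i | beta ~ Gamma((nu+1)/2, rate (nu + resid_i^2)/2)
    lam = rng.gamma((nu + 1) / 2, 2.0 / (nu + resid**2))
    # beta | lam: weighted least squares with weights lam (flat prior)
    V = np.linalg.inv(X.T @ (lam[:, None] * X))
    bhat = V @ (X.T @ (lam * y))
    beta = rng.multivariate_normal(bhat, V)
    if it >= 500:
        keep.append(beta)
post_mean = np.mean(keep, axis=0)
```

Outlying observations receive small latent precisions `lam_i` and are automatically downweighted, which is exactly the robustness property the mixture representation delivers.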
A Bayesian analysis of the multinomial probit model with . . .
 Journal of Econometrics
, 2000
Abstract

Cited by 74 (1 self)
We present a new prior and corresponding algorithm for Bayesian analysis of the multinomial probit model. Our new approach places a prior directly on the identified parameter space. The key is the specification of a prior on the covariance matrix so that the (1,1) element is fixed at 1 and it is possible to draw from the posterior using standard distributions. Analytical results are derived which can be used to aid in assessment of the prior.
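One way to visualize the identification restriction the abstract mentions, a covariance matrix whose (1,1) element is fixed at 1, is to draw an unrestricted matrix and rescale it. This is only a sketch of the normalization, not the paper's actual prior construction; the Wishart degrees of freedom and scale below are arbitrary choices.

```python
import numpy as np
from scipy.stats import wishart

rng = np.random.default_rng(5)

def draw_normalized_cov(df, scale, rng):
    """Draw a covariance matrix and rescale so Sigma[0, 0] = 1
    (the multinomial probit identification restriction)."""
    S = wishart.rvs(df, scale, random_state=rng)
    return S / S[0, 0]

Sigma = draw_normalized_cov(df=5, scale=np.eye(3), rng=rng)
```

Rescaling by a positive scalar preserves symmetry and positive definiteness, so the draw remains a valid covariance matrix on the identified space.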
Alternative Computational Approaches to Inference in the Multinomial Probit Model
 Review of Economics and Statistics
, 1994
Abstract

Cited by 67 (2 self)
This research compares several approaches to inference in the multinomial probit model, based on two Monte Carlo experiments for a seven-choice model. The methods compared are the simulated maximum likelihood estimator using the GHK recursive probability simulator, the method of simulated moments estimator using the GHK recursive simulator and kernel-smoothed frequency simulators, and posterior means using a Gibbs sampling-data augmentation algorithm. Overall, the Gibbs sampling algorithm has a slight edge, with the relative performance of MSM and SML based on the GHK simulator being difficult to evaluate. The MSM estimator with the kernel-smoothed frequency simulator is clearly inferior.
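A minimal version of the GHK recursive probability simulator mentioned in this abstract: the multivariate normal rectangle probability is built up one dimension at a time along a Cholesky factorization, drawing each component from a truncated standard normal via the inverse CDF. The clipping constant is a numerical safeguard I added, not part of the algorithm.

```python
import numpy as np
from scipy.stats import norm

def ghk(upper, Sigma, n_draws=2000, rng=None):
    """GHK simulator for P(Z_1 < b_1, ..., Z_m < b_m), Z ~ N(0, Sigma)."""
    rng = rng or np.random.default_rng(0)
    L = np.linalg.cholesky(Sigma)
    m = len(upper)
    eta = np.zeros((n_draws, m))
    prob = np.ones(n_draws)
    for j in range(m):
        # conditional upper bound for eta_j given eta_1 .. eta_{j-1}
        bound = (upper[j] - eta[:, :j] @ L[j, :j]) / L[j, j]
        pj = norm.cdf(bound)
        prob *= pj
        # draw eta_j from the truncated standard normal by inverse CDF
        u = rng.uniform(size=n_draws)
        eta[:, j] = np.clip(norm.ppf(u * pj), -8.0, 8.0)
    return prob.mean()

# Independent case: the answer is just a product of univariate CDFs
p_indep = ghk(np.array([0.0, 0.0]), np.eye(2))
```

With an identity covariance the recursion collapses to a product of CDFs, which gives a quick sanity check; with correlation the simulator averages over the sequentially truncated draws.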
The Art of Data Augmentation
, 2001
Abstract

Cited by 58 (4 self)
The term data augmentation refers to methods for constructing iterative optimization or sampling algorithms via the introduction of unobserved data or latent variables. For deterministic algorithms, the method was popularized in the general statistical community by the seminal article by Dempster, Laird, and Rubin on the EM algorithm for maximizing a likelihood function or, more generally, a posterior density. For stochastic algorithms, the method was popularized in the statistical literature by Tanner and Wong’s Data Augmentation algorithm for posterior sampling and in the physics literature by Swendsen and Wang’s algorithm for sampling from the Ising and Potts models and their generalizations; in the physics literature, the method of data augmentation is referred to as the method of auxiliary variables. Data augmentation schemes were used by Tanner and Wong to make simulation feasible and simple, while auxiliary variables were adopted by Swendsen and Wang to improve the speed of iterative simulation. In general, however, constructing data augmentation schemes that result in both simple and fast algorithms is a matter of art in that successful strategies vary greatly with the (observed-data) models being considered. After an overview of data augmentation/auxiliary variables and some recent developments in methods for constructing such
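A toy instance of the deterministic (EM) side of data augmentation described above: the unobserved component labels of a two-component normal mixture are treated as missing data, the E-step computes their expected values (responsibilities), and the M-step re-estimates the parameters. The unit variances and the specific mixture are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-component normal mixture with unit variances; estimate the two
# means and the mixing weight, treating component labels as missing data.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(2, 1, 700)])

mu = np.array([-1.0, 1.0])
w = 0.5
for _ in range(100):
    # E-step: posterior probability each point came from component 1
    d0 = (1 - w) * np.exp(-0.5 * (x - mu[0])**2)
    d1 = w * np.exp(-0.5 * (x - mu[1])**2)
    r = d1 / (d0 + d1)
    # M-step: update parameters given the augmented responsibilities
    mu = np.array([np.sum((1 - r) * x) / np.sum(1 - r),
                   np.sum(r * x) / np.sum(r)])
    w = r.mean()
```

Replacing the E-step expectation with a random draw of the labels, and the M-step maximization with a draw of the parameters, turns this deterministic scheme into the stochastic Tanner–Wong version of data augmentation.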
Customer-specific taste parameters and mixed logit, working paper
, 2000
Abstract

Cited by 55 (4 self)
With flexible models of customers’ choices among products and services, we estimate the tastes (partworths) of each sampled customer as well as the distribution of tastes in the population. First, maximum likelihood procedures are used to estimate the distribution of tastes in the population using the pooled data for all sampled customers. Then, the distribution of tastes of each sampled customer is derived conditional on the observed data for that customer and the estimated population distribution of tastes (accounting for uncertainty in the population estimates). The procedure provides the same type of information and is similar in spirit to hierarchical Bayes (HB). The procedure is computationally attractive when it is easier to calculate the likelihood function for the population parameters than to draw from the posterior distribution of parameters as needed for HB. We apply the method to data on residential customers’ choice among energy suppliers in conjoint-type experiments. The estimated distribution of tastes provides practical information that is useful for suppliers in designing their offers. The conditioning for individual customers is found to differentiate customers effectively for marketing purposes and to improve considerably the predictions in new situations. Acknowledgements: We have benefited from comments and suggestions by Greg Allenby, Joel Huber, Rich Johnson, Daniel McFadden, and Peter Rossi. Of course, we alone are responsible for our representations and conclusions. The data for this analysis were collected by the Electric Power Research Institute (EPRI). We are grateful to Ahmad Faruqui and EPRI for allowing us to use the data and present the results publicly. Andrew Goett and Kathleen Hudson, who had previously used these data, provided us data files in easily useable form, which saved us a considerable amount of time. For interested readers, software to estimate mixed logits is available free from Train’s web site at
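The conditioning step this abstract describes can be sketched by simulation: draws from the estimated population taste distribution are reweighted by each customer's choice likelihood, and the weighted average gives that customer's conditional mean taste. The population estimates, the single choice situation, and the price data below are all made-up illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Individual-level conditioning: for draws beta_r from the estimated
# population distribution, E[beta | y_n] is approximated by
#   sum_r beta_r * P(y_n | beta_r) / sum_r P(y_n | beta_r).
b, s = -1.0, 0.5                       # assumed population mean/sd of price taste
prices = np.array([[1.0, 2.0, 3.0]])   # one choice situation, three alternatives
chosen = 0                             # this customer picked the cheapest option

R = 5000
beta_r = rng.normal(b, s, R)
util = beta_r[:, None] * prices        # R x 3 logit utilities
probs = np.exp(util - util.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
lik = probs[:, chosen]                 # logit probability of the observed choice
beta_n = np.sum(beta_r * lik) / np.sum(lik)   # conditional mean taste
```

Because choosing the cheapest option is more likely for more price-sensitive customers, the conditional mean is pulled below the population mean, which is the differentiation across customers the abstract reports.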
Bayesian estimation of dynamic discrete choice models
, 2005
Abstract

Cited by 53 (6 self)
6964. We are very grateful to the editor Costas Meghir and the anonymous referees for insightful
A Model for the Federal Funds Rate Target
 Journal of Political Economy
, 2000
Abstract

Cited by 51 (1 self)
This paper is a statistical analysis of the manner in which the Federal Reserve determines the level of the Federal funds rate target, one of the most publicized and anticipated economic indicators in the financial world. The analysis presents two econometric challenges: (1) changes in the target are irregularly spaced in time; (2) the target is changed in discrete increments of 25 basis points. The contributions of this paper are: (1) to give a detailed account of the changing role of the target in the conduct of monetary policy; (2) to develop new econometric tools for analyzing time-series duration data; (3) to analyze empirically the determinants of the target. The paper introduces a new class of models termed autoregressive conditional hazard processes, which allow one to produce dynamic forecasts of the probability of a target change. Conditional on a target change, an ordered probit model produces predictions of the magnitude by which the Fed will raise or lower the Federal funds ...
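The ordered probit component used for the size of a target change can be sketched directly: a latent index falls between estimated cutpoints, and each interval maps to one discrete move. The three categories, the cutpoints, and the index value below are illustrative assumptions, not estimates from the paper.

```python
import numpy as np
from scipy.stats import norm

def ordered_probit_probs(xb, cutpoints):
    """P(category j | x) for an ordered probit with latent index
    y* = x'beta + e, e ~ N(0, 1), and cutpoints c_1 < ... < c_{J-1}."""
    c = np.concatenate([[-np.inf], cutpoints, [np.inf]])
    return np.array([norm.cdf(c[j + 1] - xb) - norm.cdf(c[j] - xb)
                     for j in range(len(c) - 1)])

# Illustrative categories: -25bp, +25bp, +50bp, conditional on a change
p = ordered_probit_probs(xb=0.3, cutpoints=[-0.5, 0.8])
```

Combined with the hazard model's forecast of whether a change occurs at all, these interval probabilities give the dynamic discrete-move forecasts the abstract describes.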
The Great Equalizer? Consumer Choice Behavior at Internet Shopbots
 SLOAN SCHOOL OF MANAGEMENT, MIT
, 2000
Abstract

Cited by 51 (0 self)
Our research empirically analyzes consumer behavior at Internet shopbots — sites that allow consumers to make “one-click” price comparisons for product offerings from multiple retailers. By allowing researchers to observe exactly what information the consumer is shown and their search behavior in response to this information, shopbot data has unique strengths for analyzing consumer behavior. Furthermore, the manner in which the data is displayed to consumers lends itself to a utility-based evaluation process, consistent with econometric analysis techniques. While price is an important determinant of customer choice, we find that, even among shopbot consumers, branded retailers and retailers a consumer visited previously hold significant price advantages in head-to-head price comparisons. Further, customers are very sensitive to how the total price is allocated among the item price, the shipping cost, and tax, and are also quite sensitive to the ordinal ranking of retailer offerings with respect to price. We also find that consumers use brand as a proxy for a retailer’s credibility with regard to non-contractible aspects of the product bundle such as shipping time. In each case our models accurately predict consumer behavior out of sample, suggesting