Results 1–7 of 7
Vector multiplicative error models: Representation and inference, 2006
Cited by 16 (2 self)
Abstract
The Multiplicative Error Model introduced by Engle (2002) for positive-valued processes is specified as the product of a (conditionally autoregressive) scale factor and an innovation process with positive support. In this paper we propose a multivariate extension of such a model, taking into consideration the possibility that the vector innovation process is contemporaneously correlated. The estimation procedure is hindered by the lack of probability density functions for multivariate positive-valued random variables. We suggest the use of copula functions and of estimating equations to jointly estimate the parameters of the scale factors and of the correlations of the innovation processes. Empirical applications on volatility indicators are used to illustrate the gains over the equation-by-equation procedure.

∗We thank Christian T. Brownlees, Marco J. Lombardi and Margherita Velucchi for many discussions on MEMs and multivariate extensions, as well as participants in seminars at CORE and IGIER–Bocconi for helpful comments.
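As a sketch of the specification this abstract describes (the notation is assumed here, not quoted from the paper), a vector MEM for a non-negative K-vector x_t can be written as

```latex
x_t = \mu_t \odot \varepsilon_t, \qquad
\varepsilon_t \mid \mathcal{F}_{t-1} \sim D(\iota, \Sigma),\ \varepsilon_t \ge 0,
\qquad
\mu_t = \omega + A\, x_{t-1} + B\, \mu_{t-1},
```

where ⊙ denotes the element-wise product, the innovation vector has unit mean ι and covariance Σ, and the copula approach mentioned in the abstract is one way to join the marginal innovation distributions through Σ.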
“Semiparametric vector MEM” (2012), Journal of Applied Econometrics
Cited by 4 (1 self)
Abstract
In financial time series analysis we encounter several instances of non-negative-valued processes (volumes, trades, durations, realized volatility, daily range, and so on) which exhibit clustering and can be modeled as the product of a vector of conditionally autoregressive scale factors and a multivariate iid innovation process (vector Multiplicative Error Model). Two novel points are introduced in this paper relative to previous suggestions: a more general specification which sets this vector MEM apart from an equation-by-equation specification; and the adoption of a GMM-based approach which bypasses the complicated issue of specifying a general multivariate non-negative-valued innovation process. A vMEM for volumes, number of trades and realized volatility reveals empirical support for a dynamically interdependent pattern of relationships among the variables on a number of NYSE stocks.

∗This paper develops some ideas introduced in Cipollini, Engle and Gallo (2006), where estimation was based in the framework of estimating functions. Without implicating, we acknowledge comments by Nour Meddahi and Kevin Sheppard which led us to present the Estimating Functions approach in a more familiar GMM notation. We acknowledge financial support from the Italian MIUR under grant PRIN
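As a minimal illustration of the building block this abstract refers to, a univariate MEM can be simulated with unit-mean gamma innovations. All parameter values below are arbitrary assumptions for the sketch; the paper itself works with a vector model estimated by GMM, not with simulation:

```python
import random

def simulate_mem(n, omega=0.1, alpha=0.2, beta=0.7, shape=4.0, seed=42):
    """Simulate a univariate MEM: x_t = mu_t * eps_t, where
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1} and
    eps_t ~ Gamma(shape, 1/shape), so that E[eps_t] = 1."""
    rng = random.Random(seed)
    mu = omega / (1.0 - alpha - beta)  # start at the unconditional mean
    xs = []
    for _ in range(n):
        eps = rng.gammavariate(shape, 1.0 / shape)  # unit-mean innovation
        x = mu * eps                                # observed non-negative value
        xs.append(x)
        mu = omega + alpha * x + beta * mu          # conditionally autoregressive scale
    return xs

xs = simulate_mem(200_000)
mean = sum(xs) / len(xs)
# For a long path the sample mean should be close to the unconditional
# mean omega / (1 - alpha - beta) = 1.0 under these parameter choices.
```

The positive-support innovation with unit mean is what lets the scale factor μ_t carry all the conditional dynamics, the feature the clustering argument in the abstract relies on.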
Maximum entropy autoregressive conditional heteroskedasticity model
Journal of Econometrics
Which Quantile is the Most Informative? Maximum Entropy Quantile Regression
Abstract
This paper studies the connections among quantile regression, the asymmetric Laplace distribution, and maximum entropy. We show that the maximum likelihood problem is equivalent to the solution of a maximum entropy problem where we impose moment constraints given by the joint consideration of the mean and median. Using the resulting score functions we develop a maximum entropy quantile regression estimator. This approach delivers estimates for the slope parameters together with the associated “most informative” quantile. Similarly, this method can be seen as a penalized quantile regression estimator, where the penalty is given by deviations from the median regression. We derive the asymptotic properties of this estimator by showing consistency and asymptotic normality under certain regularity conditions. Finally, an application to U.S. wage data to evaluate the effect of training on wages illustrates the usefulness and implementation of our methodology.
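The link this abstract exploits can be stated compactly (standard notation, not a quotation from the paper): the asymmetric Laplace density at quantile level τ is exponential in the quantile-regression check function,

```latex
f(u;\tau) = \tau(1-\tau)\, e^{-\rho_\tau(u)}, \qquad
\rho_\tau(u) = u\,\bigl(\tau - \mathbf{1}\{u < 0\}\bigr),
```

so maximizing the likelihood ∑ᵢ log f(yᵢ − xᵢᵀβ; τ) and minimizing the check-function criterion ∑ᵢ ρ_τ(yᵢ − xᵢᵀβ) yield the same β, which is why the maximum entropy characterization carries over to quantile regression.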
Vector Multiplicative Error Models: Representation and Inference, 2006
Abstract
on MEMs and multivariate extensions, as well as participants in seminars at CORE and IGIER–Bocconi for helpful comments. The usual disclaimer applies. The views expressed herein are those of the author(s) and do not necessarily reflect the views of the National Bureau of Economic Research.
Maximum Entropy Autoregressive Conditional Heteroskedasticity Model
Abstract
In many applications, it has been found that the autoregressive conditional heteroskedasticity (ARCH) model under the conditional normal or Student’s t distributions is not general enough to account for the excess kurtosis in the data. Moreover, asymmetry in financial data is rarely modeled in a systematic way. In this paper, we suggest a general density function based on the maximum entropy (ME) approach that takes account of asymmetry, excess kurtosis, and high peakedness. The ME principle is based on the efficient use of available information, and, as is well known, many of the standard families of distributions can be derived from the ME approach. We demonstrate how to extract information from the data in the form of moment functions. We also propose a test procedure for selecting appropriate moment functions. Our procedure is illustrated with an application to NYSE stock returns. The empirical results reveal that the ME approach with fewer moment functions leads to a model that captures the stylized facts quite effectively.
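In generic maximum-entropy form (a standard result, not a quotation from the paper), the density that maximizes entropy subject to moment constraints E[g_j(x)] = m_j is exponential in those moment functions,

```latex
p(x) = \frac{1}{Z(\lambda)} \exp\!\Bigl(-\sum_{j=1}^{J} \lambda_j\, g_j(x)\Bigr),
\qquad
Z(\lambda) = \int \exp\!\Bigl(-\sum_{j=1}^{J} \lambda_j\, g_j(x)\Bigr)\,dx,
```

where the λ_j are Lagrange multipliers fixed by the constraints. Illustrative choices such as g_j(x) = x, |x|, or log(1 + x²) can capture asymmetry, peakedness, and heavy tails respectively, which is the flexibility beyond the normal and Student’s t families that the abstract appeals to.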