Results 1–5 of 5
Degrees of freedom and model selection in semiparametric additive monotone regression
Journal of Multivariate Analysis, 2013
"... Abstract The degrees of freedom of semiparametric additive monotone models are derived using results about projections onto sums of order cones. Two important related questions are also studied, namely, the definition of estimators for the parameter of the error term and the formulation of specific ..."
Abstract

Cited by 1 (0 self)
Abstract The degrees of freedom of semiparametric additive monotone models are derived using results about projections onto sums of order cones. Two important related questions are also studied, namely, the definition of estimators for the parameter of the error term and the formulation of specific Akaike Information Criterion statistics. Several alternatives are proposed to solve both problems, and simulation experiments are conducted to compare the behavior of the different candidates. A new selection criterion is proposed that combines the ability to identify the correct model with efficient estimation of the variance parameter. Finally, the criterion is used to select the model in a regression problem from a well-known data set.
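The monotone-regression setting above can be illustrated with a minimal sketch: an isotonic fit computed by the pool-adjacent-violators algorithm (PAVA), with degrees of freedom taken as the number of distinct fitted values, plugged into an AIC-style statistic. This is an illustrative toy for the general idea only, not the paper's specific estimators; the function names below are my own.

```python
import numpy as np

def pava(y):
    """Isotonic (nondecreasing) least-squares fit via pool-adjacent-violators."""
    means, weights = [], []
    for v in np.asarray(y, dtype=float):
        means.append(v)
        weights.append(1.0)
        # Merge adjacent blocks while the monotonicity constraint is violated.
        while len(means) > 1 and means[-2] > means[-1]:
            w = weights[-2] + weights[-1]
            means[-2] = (means[-2] * weights[-2] + means[-1] * weights[-1]) / w
            weights[-2] = w
            means.pop()
            weights.pop()
    # Expand the block means back to one fitted value per observation.
    return np.repeat(means, np.asarray(weights, dtype=int))

def aic_monotone(y, fit):
    """AIC-style statistic using df = number of distinct fitted values."""
    n = len(y)
    rss = float(np.sum((np.asarray(y, dtype=float) - fit) ** 2))
    df = len(np.unique(fit))
    return n * np.log(rss / n) + 2 * df

fit = pava([3.0, 1.0, 2.0, 5.0])  # → [2., 2., 2., 5.]
```

Here the first three observations violate monotonicity and are pooled into one block, so the fit has two distinct values and df = 2.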
Bayesian structure learning in graphical models
"... We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p exceeds the sample size n. Gaussian graphical models provide an important tool in describing conditional independence through presence or absence of the e ..."
Abstract
We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p exceeds the sample size n. Gaussian graphical models provide an important tool for describing conditional independence through the presence or absence of edges in the underlying graph. A popular non-Bayesian method of estimating a graphical structure is the graphical lasso. In this paper, we consider a Bayesian approach to the problem. We use priors that put a mixture of a point mass at zero and a certain absolutely continuous distribution on the off-diagonal elements of the precision matrix, so the resulting posterior distribution can be used for graphical structure learning. The posterior convergence rate of the precision matrix is obtained. The posterior distribution over different graphical models is extremely cumbersome to compute. We propose a fast computational method for approximating the posterior probabilities of various graphs using the Laplace approximation method, expanding the posterior density around the posterior mode, which coincides with the graphical lasso solution under our choice of prior distribution. We also provide estimates of the accuracy of the approximation.
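The graphical-lasso point estimate that the abstract identifies with the posterior mode can be sketched with scikit-learn's GraphicalLasso estimator (a real API; the toy simulation setup below is my own illustrative assumption, not the paper's experiment):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

# Assumed toy setup: a 5-dimensional Gaussian whose true precision matrix
# is sparse, with a single off-diagonal dependence between variables 0 and 1.
p = 5
precision_true = np.eye(p)
precision_true[0, 1] = precision_true[1, 0] = 0.4
cov_true = np.linalg.inv(precision_true)

X = rng.multivariate_normal(np.zeros(p), cov_true, size=1000)

# The l1 penalty (alpha) controls how many off-diagonal entries are shrunk
# exactly to zero, i.e. how sparse the estimated conditional-independence
# graph is.
model = GraphicalLasso(alpha=0.05).fit(X)
precision_hat = model.precision_
```

Edges of the estimated graph are the nonzero off-diagonal entries of `precision_hat`; in the Bayesian formulation above, this estimate plays the role of the mode around which the Laplace approximation is expanded.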
Bayesian estimation of a sparse precision matrix
"... We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p is large. Gaussian graphical models provide an important tool in describing conditional independence through presence or absence of the edges in the unde ..."
Abstract
We consider the problem of estimating a sparse precision matrix of a multivariate Gaussian distribution, including the case where the dimension p is large. Gaussian graphical models provide an important tool for describing conditional independence through the presence or absence of edges in the underlying graph. A popular non-Bayesian method of estimating a graphical structure is the graphical lasso. In this paper, we consider a Bayesian approach to the problem. We use priors that put a mixture of a point mass at zero and a certain absolutely continuous distribution on the off-diagonal elements of the precision matrix, so the resulting posterior distribution can be used for graphical structure learning. The posterior convergence rate of the precision matrix is obtained. The posterior distribution on the model space is extremely cumbersome to compute. We propose a fast computational method for approximating the posterior probabilities of various graphs using the Laplace approximation approach, expanding the posterior density around the posterior mode, which coincides with the graphical lasso solution under our choice of prior distribution. We also provide estimates of the accuracy of the approximation.
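The Laplace approximation used in both abstracts replaces an intractable integral with a Gaussian integral around the mode. A minimal, self-contained numerical sketch (my own toy target, not the paper's posterior): approximating the normalizing constant of an unnormalized Gamma log-density, where the exact answer is known in closed form.

```python
import math

# Toy target: f(t) = a*log(t) - b*t, so exp(f) is an unnormalized
# Gamma(a + 1, b) density with known normalizer Gamma(a + 1) / b**(a + 1).
a, b = 50.0, 2.0

t_hat = a / b                      # mode: f'(t) = a/t - b = 0
log_f_hat = a * math.log(t_hat) - b * t_hat
hess = -a / t_hat ** 2             # f''(t) at the mode (negative)

# Laplace approximation: integral ≈ exp(f(t_hat)) * sqrt(2*pi / |f''(t_hat)|)
approx = math.exp(log_f_hat) * math.sqrt(2 * math.pi / abs(hess))
exact = math.gamma(a + 1) / b ** (a + 1)

rel_error = abs(approx - exact) / exact
```

For this target the relative error is of order 1/(12a), well under 1% at a = 50, which illustrates why the approximation is attractive for fast posterior-probability computations when the mode and Hessian are cheap to obtain.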
Bayesian variable selection in additive partial linear models
"... Many studies in recent time include a large number of predictor variables, but typically only a few of the predictors have significant roles. Variable selection techniques have been developed using both nonBayesian and Bayesian approaches. Additive partial linear models (APLM) provide a flexible ye ..."
Abstract
Many recent studies include a large number of predictor variables, but typically only a few of the predictors play significant roles. Variable selection techniques have been developed using both non-Bayesian and Bayesian approaches. Additive partial linear models (APLM) provide a flexible yet manageable extension of linear models in which some variables can have nonlinear effects. We develop a Bayesian method of variable selection for APLM by expanding the nonlinear functions in a polynomial basis and introducing sparsity through point masses in the prior distribution of the regression coefficients. We address variable selection for both the linear and the nonlinear parts. The nonsingular part of the prior is given by a Laplace or multivariate Laplace density, depending on whether the predictor has only a linear effect or a general effect. However, instead of using Markov chain Monte Carlo methods, which are extremely slow in high-dimensional models, we use a Laplace approximation around the posterior mode, which can be identified with the group lasso solution. We conduct a simulation study and present real data analyses for a nutritional epidemiology study and prostate cancer data.
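The group lasso solution that serves as the posterior mode here can be sketched with a small proximal-gradient solver (my own minimal implementation, not the authors' code). The point of the group penalty is that a predictor's whole basis expansion is selected or dropped as a block, which is exactly what variable selection over nonlinear effects requires.

```python
import numpy as np

def group_soft_threshold(v, t):
    """Proximal operator of t * ||v||_2: shrink the whole block toward zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso(X, y, groups, lam, n_iter=2000):
    """Proximal gradient for 0.5*||y - X b||^2 + lam * sum_g ||b_g||_2.

    `groups` is a list of index arrays, one per predictor's basis expansion.
    """
    beta = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.eigvalsh(X.T @ X)[-1]  # 1 / Lipschitz constant
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))
        for g in groups:
            beta[g] = group_soft_threshold(z[g], step * lam)
    return beta

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 6))
# Assumed toy truth: the first predictor's 3 basis functions are active,
# the second predictor's 3 basis functions are inactive.
y = X[:, :3] @ np.array([1.0, 2.0, -1.0]) + 0.1 * rng.standard_normal(n)

groups = [np.arange(0, 3), np.arange(3, 6)]
beta_hat = group_lasso(X, y, groups, lam=10.0)
```

With a sufficiently large penalty, the inactive group's coefficients are set exactly to zero while the active group survives with mild shrinkage, mirroring the selection behavior the abstract attributes to the posterior mode.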