Results 11 – 20 of 974
The Bayesian Structural EM Algorithm, 1998
Cited by 260 (13 self)
"... In recent years there has been a flurry of works on learning Bayesian networks from data. One of the hard problems in this area is how to effectively learn the structure of a belief network from incomplete data, that is, in the presence of missing values or hidden variables. In a recent paper, I introduced an algorithm called Structural EM that combines the standard Expectation Maximization (EM) algorithm, which optimizes parameters, with structure search for model selection. That algorithm learns networks based on penalized likelihood scores, which include the BIC/MDL score and various ..."
PENALIZATION OF DIRICHLET OPTIMAL CONTROL PROBLEMS
Cited by 6 (2 self)
"... We apply Robin penalization to Dirichlet optimal control problems governed by semilinear elliptic equations. Error estimates in terms of the penalization parameter are stated. Error estimates for the numerical approximation of both Dirichlet and Robin optimal control problems are provided ..."
Penalized model-based clustering with application to variable selection
Journal of Machine Learning Research, 2007
Cited by 45 (4 self)
"... Variable selection in clustering analysis is both challenging and important. In the context of model-based clustering analysis with a common diagonal covariance matrix, which is especially suitable for “high dimension, low sample size” settings, we propose a penalized likelihood approach with an L1 penalty function, automatically realizing variable selection via thresholding and delivering a sparse solution. We derive an EM algorithm to fit our proposed model, and propose a modified BIC as a model selection criterion to choose the number of components and the penalization parameter. A simulation ..."
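The "variable selection via thresholding" mentioned in this abstract rests on elementwise soft-thresholding, which has a closed form. A minimal illustrative sketch (the helper name is ours, not the paper's code):

```python
import numpy as np

def soft_threshold(x, lam):
    """Closed-form minimizer of 0.5 * (x - t)**2 + lam * |t| over t,
    applied elementwise: entries smaller than lam in magnitude become 0."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# In an L1-penalized EM for clustering, an M-step of roughly this shape
# shrinks cluster means toward zero, zeroing out uninformative variables.
means = soft_threshold(np.array([2.3, -0.4, 0.1, -1.7]), lam=0.5)
```

It is exactly this hard zeroing of small coordinates that delivers the sparse solution the abstract refers to.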
One-step sparse estimates in nonconcave penalized likelihood models
Ann. Statist., 2008
Cited by 133 (6 self)
"... Fan and Li propose a family of variable selection methods via penalized likelihood using concave penalty functions. The nonconcave penalized likelihood estimators enjoy the oracle properties, but maximizing the penalized likelihood function is computationally challenging, because the objective function ..."
On the asymptotics of penalized splines, 2007
Cited by 31 (3 self)
"... The asymptotic behaviour of penalized spline estimators is studied in the univariate case. We use B-splines and a penalty is placed on mth-order differences of the coefficients. The number of knots is assumed to converge to infinity as the sample size increases. We show that penalized splines behave ... simple expressions for the asymptotic mean and variance. Provided that it is fast enough, the rate at which the number of knots converges to infinity does not affect the asymptotic distribution. The optimal rate of convergence of the penalty parameter is given. Penalized splines are not design-adaptive. ..."
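The estimator described in this abstract (a B-spline basis with a penalty on mth-order coefficient differences) is, for a fixed smoothing parameter, a penalized least-squares problem with a closed-form solution. A sketch assuming a generic basis matrix B (function name ours):

```python
import numpy as np

def pspline_fit(B, y, lam, m=2):
    """Minimize ||y - B @ theta||^2 + lam * ||D_m @ theta||^2, where D_m
    takes m-th order differences of adjacent coefficients (a P-spline
    penalty). Returns the coefficient vector theta."""
    k = B.shape[1]
    D = np.diff(np.eye(k), n=m, axis=0)   # (k - m) x k difference matrix
    return np.linalg.solve(B.T @ B + lam * (D.T @ D), B.T @ y)
```

With lam = 0 this reduces to ordinary least squares; as lam grows, the mth-order differences of the coefficients are shrunk toward zero, smoothing the fit.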
Prox-penalization and splitting methods for constrained variational problems
SIAM J. Optim.
Cited by 11 (3 self)
"... This paper is concerned with the study of a class of prox-penalization methods for solving variational inequalities of the form Ax + N_C(x) ∋ 0, where H is a real Hilbert space, A: H ⇒ H is a maximal monotone operator and N_C is the outward normal cone to a closed convex set C ⊂ H. Given Ψ: H → R ∪ {+∞}, which acts as a penalization function with respect to the constraint x ∈ C, and a penalization parameter βn, we consider a diagonal proximal algorithm of the form xn = (I + λn(A + βn ∂Ψ))^(−1) xn−1 ..."
Multiscale likelihood analysis and complexity penalized estimation
Annals of Statistics
Cited by 72 (18 self)
"... We describe here a framework for a certain class of multiscale likelihood factorizations wherein, in analogy to a wavelet decomposition of an L^2 function, a given likelihood function has an alternative representation as a product of conditional densities reflecting information in both the data and the parameter vector localized in position and scale. The framework is developed as a set of sufficient conditions for the existence of such factorizations, formulated in analogy to those underlying a standard multiresolution analysis for wavelets, and hence can be viewed as a multiresolution analysis ..."
THE PENALIZED PROFILE SAMPLER
Submitted to the Annals of Statistics, 2007
Cited by 8 (5 self)
"... The penalized profile sampler for semiparametric inference is an extension of the profile sampler method [8] obtained by profiling a penalized log-likelihood. The idea is to base inference on the posterior distribution obtained by multiplying a profiled penalized log-likelihood by a prior for the parametric component, where the profiling and penalization are applied to the nuisance parameter. Because the prior is not applied to the full likelihood, the method is not strictly Bayesian. A benefit of this approximately Bayesian method is that it circumvents the need to put a prior on the possibly ..."
Smoothing parameter selection in two frameworks for penalized splines
J. R. Statist. Soc. B, 2013
Cited by 2 (1 self)
"... There are two popular smoothing parameter selection methods for spline smoothing. First, smoothing parameters can be estimated minimizing criteria that approximate the average mean squared error of the regression function estimator. Second, the maximum likelihood paradigm can be employed, under the assumption that the regression function is a realization of some stochastic process. In this article the asymptotic properties of both smoothing parameter estimators for penalized splines are studied and compared. A simulation study and a real data example illustrate the theoretical findings. ..."
Penalized probabilistic clustering
In Advances in Neural Information Processing Systems 17, 2005
Cited by 8 (1 self)
"... While clustering is usually an unsupervised operation, there are circumstances in which we believe (with varying degrees of certainty) that items A and B should be assigned to the same cluster, while items A and C should not. We would like such pairwise relations to influence cluster assignments of ... to clusters. This prior penalizes cluster assignments according to the degree with which they violate the preferences. We fit the model parameters with EM. Experiments on a variety of data sets show that PPC can consistently improve clustering results. ..."
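The pairwise-preference penalty described in this abstract can be illustrated with a simple hard-assignment variant: each point's per-cluster score is reduced by a fixed weight for every must-link or cannot-link preference the assignment would violate. This greedy sketch only illustrates the penalty idea, not the paper's EM procedure (all names are ours):

```python
import numpy as np

def penalized_labels(loglik, must_link, cannot_link, w, n_iter=20):
    """Greedy (ICM-style) cluster assignment. loglik[i, c] is the base
    log-likelihood of point i under cluster c; each violated pairwise
    preference (a pair of point indices) costs w. Returns one label per point."""
    n, k = loglik.shape
    labels = loglik.argmax(axis=1)
    for _ in range(n_iter):
        changed = False
        for i in range(n):
            scores = loglik[i].astype(float)
            for a, b in must_link:
                if i in (a, b):
                    other = b if i == a else a
                    # penalize clusters that differ from the must-link partner's
                    scores -= w * (np.arange(k) != labels[other])
            for a, b in cannot_link:
                if i in (a, b):
                    other = b if i == a else a
                    # penalize joining the cannot-link partner's cluster
                    scores -= w * (np.arange(k) == labels[other])
            new = int(scores.argmax())
            if new != labels[i]:
                labels[i], changed = new, True
        if not changed:
            break
    return labels
```

A large w effectively makes the preferences hard constraints, while a small w lets the data overrule uncertain ones, mirroring the "varying degrees of certainty" in the abstract.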