Results 1–5 of 5
Regularization-Free Estimation in Trace Regression with Symmetric Positive Semidefinite Matrices
Abstract

Cited by 1 (0 self)
Trace regression models have received considerable attention in the context of matrix completion, quantum state tomography, and compressed sensing. Estimation of the underlying matrix by regularization-based approaches promoting low-rankedness, notably nuclear norm regularization, has enjoyed great popularity. In this paper, we argue that such regularization may no longer be necessary if the underlying matrix is symmetric positive semidefinite (spd) and the design satisfies certain conditions. In this situation, simple least squares estimation subject to an spd constraint may perform as well as regularization-based approaches with a proper choice of regularization parameter, which entails knowledge of the noise level and/or tuning. By contrast, constrained least squares estimation comes without any tuning parameter and may hence be preferred for its simplicity.
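The spd-constrained least squares estimator described in this abstract can be sketched with projected gradient descent, projecting onto the positive semidefinite cone after each step. This is an illustrative implementation, not the paper's solver; the dimensions, step-size rule, and random measurement matrices below are assumptions for the sake of the example.

```python
import numpy as np

def project_psd(B):
    # Project a matrix onto the PSD cone: symmetrize,
    # then zero out negative eigenvalues.
    B = (B + B.T) / 2
    w, V = np.linalg.eigh(B)
    return (V * np.clip(w, 0, None)) @ V.T

def psd_least_squares(Xs, y, n_iter=5000):
    # Minimize sum_i (y_i - tr(X_i^T B))^2 over PSD matrices B
    # by projected gradient descent. The step size 1 / sum ||X_i||_F^2
    # is a conservative choice that keeps the iteration stable.
    d = Xs[0].shape[0]
    B = np.zeros((d, d))
    lr = 1.0 / sum(np.sum(X * X) for X in Xs)
    for _ in range(n_iter):
        resid = np.array([np.trace(X.T @ B) for X in Xs]) - y
        grad = sum(r * X for r, X in zip(resid, Xs))
        B = project_psd(B - lr * grad)
    return B
```

On a small noiseless example with more measurements than free parameters, this recovers the true spd matrix with no regularization parameter to tune, which is the abstract's point.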
Methodology Article (Open Access)
Isotope pattern deconvolution for peptide mass spectrometry by nonnegative least squares/least absolute deviation template matching
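The template-matching idea in the title, fitting an observed spectrum as a nonnegative combination of isotope-pattern templates, can be illustrated with `scipy.optimize.nnls`. The templates and spectrum below are invented toy numbers, not real isotope data.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical isotope templates (one per column): each column is a
# normalized intensity pattern over four mass bins.
T = np.array([[0.6, 0.0],
              [0.3, 0.5],
              [0.1, 0.3],
              [0.0, 0.2]])

# Observed spectrum: a noiseless mixture of the two templates
# with abundances 2.0 and 1.0.
s = 2.0 * T[:, 0] + 1.0 * T[:, 1]

# NNLS finds nonnegative abundances a such that T @ a ~ s.
a, resid = nnls(T, s)
```

Because the data are noiseless and the templates are linearly independent, NNLS recovers the abundances exactly; with noisy spectra it returns the nonnegative least squares fit instead.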
Estimation of positive definite M-matrices and structure learning for attractive Gaussian Markov random fields
Lasso and equivalent quadratic penalized regression models, 2013
Abstract
The least absolute shrinkage and selection operator (lasso) and ridge regression usually produce different estimates even when the input, loss function, and parameterization of the penalty are identical. In this paper we look for ridge and lasso models with identical solution sets. It turns out that the lasso model with shrink vector λ and a quadratic penalized model whose shrink matrix is the outer product of λ with itself are equivalent, in the sense that they have equal solutions. To achieve this, we have to restrict the estimates to be positive. This does not limit the area of application, since we can decompose every estimate into a positive and a negative part. The resulting problem can be solved with a non-negative least squares algorithm and may benefit from algorithms with high numerical accuracy. This model can also deal with mixtures of ridge and lasso penalties, such as the elastic net, leading to a continuous solution path as a function of the mixture proportions. Besides this quadratic penalized model, an augmented regression model with positive bounded estimates is developed, which is also equivalent to the lasso model but is probably faster to solve.
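The positive/negative decomposition mentioned in the abstract can be sketched as follows: writing b = b⁺ − b⁻ with b⁺, b⁻ ≥ 0 turns the L1 penalty into a linear term over nonnegative variables. The sketch below solves this bound-constrained form with `scipy.optimize.minimize` (L-BFGS-B), not with the paper's own quadratic-penalty or augmented-regression algorithms.

```python
import numpy as np
from scipy.optimize import minimize

def lasso_via_nonneg(X, y, lam):
    # Solve min_b 0.5*||y - X b||^2 + lam*||b||_1 by writing
    # b = b_plus - b_minus with b_plus, b_minus >= 0, so the L1
    # norm becomes the linear term lam * sum(theta) over
    # nonnegative variables theta = [b_plus; b_minus].
    n, p = X.shape
    Z = np.hstack([X, -X])  # augmented design for the split variables

    def obj(theta):
        r = y - Z @ theta
        return 0.5 * r @ r + lam * theta.sum()

    def grad(theta):
        return -Z.T @ (y - Z @ theta) + lam

    res = minimize(obj, np.zeros(2 * p), jac=grad, method="L-BFGS-B",
                   bounds=[(0, None)] * (2 * p))
    theta = res.x
    return theta[:p] - theta[p:]
```

With an orthonormal design the lasso solution is soft thresholding, which gives a simple correctness check: for `X = I`, `y = [3, -2, 0.5, 1]`, and `lam = 1`, the solver returns approximately `[2, -1, 0, 0]`.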