Results 1 - 5 of 5
Learning a Product of Experts with Elitist Lasso
"... Discriminative models such as logistic regression profit from the ability to incorporate arbitrary rich features; however, complex dependencies among overlapping features can often result in weight undertraining. One popular method that attempts to mitigate this problem is logarithmic opinion pools ..."
Abstract
- Add to MetaCart
the weighting between them — through the use of a mixed ℓ2ℓ1 norm as previously seen in elitist lasso. Unlike its more popular sibling ℓ1ℓ2 norm (used in group lasso), which seeks feature sparsity at the group-level, ℓ2ℓ1 norm encourages sparsity within feature groups. We demonstrate how this property can
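The contrast between the two penalties is easy to state in code. A minimal sketch, assuming NumPy and an illustrative grouping of a flat weight vector; note that conventions for the subscript order of mixed norms vary across papers:

```python
import numpy as np

def elitist_penalty(w, groups):
    # l1 inside each group, then squared/l2 across groups:
    # coefficients compete within a group -> within-group sparsity.
    return sum(np.sum(np.abs(w[g])) ** 2 for g in groups)

def group_lasso_penalty(w, groups):
    # l2 inside each group, then l1 across groups:
    # whole groups switch on or off -> group-level sparsity.
    return sum(np.linalg.norm(w[g]) for g in groups)

# Hypothetical example: six weights split into two feature groups.
w = np.array([0.9, 0.1, 0.0, 0.5, -0.5, 0.2])
groups = [np.arange(0, 3), np.arange(3, 6)]
print(elitist_penalty(w, groups), group_lasso_penalty(w, groups))
```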
Social Sparsity! Neighborhood Systems Enrich Structured Shrinkage Operators
IEEE Trans. Signal Processing, 2013
"... Abstract—Sparse and structured signal expansions on dictionaries can be obtained through explicit modeling in the coefficient domain. The originality of the present article lies in the construction and the study of generalized shrinkage operators, whose goal is to identify structured significance ma ..."
Abstract
-
Cited by 8 (2 self)
- Add to MetaCart
maps and give rise to structured thresholding. These generalize Group Lasso and the previously introduced Elitist Lasso by introducing more flexibility in the coefficient domain modeling, and lead to the notion of social sparsity. The proposed operators are studied theoretically and embedded
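In the orthogonal-basis case the elitist-lasso estimate reduces to coordinate-wise soft-thresholding with a data-dependent threshold per group. Below is a minimal sketch of that shrinkage, assuming the closed form for the penalty (λ/2)(Σᵢ|xᵢ|)² on a single group as derived by Kowalski and Torrésani; the function name and test values are illustrative:

```python
import numpy as np

def elitist_prox(y, lam):
    """Shrinkage for one group under (lam/2) * (sum_i |x_i|)^2, the
    l1-within-group part of elitist lasso (orthogonal-basis case).
    Illustrative sketch of the Kowalski-Torrésani closed form."""
    a = np.abs(y)
    a_sorted = np.sort(a)[::-1]          # magnitudes, largest first
    s = np.cumsum(a_sorted)              # partial sums over the top M entries
    M = np.arange(1, a.size + 1)
    tau = lam * s / (1.0 + lam * M)      # candidate thresholds per M
    keep = a_sorted > tau                # which M are self-consistent
    if not keep.any():
        return np.zeros_like(y)
    t = tau[np.nonzero(keep)[0].max()]   # threshold for the largest valid M
    return np.sign(y) * np.maximum(a - t, 0.0)

print(elitist_prox(np.array([3.0, 1.0, -0.5]), lam=0.5))  # -> [2. 0. 0.]
```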
Structured Sparsity: from Mixed Norms to Structured Shrinkage
SPARS'09 - Signal Processing with Adaptive Sparse Structured Representations, 2009 (author manuscript)
"... Abstract—Sparse and structured signal expansions on dictionaries can be obtained through explicit modeling in the coefficient domain. The originality of the present contribution lies in the construction and the study of generalized shrinkage operators, whose goal is to identify structured significan ..."
Abstract
- Add to MetaCart
significance maps. These generalize Group LASSO and the previously introduced Elitist LASSO by introducing more flexibility in the coefficient domain modeling. We study experimentally the performances of corresponding shrinkage operators in terms of significance map estimation in the orthogonal basis case. We
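For comparison, the Group LASSO operator that these papers generalize is plain block soft-thresholding in the same orthogonal setting. A short sketch, with group layout and test values as assumptions:

```python
import numpy as np

def group_soft_threshold(y, lam):
    """Prox of lam * ||x||_2 on one coefficient group: shrink the whole
    block toward zero, killing it entirely when its l2 norm is <= lam."""
    norm = np.linalg.norm(y)
    if norm <= lam:
        return np.zeros_like(y)
    return (1.0 - lam / norm) * y

print(group_soft_threshold(np.array([3.0, 4.0]), lam=1.0))  # scales by 0.8
```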
A Finite Newton Algorithm for Non-degenerate Piecewise Linear Systems
"... We investigate Newton-type optimization methods for solving piecewise linear systems (PLS) with non-degenerate coefficient matrix. Such systems arise, for example, from the numerical solution of linear complementarity problem which is useful to model several learning and optimization problems. In th ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
to be at least linear before termination. We emphasize the applications of our method to modeling, from a novel perspective of PLS, several statistical learning problems such as elitist Lasso, non-negative least squares and support vector machines. Numerical results on synthetic and benchmark data sets
Non-degenerate Piecewise Linear Systems: A Finite Newton Algorithm and Applications in Machine Learning
"... We investigate Newton-type optimization methods for solving piecewise linear systems (PLSs) with non-degenerate coefficient matrix. Such systems arise, for example, from the numerical solution of the linear complementarity problem, which is useful to model several learning and optimization problems. ... The rate of convergence is shown to be at least linear before termination. We emphasize the applications of our method in modeling, from a novel perspective of PLSs, some statistical learning problems such as box constrained least squares, elitist Lasso (Kowalski & Torrésani, 2008) and support vector machines ..."
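Both versions of this paper cast problems such as non-negative least squares as piecewise linear systems. A minimal sketch of that viewpoint, assuming the standard optimality system min(x, AᵀAx - Aᵀb) = 0 and a plain active-set semismooth-Newton step; this illustrates the PLS formulation, not the authors' exact algorithm:

```python
import numpy as np

def nnls_pls_newton(A, b, max_iter=50, tol=1e-10):
    """Solve min ||Ax - b||^2 s.t. x >= 0 via the piecewise linear
    system min(x, A^T A x - A^T b) = 0, with a simple semismooth-Newton
    / active-set iteration. Illustrative sketch only."""
    G, c = A.T @ A, A.T @ b
    x = np.zeros(G.shape[0])
    for _ in range(max_iter):
        grad = G @ x - c
        if np.abs(np.minimum(x, grad)).max() < tol:
            break                        # PLS residual vanished: done
        free = x > grad                  # generalized-Jacobian choice at kinks
        x = np.zeros_like(x)
        if free.any():                   # Newton step: solve on the free set,
            x[free] = np.linalg.solve(G[np.ix_(free, free)], c[free])
    return x                             # clamped coordinates stay at zero

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
print(nnls_pls_newton(A, b))
```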