Results 1 - 8 of 8
Robust Subspace Clustering, 2013
"... Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demo ..."
Abstract
-
Cited by 22 (1 self)
- Add to MetaCart
(Show Context)
Subspace clustering refers to the task of finding a multi-subspace representation that best fits a collection of points taken from a high-dimensional space. This paper introduces an algorithm inspired by sparse subspace clustering (SSC) [17] to cluster noisy data, and develops some novel theory demonstrating its correctness. In particular, the theory uses ideas from geometric functional analysis to show that the algorithm can accurately recover the underlying subspaces under minimal requirements on their orientation, and on the number of samples per subspace. Synthetic as well as real data experiments complement our theoretical study, illustrating our approach and demonstrating its effectiveness.
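For orientation, a minimal sketch of the Lasso-based self-representation pipeline that SSC-style methods apply to noisy data: each point is regressed on all remaining points with an ℓ1 penalty, and the magnitudes of the resulting coefficients define an affinity graph that is fed to spectral clustering. This is an illustration under stated assumptions, not the authors' reference implementation; the scikit-learn calls and the value of lam are choices made here.

```python
# Sketch of noisy sparse subspace clustering via per-point Lasso self-representation.
# Assumptions: scikit-learn is available; lam is an illustrative regularization level.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def lasso_ssc(X, n_clusters, lam=0.05):
    """X: (n_samples, n_features) array; returns one cluster label per sample."""
    n = X.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        # Express point i as a sparse combination of all the other points.
        mask = np.arange(n) != i
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(X[mask].T, X[i])          # columns of the design are the other points
        C[i, mask] = lasso.coef_
    W = np.abs(C) + np.abs(C).T             # symmetric affinity from the representation
    return SpectralClustering(n_clusters=n_clusters,
                              affinity='precomputed').fit_predict(W)
```

In practice lam would be tuned to the noise level, which is exactly the regime the paper's theory addresses.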
Intersecting singularities for multi-structured estimation
"... We address the problem of designing a convex nonsmooth regularizer encouraging multiple structural effects simultaneously. Focusing on the inference of sparse and low-rank matrices we suggest a new complexity index and a convex penalty approximating it. The new penalty term can be written as the tra ..."
Abstract
-
Cited by 6 (1 self)
- Add to MetaCart
We address the problem of designing a convex nonsmooth regularizer encouraging multiple structural effects simultaneously. Focusing on the inference of sparse and low-rank matrices we suggest a new complexity index and a convex penalty approximating it. The new penalty term can be written as the trace norm of a linear function of the matrix. By analyzing theoretical properties of this family of regularizers we come up with oracle inequalities and compressed sensing results ensuring the quality of our regularized estimator. We also provide algorithms and supporting numerical experiments.
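To make the stated form of the penalty concrete, the snippet below evaluates a trace (nuclear) norm of a linear function of a matrix. The block-stacking map used here is a hypothetical placeholder; the paper's specific linear map is not reproduced.

```python
# Generic illustration of a penalty of the form "trace norm of a linear function of B".
# The map L(B) = [B, mu*B] is a placeholder, not the construction from the paper.
import numpy as np

def trace_norm_of_linear_map(B, mu=1.0):
    L_B = np.hstack([B, mu * B])            # a (hypothetical) linear function of B
    return np.linalg.norm(L_B, ord='nuc')   # nuclear norm = sum of singular values
```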
Learning heteroscedastic models by convex programming under group sparsity - Proc. of the International Conference on Machine Learning, 2013
"... Popular sparse estimation methods based on ℓ1-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter. This constitutes a major obstacle in applying these methods in several frameworks—such as t ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
Popular sparse estimation methods based on ℓ1-relaxation, such as the Lasso and the Dantzig selector, require the knowledge of the variance of the noise in order to properly tune the regularization parameter. This constitutes a major obstacle in applying these methods in several frameworks—such as time series, random fields, inverse problems—for which the noise is rarely homoscedastic and its level is hard to know in advance. In this paper, we propose a new approach to the joint estimation of the conditional mean and the conditional variance in a high-dimensional (auto-)regression setting. An attractive feature of the proposed estimator is that it is efficiently computable even for very large-scale problems by solving a second-order cone program (SOCP). We present theoretical analysis and numerical results assessing the performance of the proposed procedure.
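As a rough illustration of why such estimators are SOCP-solvable, the sketch below sets up a group-sparse least-squares problem in cvxpy; conic solvers handle the sum of group ℓ2-norms through second-order cone constraints. This is not the paper's joint mean/variance estimator; the group structure and the weight lam are placeholders.

```python
# Group-sparse regression as a conic program (cvxpy). Illustrative only; not the
# heteroscedastic mean/variance estimator proposed in the paper.
import cvxpy as cp

def group_sparse_regression(X, y, groups, lam=0.1):
    """groups: list of (start, stop) index ranges partitioning the coefficients."""
    n, p = X.shape
    beta = cp.Variable(p)
    penalty = sum(cp.norm(beta[a:b], 2) for a, b in groups)   # sum of group l2-norms
    problem = cp.Problem(cp.Minimize(cp.sum_squares(y - X @ beta) / n + lam * penalty))
    problem.solve()   # the group-norm terms become second-order cone constraints
    return beta.value

# Example call: beta_hat = group_sparse_regression(X, y, groups=[(0, 3), (3, 6), (6, 9)])
```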
Clustering Consistent Sparse Subspace Clustering, 2015
"... Subspace clustering is the problem of clustering data points into a union of low-dimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision, image pro-cessing and has been drawing avid attention in machine learning and statistics recently. I ..."
Abstract
-
Cited by 1 (1 self)
- Add to MetaCart
(Show Context)
Subspace clustering is the problem of clustering data points into a union of low-dimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision and image processing, and it has been drawing avid attention in machine learning and statistics recently. In particular, a line ...
Graph Connectivity in Noisy Sparse Subspace Clustering
"... Abstract Subspace clustering is the problem of clustering data points into a union of lowdimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision, image processing and machine learning. A line of recent work ..."
Abstract
- Add to MetaCart
(Show Context)
Subspace clustering is the problem of clustering data points into a union of low-dimensional linear/affine subspaces. It is the mathematical abstraction of many important problems in computer vision, image processing and machine learning. A line of recent work ...
A Lava Attack on the Recovery of Sums of Dense and Sparse Signals
"... We consider a generalization of these two basic models, termed here a "sparse+dense" model, in which the signal is given by the sum of a sparse signal and a dense signal. Such a structure poses problems for traditional sparse estimators, such as the lasso, and for traditional dense estima ..."
Abstract
- Add to MetaCart
(Show Context)
We consider a generalization of the two basic signal models, sparse and dense, termed here a "sparse+dense" model, in which the signal is given by the sum of a sparse signal and a dense signal. Such a structure poses problems for traditional sparse estimators, such as the lasso, and for traditional dense estimation methods, such as ridge estimation. We propose a new penalization-based method, called lava, which is computationally efficient. With suitable choices of penalty parameters, the proposed method strictly dominates both lasso and ridge. We derive analytic expressions for the finite-sample risk function of the lava estimator in the Gaussian sequence model. We also provide a deviation bound for the prediction risk in the Gaussian regression model with fixed design. In both cases, we provide Stein's unbiased estimator for lava's prediction risk. A simulation example compares the performance of lava to lasso, ridge, and elastic net in a regression example using feasible, data-dependent penalty parameters and illustrates lava's improved performance relative to these benchmarks.
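A minimal sketch of a lava-style "sparse + dense" fit follows: the coefficient vector is split into an ℓ1-penalized sparse part and a ridge-penalized dense part, and the fitted signal is their sum. The penalty levels lam1 and lam2 are illustrative, not the feasible data-dependent choices studied in the paper.

```python
# Sparse + dense ("lava"-style) penalized regression, written as a convex program in cvxpy.
# lam1 and lam2 are illustrative penalty levels chosen here, not the paper's tuning rules.
import cvxpy as cp

def lava_regression(X, y, lam1=0.1, lam2=1.0):
    n, p = X.shape
    beta_sparse = cp.Variable(p)    # sparse component, l1-penalized (lasso-like)
    beta_dense = cp.Variable(p)     # dense component, squared-l2-penalized (ridge-like)
    resid = y - X @ (beta_sparse + beta_dense)
    objective = cp.Minimize(cp.sum_squares(resid) / n
                            + lam1 * cp.norm(beta_sparse, 1)
                            + lam2 * cp.sum_squares(beta_dense))
    cp.Problem(objective).solve()
    return beta_sparse.value + beta_dense.value
```

Setting lam1 very large forces the sparse part to zero and recovers a ridge-like fit, while setting lam2 very large recovers a lasso-like fit, which is the sense in which the combined penalty interpolates between the two benchmarks.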