Results 11–20 of 28
Signal recovery in unions of subspaces with applications to compressive imaging
2012
Abstract

Cited by 4 (0 self)
In applications ranging from communications to genetics, signals can be modeled as lying in a union of subspaces. Under this model, signal coefficients that lie in certain subspaces are active or inactive together. The potential subspaces are known in advance, but the particular set of subspaces that are active (i.e., in the signal support) must be learned from measurements. We show that exploiting knowledge of subspaces can further reduce the number of measurements required for exact signal recovery, and derive universal bounds for the number of measurements needed. The bound is universal in the sense that it only depends on the number of subspaces under consideration and their orientation relative to each other. The particulars of the subspaces (e.g., compositions, dimensions, extents, overlaps, etc.) do not affect the results we obtain. In the process, we derive sample complexity bounds for the special case of the group lasso with overlapping groups (the latent group lasso), which is used in a variety of applications. Finally, we also show that wavelet transform coefficients of images can be modeled as lying in groups, and hence can be efficiently recovered using group lasso methods.
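Computationally, the group-lasso recovery this abstract analyzes reduces to block soft-thresholding, the proximal operator of the group penalty. A minimal NumPy sketch with illustrative disjoint groups and regularization weight (the latent group lasso handles overlapping groups by duplicating coefficients, which reduces to this disjoint case):

```python
import numpy as np

def group_soft_threshold(x, groups, lam):
    """Block soft-thresholding: the proximal operator of the
    group-lasso penalty lam * sum_g ||x_g||_2. `groups` is a
    list of index arrays, assumed disjoint here."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        norm = np.linalg.norm(x[g])
        if norm > lam:
            # group survives, shrunk toward zero as a block
            out[g] = (1.0 - lam / norm) * x[g]
        # otherwise the whole group is zeroed out together
    return out

x = np.array([3.0, 4.0, 0.1, -0.1])
groups = [np.array([0, 1]), np.array([2, 3])]
x_thresh = group_soft_threshold(x, groups, lam=1.0)
```

The first group (norm 5) is shrunk but kept; the second (norm ≈ 0.14) falls below the threshold and is removed entirely, which is exactly the "active or inactive together" behavior described above.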
Sparse overlapping sets lasso for multitask learning and fMRI data analysis
Neural Information Processing Systems, 2013
Abstract

Cited by 4 (2 self)
Multitask learning can be effective when features useful in one task are also useful for other tasks, and the group lasso is a standard method for selecting a common subset of features. In this paper, we are interested in a less restrictive form of multitask learning, wherein (1) the available features can be organized into subsets according to a notion of similarity and (2) features useful in one task are similar, but not necessarily identical, to the features best suited for other tasks. The main contribution of this paper is a new procedure called Sparse Overlapping Sets (SOS) lasso, a convex program that automatically selects similar features for related learning tasks. Error bounds are derived for SOSlasso and its consistency is established for squared error loss. In particular, SOSlasso is motivated by multi-subject fMRI studies in which functional activity is classified using brain voxels as features. Experiments with real and synthetic data demonstrate the advantages of SOSlasso compared to the lasso and group lasso.
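The SOS penalty described above combines group-level and within-group sparsity. A simplified sketch, assuming a non-latent penalty of the form alpha·‖x_g‖₂ + beta·‖x_g‖₁ per group (the paper's actual SOSlasso uses a latent decomposition over overlapping groups; alpha and beta are illustrative weights, not the paper's):

```python
import numpy as np

def soslasso_penalty(x, groups, alpha, beta):
    """Evaluate a sparse-overlapping-sets style penalty: for each
    (possibly overlapping) group g, alpha*||x_g||_2 + beta*||x_g||_1.
    The group term favors few active groups; the l1 term favors
    few nonzero features inside each active group."""
    return sum(alpha * np.linalg.norm(x[g]) + beta * np.abs(x[g]).sum()
               for g in groups)

x = np.array([1.0, 0.0, 2.0, 0.0])
groups = [np.array([0, 1]), np.array([1, 2, 3])]
penalty = soslasso_penalty(x, groups, alpha=1.0, beta=0.5)
```

Note the groups overlap at index 1, which in the simplified form above just means that coefficient is penalized in both groups.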
On the Fundamental Limits of Recovering Tree Sparse Vectors from Noisy Linear Measurements
2013
A Convex Exemplar-based Approach to MAD-Bayes Dirichlet Process Mixture Models
Abstract

Cited by 2 (1 self)
MAD-Bayes (MAP-based Asymptotic Derivations) has been recently proposed as a general technique to derive scalable algorithms for Bayesian nonparametric models. However, the combinatorial nature of objective functions derived from MAD-Bayes results in hard optimization problems, for which current practice employs heuristic algorithms analogous to k-means to find local minima. In this paper, we consider the exemplar-based version of the MAD-Bayes formulation for DP and Hierarchical DP (HDP) mixture models. We show that an exemplar-based MAD-Bayes formulation can be relaxed to a convex, structurally regularized program that, under cluster-separation conditions, shares the same optimal solution as its combinatorial counterpart. An algorithm based on the Alternating Direction Method of Multipliers (ADMM) is then proposed to solve this program. In our experiments on several benchmark data sets, the proposed method finds the optimal solution of the combinatorial problem and significantly improves on existing methods in terms of the exemplar-based objective.
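The ADMM solver the abstract mentions alternates a smooth subproblem, a proximal step, and a dual update. A generic sketch of that loop, shown here on the lasso rather than the paper's MAD-Bayes relaxation (rho, the iteration count, and the synthetic data are illustrative):

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """Generic ADMM loop, illustrated on
    min 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
    The same x-update / z-update / dual-update pattern applies
    to other convex structured-regularized programs."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))            # smooth subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox: soft-threshold
        u = u + x - z                                            # dual update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10)
x_true[:3] = [1.0, -2.0, 1.5]
b = A @ x_true
x_hat = admm_lasso(A, b, lam=0.1)
```

On this noiseless synthetic problem the recovered `x_hat` matches the sparse `x_true` up to the small shrinkage bias introduced by the ℓ1 penalty.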
Principal component gene set enrichment (PCGSE). arXiv e-prints, arXiv:1403.5148
2014
Regularization for Design
2014
"... An algorithmic bridge is starting to be established between sparse reconstruction theory and distributed control theory. For example, `1regularization has been suggested as an appropriate means for codesigning sparse feedback gains and consensus topologies subject to performance bounds. In recent ..."
Abstract

Cited by 1 (1 self)
An algorithmic bridge is starting to be established between sparse reconstruction theory and distributed control theory. For example, ℓ1-regularization has been suggested as an appropriate means for co-designing sparse feedback gains and consensus topologies subject to performance bounds. In recent work, we showed that ideas from atomic norm minimization could be used to simultaneously co-design a distributed optimal controller and the communication delay structure on which it is to be implemented. While promising and successful, these results lack the theoretical support that their sparse reconstruction counterparts enjoy – as things stand, these methods are at best viewed as principled heuristics. In this paper, we describe theoretical connections between sparse reconstruction and systems design by developing approximation bounds for control co-design problems via convex optimization. We also give a concrete example of a design problem for which our approach provides approximation guarantees.
COMPRESSED SENSING FOR BLOCK-SPARSE SMOOTH SIGNALS
Abstract

Cited by 1 (0 self)
We present reconstruction algorithms for smooth signals with block sparsity from their compressed measurements. We tackle the issue of varying group size via the group-sparse least absolute shrinkage and selection operator (LASSO) as well as via latent group LASSO regularizations. We achieve smoothness in the signal via fusion. We develop low-complexity solvers for our proposed formulations through the alternating direction method of multipliers. Index Terms — Compressed sensing, block sparsity, smoothness, signal reconstruction
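The formulation described above has three ingredients: a data-fidelity term, a block-sparsity penalty, and a fusion term for smoothness. A sketch evaluating such an objective, assuming the plain (non-latent) group penalty and illustrative weights `lam_g` and `lam_f`:

```python
import numpy as np

def blocksparse_smooth_objective(x, Phi, y, groups, lam_g, lam_f):
    """Objective combining the abstract's two regularizers:
    a group-lasso term over the given blocks for block sparsity,
    and a fusion (total-variation) term for smoothness."""
    fit = 0.5 * np.sum((y - Phi @ x) ** 2)           # data fidelity
    grp = sum(np.linalg.norm(x[g]) for g in groups)  # block sparsity
    fuse = np.sum(np.abs(np.diff(x)))                # smoothness via fusion
    return fit + lam_g * grp + lam_f * fuse

Phi = np.eye(4)
x = np.array([1.0, 1.0, 0.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3])]
obj = blocksparse_smooth_objective(x, Phi, x, groups, lam_g=1.0, lam_f=1.0)
# with zero residual: sqrt(2) from the active block, 1 from the single jump
```

In an ADMM solver, each of these three terms would get its own splitting variable, with the group and fusion terms handled by their respective proximal operators.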
MINES ParisTech
Abstract
Thesis submitted to obtain the degree of Doctor awarded by the École nationale supérieure des mines de Paris, specialty « Mathématique et Automatique » (Mathematics and Control), presented and publicly defended by Philippe MOULIN
A Dual-Augmented Block Minimization Framework for Learning with Limited Memory
Abstract
In the past few years, several techniques have been proposed for training linear Support Vector Machines (SVMs) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method was used to balance the cost spent on I/O and computation. In this paper, we consider the more general setting of regularized Empirical Risk Minimization (ERM) when the data cannot fit into memory. In particular, we generalize the existing block minimization framework based on strong duality and the Augmented Lagrangian technique to achieve global convergence for general convex ERM. The block minimization framework is flexible in the sense that, given a solver working under sufficient memory, one can integrate it with the framework to obtain a solver globally convergent under limited-memory conditions. We conduct experiments on ℓ1-regularized classification and regression problems to corroborate our convergence theory and compare the proposed framework to algorithms adapted from online and distributed settings, showing the superiority of the proposed approach on data ten times larger than the memory capacity.