Results 1 - 7 of 7
Convexity in source separation: Models, geometry, and algorithms
Abstract

Cited by 7 (6 self)
Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work. Fundamentals of demixing: The most basic model for mixed signals is a superposition model, where we observe a mixed …
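The superposition model mentioned at the end of this abstract (observe a mixture of structured components and pull them apart with convex penalties) can be illustrated with a small demixing experiment. Everything below is an illustrative assumption, not taken from the paper: a sparse spike train mixed with a low-frequency cosine component, separated by alternating exact minimization of a convex objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Hypothetical setup: observe z = x0 + y0, where x0 is sparse in the
# standard basis and y0 lives in a low-frequency cosine subspace U.
t = np.arange(n)
U = np.stack([np.cos(np.pi * k * (t + 0.5) / n) for k in range(5)], axis=1)
U /= np.linalg.norm(U, axis=0)            # orthonormal columns (DCT-II type)

x0 = np.zeros(n)
x0[[10, 50, 90]] = 5.0                    # sparse component
y0 = U @ rng.normal(size=5)               # smooth component
z = x0 + y0                               # observed mixture

# Demix by minimizing 0.5*||z - x - U a||^2 + lam*||x||_1 over (x, a),
# alternating exact minimization in each block (the problem is jointly convex).
lam = 0.1
x = np.zeros(n)
for _ in range(100):
    a = U.T @ (z - x)                     # least-squares fit of the smooth part
    r = z - U @ a
    x = np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)   # soft-threshold residual
```

Because the spike basis is incoherent with the low-frequency subspace, the alternating scheme recovers the support and amplitudes of the sparse component up to the soft-threshold bias.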
A primal-dual algorithmic framework for constrained convex minimization
, 2014
Abstract

Cited by 3 (2 self)
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers (ADMM) as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates for all of them.
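As a concrete instance of one of the special cases named in the abstract, here is a minimal ADMM sketch for the lasso written in constrained form, min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x = z. The problem sizes, penalty rho, and variable names are illustrative assumptions, not the paper's framework itself:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 40, 60
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
lam, rho = 0.5, 1.0

# ADMM for: min 0.5*||A x - b||^2 + lam*||z||_1  s.t.  x = z
x = np.zeros(n)
zv = np.zeros(n)
u = np.zeros(n)                                # scaled dual variable
Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached solve (illustrative)
Atb = A.T @ b
for _ in range(500):
    x = Q @ (Atb + rho * (zv - u))             # quadratic x-update
    v = x + u
    zv = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # prox of lam*||.||_1
    u += x - zv                                # dual ascent on the constraint x = z
```

At convergence the primal residual x - z vanishes and both copies agree with the lasso solution.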
An inexact proximal path-following algorithm for constrained convex minimization
, 2014
Abstract

Cited by 3 (3 self)
Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constraint set affords a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be efficiently solved without the need to lift the problem dimensions, as in the disciplined convex optimization approach. We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior-point methods. As a byproduct, we describe how our framework can obtain points on the Pareto frontier of regularized problems with self-concordant objectives in a tuning-free fashion.
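The path-following idea (a proximal step on the nonsmooth objective combined with a self-concordant barrier for the constraint) can be sketched on a one-dimensional toy problem whose central path is known in closed form. This toy instance, its fixed step size, and the homotopy schedule are assumptions for illustration only, not the paper's algorithm:

```python
# Toy proximal path-following: minimize phi(x) = |x - 2| subject to x <= 1,
# using the self-concordant barrier b(x) = -log(1 - x).
# The central path minimizes t*phi(x) + b(x); since x < 1 < 2 it is x*(t) = 1 - 1/t.
gamma = 0.01                            # fixed, conservative step size
x = 0.0
for t in [1.0, 2.0, 4.0, 8.0]:          # increase t along the path, warm-starting
    for _ in range(500):
        v = x - gamma / (1.0 - x)       # forward (gradient) step on the barrier
        # backward step: prox of gamma*t*|.-2|; since v < 2 - gamma*t here,
        # the prox simply shifts v toward 2 by gamma*t
        x = v + gamma * t
```

After the last stage the iterate tracks the analytic central-path point x*(8) = 0.875, while the barrier keeps every iterate strictly feasible.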
An optimal first-order primal-dual gap reduction framework for constrained convex optimization
A preconditioned forward-backward approach with application to large-scale nonconvex spectral unmixing problems
Abstract
Many inverse problems require minimizing a criterion that is the sum of a not necessarily smooth function and a Lipschitz-differentiable function. Such an optimization problem can be solved with the Forward-Backward algorithm, which can be accelerated through the use of variable metrics derived from the Majorize-Minimize principle. The convergence of this approach is guaranteed provided that the criterion satisfies some additional technical conditions. Combining this method with an alternating minimization strategy will be shown to allow us to address a broad class of optimization problems involving large-size signals. An application example to a nonconvex spectral unmixing problem will be presented.
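A minimal sketch of the variable-metric Forward-Backward idea, assuming a convex l1-regularized least-squares criterion and a diagonal Majorize-Minimize metric (the row-sum diagonal majorant of the Hessian, which dominates it by diagonal dominance); the paper targets a nonconvex unmixing criterion, so this is an illustrative stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 50, 80
A = rng.normal(size=(m, n))
b = rng.normal(size=m)
lam = 0.5

# Variable-metric (preconditioned) forward-backward for
#   min 0.5*||A x - b||^2 + lam*||x||_1.
H = A.T @ A
d = np.sum(np.abs(H), axis=1)            # diagonal MM majorant: diag(d) >= H
x = np.zeros(n)

def F(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

vals = [F(x)]
for _ in range(200):
    g = H @ x - A.T @ b                  # gradient of the smooth term
    v = x - g / d                        # forward step in the metric diag(d)
    x = np.sign(v) * np.maximum(np.abs(v) - lam / d, 0.0)  # backward (prox) step
    vals.append(F(x))
```

Because diag(d) majorizes the Hessian, every iteration decreases the criterion, which is the property the Majorize-Minimize construction buys.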
SIAM J. Imaging Sciences: Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements
Abstract
The measurement matrix employed in compressive sensing typically cannot be known precisely a priori, and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest as well when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop this idea with application to Poisson systems. A collaborative maximum likelihood algorithm and an alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.
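For the signal-recovery half of the problem (measurement matrix assumed known, so no joint calibration), a projected-gradient sketch on the negative Poisson log-likelihood might look as follows. The sizes, the backtracking rule, and the nonnegativity floor eps are illustrative assumptions, not the paper's algorithms:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 60, 30
A = rng.uniform(0.1, 1.0, size=(m, n))   # nonnegative sensing matrix
x0 = rng.uniform(0.0, 2.0, size=n)
y = rng.poisson(A @ x0)                  # Poisson-distributed measurements

# Negative Poisson log-likelihood (up to constants), with x kept positive:
#   f(x) = sum_i [ (A x)_i - y_i * log((A x)_i) ]
eps = 1e-6
def f(x):
    Ax = A @ x
    return float(np.sum(Ax - y * np.log(Ax)))

x = np.ones(n)
f0 = f(x)
gamma = 0.1
for _ in range(300):
    g = A.T @ (1.0 - y / (A @ x))        # gradient of f
    while True:                          # backtracking: the gradient is not
        xn = np.maximum(x - gamma * g, eps)  # globally Lipschitz, so enforce descent
        if f(xn) <= f(x):
            break
        gamma *= 0.5
    x = xn
```

The backtracking loop compensates for the fact that the Poisson likelihood has no global Lipschitz gradient constant, which is also why the paper's analysis needs dedicated concentration results.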
Composite convex minimization involving self-concordant-like cost functions
Abstract
The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the self-concordant notion. While a wide variety of important applications feature the self-concordant-like property, this concept has heretofore remained unexploited in convex optimization. To address this, we develop a variable metric framework for minimizing the sum of a "simple" convex function and a self-concordant-like function. We introduce a new analytic step-size selection procedure and prove that the basic gradient algorithm has improved convergence guarantees as compared to "fast" algorithms that rely on the Lipschitz gradient property. Our numerical tests with real data sets show that the practice indeed follows the theory.
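Logistic loss is a standard example of a self-concordant-like function. The sketch below applies plain proximal gradient with the usual Lipschitz step 1/L to an l1-regularized logistic problem; it does not reproduce the paper's analytic step-size rule, and all sizes and names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 100, 20
A = rng.normal(size=(m, n))
ylab = np.where(rng.normal(size=m) > 0, 1.0, -1.0)   # +/-1 labels
lam = 0.1

# Composite problem with a self-concordant-like smooth term:
#   min  sum_i log(1 + exp(-y_i * a_i . x))  +  lam * ||x||_1
def smooth(x):
    return float(np.sum(np.logaddexp(0.0, -ylab * (A @ x))))

L = 0.25 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(ylab * (A @ x)))   # sigmoid of -y_i * a_i . x
    g = -A.T @ (ylab * p)                      # gradient of the logistic loss
    v = x - g / L
    x = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # prox of lam*||.||_1
```

The worst-case step 1/L is what the abstract's analytic step-size rule is designed to improve upon; here it serves only as a correct baseline.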