Results 1 - 7 of 7
1. Convexity in source separation: Models, geometry, and algorithms
"... Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learnin ..."
Abstract
-
Cited by 7 (6 self)
- Add to MetaCart
Abstract: Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve them efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work. Fundamentals of demixing: the most basic model for mixed signals is a superposition model, where we observe a mixed …
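To make the superposition model concrete, here is a minimal sketch of convex demixing that separates a spike train from a DCT-sparse component; the ℓ1-plus-ℓ1 objective, the DCT basis, λ, and the step size are illustrative assumptions, not the article's specific formulation:

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256
x0 = np.zeros(n); x0[rng.choice(n, 5, replace=False)] = 3.0   # sparse spikes
c0 = np.zeros(n); c0[rng.choice(n, 5, replace=False)] = 3.0   # sparse DCT coefficients
z = x0 + idct(c0, norm='ortho')                               # observed superposition

# ISTA on  min_{x,c} 0.5*||z - x - D c||^2 + lam*(||x||_1 + ||c||_1),
# with D the orthonormal inverse DCT; ||[I, D]||^2 = 2, so step = 1/2 is safe.
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x, c, lam, step = np.zeros(n), np.zeros(n), 0.1, 0.5
for _ in range(500):
    r = z - x - idct(c, norm='ortho')          # residual of the superposition model
    x = soft(x + step * r, step * lam)
    c = soft(c + step * dct(r, norm='ortho'), step * lam)

print("true spikes:", np.sort(np.nonzero(x0)[0]))
print("recovered:  ", np.sort(np.nonzero(np.abs(x) > 0.5)[0]))
```

Because spikes and DCT atoms are mutually incoherent, this toy instance typically recovers both components; the harder signal and mixing models are the subject of the article.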
2. A primal-dual algorithmic framework for constrained convex minimization (2014)
"... Abstract We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective ..."
Abstract
-
Cited by 3 (2 self)
- Add to MetaCart
(Show Context)
Abstract: We present a primal-dual algorithmic framework for obtaining approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, augmented Lagrangian methods, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on the primal objective residual as well as the primal feasibility gap of the iterates in all of these settings.
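As a hedged reading of the template (the notation below is assumed rather than quoted from the paper), the prototypical problem and its smoothed dual can be written as:

```latex
% Hedged sketch of the template; f, A, b, X, p, and x_c are assumed notation.
\begin{align*}
  F^\star &= \min_{x \in \mathcal{X}} \bigl\{ f(x) : Ax = b \bigr\}
    && \text{(prototypical constrained problem)} \\
  d(\lambda) &= \min_{x \in \mathcal{X}} \; f(x) + \langle \lambda, Ax - b \rangle
    && \text{(Lagrangian dual function)} \\
  d_\gamma(\lambda) &= \min_{x \in \mathcal{X}} \; f(x) + \langle \lambda, Ax - b \rangle + \gamma\, p(x)
    && \text{(smoothed dual; prox-function } p \text{ centered at } x_c\text{)}
\end{align*}
```

On this reading, different choices of the prox-function p and its center x_c would recover the decomposition, augmented Lagrangian, and ADMM special cases the abstract mentions.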
3. An inexact proximal path-following algorithm for constrained convex minimization (2014)
"... Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constra ..."
Abstract
-
Cited by 3 (3 self)
- Add to MetaCart
(Show Context)
Abstract: Many scientific and engineering applications feature nonsmooth convex minimization problems over convex sets. In this paper, we address an important instance of this broad class, where we assume that the nonsmooth objective is equipped with a tractable proximity operator and that the convex constraint set affords a self-concordant barrier. We provide a new joint treatment of proximal and self-concordant barrier concepts and illustrate that such problems can be solved efficiently, without the need to lift the problem dimensions as in the disciplined convex optimization approach. We propose an inexact path-following algorithmic framework and theoretically characterize the worst-case analytical complexity of this framework when the proximal subproblems are solved inexactly. To show the merits of our framework, we apply its instances to both synthetic and real-world applications, where it shows advantages over standard interior-point methods. As a by-product, we describe how our framework can obtain points on the Pareto frontier of regularized problems with self-concordant objectives in a tuning-free fashion.
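A hedged sketch of the setting (f, Ω, φ, and the path parameterization below are assumed, not quoted): the framework can be read as tracking the central path of a barrier-penalized composite problem,

```latex
% Assumed notation: f nonsmooth with tractable prox, phi a self-concordant
% barrier for the convex constraint set Omega.
\begin{align*}
  \min_{x \in \Omega} f(x), \qquad
  x^\star(t) \;=\; \operatorname*{arg\,min}_{x} \bigl\{\, t\, f(x) + \varphi(x) \,\bigr\},
  \qquad t \uparrow \infty,
\end{align*}
```

where each point on the path is computed only inexactly via proximal steps, and x*(t) approaches a solution of the constrained problem as t grows.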
4. An optimal first-order primal-dual gap reduction framework for constrained convex optimization
"... ..."
5. A preconditioned forward-backward approach with application to large-scale nonconvex spectral unmixing problems
"... ABSTRACT Many inverse problems require to minimize a criterion being the sum of a non necessarily smooth function and a Lipschitz differentiable function. Such an optimization problem can be solved with the Forward-Backward algorithm which can be accelerated thanks to the use of variable metrics de ..."
Abstract
- Add to MetaCart
Abstract: Many inverse problems require minimizing a criterion that is the sum of a not necessarily smooth function and a Lipschitz-differentiable function. Such optimization problems can be solved with the Forward-Backward algorithm, which can be accelerated using variable metrics derived from the Majorize-Minimize principle. The convergence of this approach is guaranteed provided the criterion satisfies some additional technical conditions. Combining this method with an alternating minimization strategy is shown to address a broad class of optimization problems involving large-size signals. An application to a nonconvex spectral unmixing problem is presented.
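As a rough illustration (not the paper's algorithm: the ℓ1-plus-nonnegativity test problem and the diagonal Majorize-Minimize metric below are assumptions), a preconditioned forward-backward iteration looks like this in Python:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 80, 120
H = np.abs(rng.standard_normal((m, n)))          # nonnegative mixing matrix (assumed)
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = rng.random(8) + 0.5
y = H @ x_true

# Diagonal metric A_i = (|H|^T |H| 1)_i satisfies diag(A) >= H^T H in the
# Loewner order (a standard Majorize-Minimize majorant of the Hessian).
A = np.abs(H).T @ (np.abs(H) @ np.ones(m))
lam, x = 0.05, np.zeros(n)
for _ in range(500):
    grad = H.T @ (H @ x - y)                     # forward (gradient) step ...
    z = x - grad / A                             # ... preconditioned by diag(A)
    x = np.maximum(z - lam / A, 0.0)             # prox of lam*||.||_1 + x >= 0 in metric diag(A)

print("objective:", 0.5 * np.sum((H @ x - y) ** 2) + lam * np.sum(x))
```

A diagonal metric keeps the preconditioned prox closed-form: in the metric diag(A), the ℓ1 prox is still coordinate-wise soft-thresholding, with per-coordinate thresholds λ/A_i.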
6. Signal Recovery and System Calibration from Multiple Compressive Poisson Measurements (SIAM Journal on Imaging Sciences)
"... Abstract. The measurement matrix employed in compressive sensing typically cannot be known precisely a priori, and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is of interest ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: The measurement matrix employed in compressive sensing typically cannot be known precisely a priori and must be estimated via calibration. One may take multiple compressive measurements, from which the measurement matrix and underlying signals may be estimated jointly. This is also of interest when the measurement matrix may change as a function of the details of what is measured. This problem has been considered recently for Gaussian measurement noise, and here we develop the idea for Poisson systems. A collaborative maximum likelihood algorithm and an alternating proximal gradient algorithm are proposed, and associated theoretical performance guarantees are established based on newly derived concentration-of-measure results. A Bayesian model is then introduced to improve flexibility and generality. Connections between the maximum likelihood methods and the Bayesian model are developed, and example results are presented for a real compressive X-ray imaging system.
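A hedged sketch of one building block, a proximal-gradient step on the negative Poisson log-likelihood with an ℓ1 prior (the paper's full method also estimates the matrix A and comes with guarantees; the dark-current offset, step size, and λ below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, dark = 200, 20, 1.0                        # dark current keeps rates positive
A = rng.uniform(0.0, 0.2, (m, n))                # nonnegative sensing matrix (assumed)
support = rng.choice(n, 3, replace=False)
x_true = np.zeros(n); x_true[support] = 30.0
y = rng.poisson(A @ x_true + dark)               # Poisson photon counts

# Proximal gradient on  sum_i [(Ax+dark)_i - y_i*log((Ax+dark)_i)] + lam*||x||_1,  x >= 0.
lam, step, x = 0.1, 0.01, np.ones(n)
for _ in range(3000):
    rate = A @ x + dark
    grad = A.T @ (1.0 - y / rate)                # gradient of the negative log-likelihood
    x = np.maximum(x - step * grad - step * lam, 0.0)   # prox of lam*||.||_1 + x >= 0

print("true support:", np.sort(support))
print("top-3 found: ", np.sort(np.argsort(x)[-3:]))
```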
7. Composite convex minimization involving self-concordant-like cost functions
"... Abstract. The self-concordant-like property of a smooth convex func-tion is a new analytical structure that generalizes the self-concordant notion. While a wide variety of important applications feature the self-concordant-like property, this concept has heretofore remained unex-ploited in convex op ..."
Abstract
- Add to MetaCart
Abstract: The self-concordant-like property of a smooth convex function is a new analytical structure that generalizes the self-concordant notion. While a wide variety of important applications feature the self-concordant-like property, this concept has heretofore remained unexploited in convex optimization. To this end, we develop a variable-metric framework for minimizing the sum of a “simple” convex function and a self-concordant-like function. We introduce a new analytic step-size selection procedure and prove that the basic gradient algorithm has improved convergence guarantees as compared to “fast” algorithms that rely on the Lipschitz gradient property. Our numerical tests with real data sets show that practice indeed follows the theory.
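For a concrete instance of the composite model, here is a minimal sketch using the logistic loss, a standard example of a self-concordant-like function, plus an ℓ1 term; the paper's analytic step-size rule is replaced by plain backtracking, so this shows the problem class rather than the authors' procedure:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 200, 50
A = rng.standard_normal((m, n))
w_true = np.zeros(n); w_true[:5] = 2.0
b = np.where(rng.random(m) < 1.0 / (1.0 + np.exp(-A @ w_true)), 1.0, -1.0)

def f(w):                                        # logistic loss (self-concordant-like)
    return np.mean(np.logaddexp(0.0, -b * (A @ w)))

def grad_f(w):
    s = 1.0 / (1.0 + np.exp(b * (A @ w)))        # sigmoid(-b * Aw)
    return -(A.T @ (b * s)) / m

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
lam, w, t = 0.01, np.zeros(n), 1.0
for _ in range(200):
    g = grad_f(w)
    while True:                                   # backtracking, not the analytic rule
        w_new = soft(w - t * g, t * lam)
        d = w_new - w
        if f(w_new) <= f(w) + g @ d + (d @ d) / (2 * t) or t < 1e-8:
            break
        t *= 0.5
    w = w_new

print("nonzero coefficients:", np.nonzero(np.abs(w) > 1e-4)[0])
```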