Results 1-10 of 94
Sparse Reconstruction by Separable Approximation
, 2007
Cited by 373 (38 self)
Abstract:
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularizer. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex, sparsity-inducing function. We propose iterative methods in which each step is an optimization subproblem involving a separable quadratic term (diagonal Hessian) plus the original sparsity-inducing term. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2-ℓ1 case, our approach handles other problems, e.g., ℓp regularizers with p ≠ 1, or group-separable (GS) regularizers. Experiments with CS problems show that our approach provides state-of-the-art speed for the standard ℓ2-ℓ1 problem, and is also efficient on problems with GS regularizers. Index Terms: sparse approximation, compressed sensing, optimization, reconstruction.
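In the ℓ1 case the separable quadratic subproblem described in this abstract has a closed-form solution (elementwise soft-thresholding), which is what makes each step cheap. A minimal Python sketch of such an iteration for the ℓ2-ℓ1 objective, with a fixed step in place of the paper's adaptive choice of the diagonal Hessian; function and variable names here are ours, not the paper's:

```python
import numpy as np

def soft_threshold(x, t):
    """Closed-form minimizer of 0.5*(z - x)^2 + t*|z|, applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_recon(A, f, tau, iters=200):
    """Sketch of the separable-approximation iteration for
    min_u 0.5*||A u - f||_2^2 + tau*||u||_1.
    Uses a fixed step 1/alpha; the paper selects alpha adaptively
    (e.g. by a Barzilai-Borwein rule), which is omitted here."""
    alpha = np.linalg.norm(A, 2) ** 2  # bound on the gradient's Lipschitz constant
    u = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ u - f)  # gradient of the smooth quadratic term
        # separable subproblem: quadratic (diagonal Hessian) + l1 term
        u = soft_threshold(u - grad / alpha, tau / alpha)
    return u
```

Each iteration costs one multiplication by A and one by its transpose, plus an elementwise shrinkage, which is why the framework suits problems where these products are available via fast transforms.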
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. IMAGING SCI
, 2008
Cited by 84 (15 self)
Abstract:
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ ℝⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈ℝⁿ} μ‖u‖₁ + ½‖Au − fᵏ‖₂² for a given matrix A and vector fᵏ. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results demonstrating that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for the many compressed sensing applications in which matrix-vector operations involving A and Aᵀ can be computed by fast transforms. Using a fast fixed-point continuation solver based solely on such operations for the unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
Efficient schemes for total variation minimization under constraints in image processing
, 2007
"... ..."
(Show Context)
Phase unwrapping via graph cuts
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2007
"... Phase unwrapping is the inference of absolute phase from modulo2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are firstorder Markov random fields. We provide an exact energy minimization algorithm, whenever the correspo ..."
Abstract

Cited by 42 (9 self)
 Add to MetaCart
Phase unwrapping is the inference of absolute phase from modulo2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are firstorder Markov random fields. We provide an exact energy minimization algorithm, whenever the corresponding clique potentials are convex, namely for the phase unwrapping classical L p norm, with p ≥ 1. Its complexity is KT(n, 3n), where K is the length of the absolute phase domain measured in 2π units and T (n, m) is the complexity of a maxflow computation in a graph with n nodes and m edges. For nonconvex clique potentials, often used owing to their discontinuity preserving ability, we face an NPhard problem for which we devise an approximate solution. Both algorithms solve integer optimization problems, by computing a sequence of binary optimizations, each one solved by graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase unwrapping maxflow/mincut. A set of experimental results illustrates the effectiveness of the proposed approach and its competitiveness in comparison with stateoftheart phase unwrapping algorithms.
Some firstorder algorithms for total variation based image restoration
, 2009
"... This paper deals with firstorder numerical schemes for image restoration. These schemes rely on a dualitybased algorithm proposed in 1979 by Bermùdez and Moreno. This is an old and forgotten algorithm that is revealed wider than recent schemes (such as the Chambolle projection algorithm) and able ..."
Abstract

Cited by 38 (2 self)
 Add to MetaCart
This paper deals with firstorder numerical schemes for image restoration. These schemes rely on a dualitybased algorithm proposed in 1979 by Bermùdez and Moreno. This is an old and forgotten algorithm that is revealed wider than recent schemes (such as the Chambolle projection algorithm) and able to improve contemporary schemes. Total variation regularization and smoothed total variation regularization are investigated. Algorithms are presented for such regularizations in image restoration. We prove the convergence of all the proposed schemes. We illustrate our study with numerous numerical examples. We make some comparisons with a class of efficient algorithms (proved to be optimal among firstorder numerical schemes) recently introduced by Y. Nesterov.
Analysis and generalizations of the linearized Bregman method
 SIAM J. IMAGING SCI
, 2010
"... This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem ..."
Abstract

Cited by 36 (9 self)
 Add to MetaCart
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smooth parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradientbased optimization techniques such as line search, Barzilai–Borwein, limited memory BFGS (LBFGS), nonlinear conjugate gradient, and Nesterov’s methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using LBFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a socalled kicking technique.
Parametric Maximum Flow Algorithms for Fast Total Variation Minimization
, 2007
"... This report studies the global minimization of discretized total variation (TV) energies with an L¹ or L² fidelity term using parametric maximum flow algorithms. The TVL² model [36], also known as the RudinOsherFatemi (ROF) model is suitable for restoring images contaminated by Gaussian noise, wh ..."
Abstract

Cited by 34 (4 self)
 Add to MetaCart
(Show Context)
This report studies the global minimization of discretized total variation (TV) energies with an L¹ or L² fidelity term using parametric maximum flow algorithms. The TVL² model [36], also known as the RudinOsherFatemi (ROF) model is suitable for restoring images contaminated by Gaussian noise, while the TVL¹ model [2, 29, 7, 42] is able to remove impulsive noise from greyscale images, and perform multi scale decompositions of them. For largescale applications such as those in medical image (pre)processing, we propose here fast and memoryefficient algorithms, based on a parametric maximum flow algorithm [19] and the minimum st cut representation of TVbased energy functions [26, 17]. Preliminary numerical results on largescale twodimensional CT and threedimensional Brain MRI images that illustrate the effectiveness of our approaches are presented.
Efficient Minimization Method for a Generalized Total Variation Functional
, 2009
"... Replacing the ℓ² data fidelity term of the standard Total Variation (TV) functional with an ℓ¹ data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ¹TV functional have only recently begun to be developed, the fastest of ..."
Abstract

Cited by 31 (5 self)
 Add to MetaCart
Replacing the ℓ² data fidelity term of the standard Total Variation (TV) functional with an ℓ¹ data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ¹TV functional have only recently begun to be developed, the fastest of which exploit graph representations, and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ²TV and ℓ¹TV as special cases, and is capable of solving more general inverse problems than denoising (e.g. deconvolution). This algorithm is competitive with the graphbased methods in the denoising case, and is the fastest algorithm of which we are aware for general inverse problems involving a nontrivial forward linear operator.
Fast and exact solution of total variation models on the gpu
 In CVPR Workshop on Visual Computer Vision on GPUs
, 2008
"... This paper discusses fast and accurate methods to solve Total Variation (TV) models on the graphics processing unit (GPU). We review two prominent models incorporating TV regularization and present different algorithms to solve these models. We mainly concentrate on variational techniques, i.e. algo ..."
Abstract

Cited by 19 (5 self)
 Add to MetaCart
(Show Context)
This paper discusses fast and accurate methods to solve Total Variation (TV) models on the graphics processing unit (GPU). We review two prominent models incorporating TV regularization and present different algorithms to solve these models. We mainly concentrate on variational techniques, i.e. algorithms which aim at solving the Euler Lagrange equations associated with the variational model. We then show that particularly these algorithms can be effectively accelerated by implementing them on parallel architectures such as GPUs. For comparison we chose a stateoftheart method based on discrete optimization techniques. We then present the results of a rigorous performance evaluation including 2D and 3D problems. As a main result we show that the our GPU based algorithms clearly outperform discrete optimization techniques in both speed and maximum problem size. 1.