Results 1-10 of 92
Sparse Reconstruction by Separable Approximation
, 2008
Abstract

Cited by 373 (36 self)
Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (ℓ2) error term added to a sparsity-inducing (usually ℓ1) regularization term. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (which is therefore separable in the unknowns) plus the original sparsity-inducing regularizer. Our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. In addition to solving the standard ℓ2 − ℓ1 case, our framework yields an efficient solution technique for other regularizers, such as an ℓ∞-norm regularizer and group-separable (GS) regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard ℓ2 − ℓ1 problem, as well as being efficient on problems with other separable regularization terms.
Bregman iterative algorithms for ℓ1-minimization with applications to compressed sensing
 SIAM J. Imaging Sci
, 2008
Abstract

Cited by 86 (16 self)
We propose simple and extremely efficient methods for solving the basis pursuit problem min{‖u‖₁ : Au = f, u ∈ ℝⁿ}, which is used in compressed sensing. Our methods are based on Bregman iterative regularization, and they give a very accurate solution after solving only a very small number of instances of the unconstrained problem min_{u∈ℝⁿ} μ‖u‖₁ + ½‖Au − fᵏ‖₂² for given matrix A and vector fᵏ. We show analytically that this iterative approach yields exact solutions in a finite number of steps and present numerical results that demonstrate that as few as two to six iterations are sufficient in most cases. Our approach is especially useful for many compressed sensing applications where matrix-vector operations involving A and Aᵀ can be computed by fast transforms. Utilizing a fast fixed-point continuation solver that is based solely on such operations for solving the above unconstrained subproblem, we were able to quickly solve huge instances of compressed sensing problems on a standard PC.
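The outer loop of the method is short: solve the unconstrained subproblem, then add the unexplained residual back into the data vector. A hedged sketch (the inner solver here is plain ISTA rather than the fixed-point continuation solver the abstract mentions; μ and the problem sizes are illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def solve_subproblem(A, f, mu, n_iter=300):
    """Approximately solve min_u mu*||u||_1 + 0.5*||Au - f||_2^2 (plain ISTA)."""
    alpha = np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u - A.T @ (A @ u - f) / alpha, mu / alpha)
    return u

def bregman_basis_pursuit(A, f, mu, n_outer=6):
    """Bregman iteration for min{||u||_1 : Au = f}: each outer pass adds
    the residual f - A u back into the data term of the subproblem."""
    fk = f.copy()
    u = np.zeros(A.shape[1])
    for _ in range(n_outer):
        u = solve_subproblem(A, fk, mu)
        fk = fk + (f - A @ u)       # Bregman update of the data vector
    return u

# Tiny demo: recover a 2-sparse vector from 20 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[7, 31]] = [2.0, -3.0]
f = A @ x_true
u = bregman_basis_pursuit(A, f, mu=1.0)
```

The outer loop removes the shrinkage bias of each subproblem solution, which is why only a handful of outer iterations are needed.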
Efficient schemes for total variation minimization under constraints in image processing
, 2007
Phase unwrapping via graph cuts
 IEEE Transactions on Image Processing
, 2007
Abstract

Cited by 42 (9 self)
Phase unwrapping is the inference of absolute phase from modulo-2π phase. This paper introduces a new energy minimization framework for phase unwrapping. The considered objective functions are first-order Markov random fields. We provide an exact energy minimization algorithm whenever the corresponding clique potentials are convex, namely for the classical phase unwrapping Lᵖ norm, with p ≥ 1. Its complexity is K T(n, 3n), where K is the length of the absolute phase domain measured in 2π units and T(n, m) is the complexity of a max-flow computation in a graph with n nodes and m edges. For nonconvex clique potentials, often used owing to their discontinuity-preserving ability, we face an NP-hard problem for which we devise an approximate solution. Both algorithms solve integer optimization problems by computing a sequence of binary optimizations, each one solved by graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase unwrapping max-flow/min-cut. A set of experimental results illustrates the effectiveness of the proposed approach and its competitiveness in comparison with state-of-the-art phase unwrapping algorithms. Index Terms: phase unwrapping, energy minimization, integer optimization, submodularity, graph cuts, image
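PUMA's graph-cut machinery is not reproduced here, but the underlying problem is easy to state in code: the observation is the absolute phase taken modulo 2π, and unwrapping must infer the lost 2π multiples. A minimal 1D illustration (in one dimension, when consecutive samples differ by less than π, integrating the wrapped differences recovers the absolute phase; numpy.unwrap implements this classical scheme, far simpler than the paper's energy-minimization approach):

```python
import numpy as np

# True absolute phase: a smooth ramp spanning several multiples of 2*pi.
phi = np.linspace(0.0, 12.0, 200)

# Observed modulo-2*pi ("wrapped") phase, with values in (-pi, pi].
wrapped = np.angle(np.exp(1j * phi))

# 1D unwrapping: integrate wrapped first differences (Itoh's method).
# Exact here because consecutive samples differ by less than pi.
recovered = np.unwrap(wrapped)
```

In 2D, noise and aliasing make such path-following ambiguous, which is what motivates global energy-minimization formulations like the one above.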
Analysis and generalizations of the linearized Bregman method
 SIAM J. Imaging Sci
, 2010
Abstract

Cited by 39 (10 self)
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai–Borwein, limited-memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov’s methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique. Key words: Bregman, linearized Bregman, compressed sensing, ℓ1-minimization, basis pursuit
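The equivalence at the heart of the analysis can be checked numerically: the linearized Bregman iteration on (u, v) and gradient ascent on a dual variable y (ascent on the concave dual, i.e. descent on its negative) produce the same iterates, with v = Aᵀy throughout. A small sketch under illustrative choices (μ, δ, the step size, and the random problem below are assumptions for the demo, not the paper's settings):

```python
import numpy as np

def shrink(x, mu):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
f = A @ rng.standard_normal(50)
mu = 5.0
delta = 1.0 / np.linalg.norm(A, 2) ** 2    # illustrative step parameter

# Linearized Bregman iteration for
#   min { mu*||u||_1 + ||u||^2/(2*delta) : Au = f }.
v = np.zeros(50)
# Equivalent dual view: gradient step on y, with v = A^T y.
y = np.zeros(20)
for _ in range(200):
    u = delta * shrink(v, mu)
    v = v + A.T @ (f - A @ u)              # primal bookkeeping
    u_dual = delta * shrink(A.T @ y, mu)
    y = y + (f - A @ u_dual)               # gradient step in the dual
```

Because the v-update is linear in the residual, v and Aᵀy stay equal (up to rounding) for all iterations, which is exactly what licenses swapping in line search, Barzilai–Borwein, or L-BFGS for the plain gradient step.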
Parametric Maximum Flow Algorithms for Fast Total Variation Minimization
, 2007
Abstract

Cited by 33 (4 self)
This report studies the global minimization of discretized total variation (TV) energies with an L¹ or L² fidelity term using parametric maximum flow algorithms. The TV-L² model [36], also known as the Rudin-Osher-Fatemi (ROF) model, is suitable for restoring images contaminated by Gaussian noise, while the TV-L¹ model [2, 29, 7, 42] is able to remove impulsive noise from greyscale images and perform multiscale decompositions of them. For large-scale applications such as those in medical image (pre)processing, we propose here fast and memory-efficient algorithms, based on a parametric maximum flow algorithm [19] and the minimum s-t cut representation of TV-based energy functions [26, 17]. Preliminary numerical results on large-scale two-dimensional CT and three-dimensional brain MRI images that illustrate the effectiveness of our approaches are presented.
Efficient Minimization Method for a Generalized Total Variation Functional
, 2009
Abstract

Cited by 31 (5 self)
Replacing the ℓ² data fidelity term of the standard Total Variation (TV) functional with an ℓ¹ data fidelity term has been found to offer a number of theoretical and practical benefits. Efficient algorithms for minimizing this ℓ¹-TV functional have only recently begun to be developed, the fastest of which exploit graph representations and are restricted to the denoising problem. We describe an alternative approach that minimizes a generalized TV functional, including both ℓ²-TV and ℓ¹-TV as special cases, and is capable of solving more general inverse problems than denoising (e.g. deconvolution). This algorithm is competitive with the graph-based methods in the denoising case, and is the fastest algorithm of which we are aware for general inverse problems involving a nontrivial forward linear operator.
Fast and exact solution of total variation models on the GPU
 In CVPR Workshop on Visual Computer Vision on GPUs
, 2008
Abstract

Cited by 19 (5 self)
This paper discusses fast and accurate methods to solve Total Variation (TV) models on the graphics processing unit (GPU). We review two prominent models incorporating TV regularization and present different algorithms to solve these models. We mainly concentrate on variational techniques, i.e. algorithms which aim at solving the Euler-Lagrange equations associated with the variational model. We then show that these algorithms in particular can be effectively accelerated by implementing them on parallel architectures such as GPUs. For comparison we chose a state-of-the-art method based on discrete optimization techniques. We then present the results of a rigorous performance evaluation including 2D and 3D problems. As a main result we show that our GPU-based algorithms clearly outperform discrete optimization techniques in both speed and maximum problem size.
Nonlocal Unsupervised Variational Image Segmentation Models
, 2008
Abstract

Cited by 17 (4 self)
New image denoising models based on nonlocal image information have recently been introduced in the literature. These so-called "nonlocal" denoising models provide excellent results because they can denoise smooth regions and/or textured regions simultaneously, unlike standard denoising models. Standard variational models such as Total Variation-based models are defined to work in a small local neighborhood, which is enough to denoise smooth regions. However, textures are not local in nature and require semilocal/nonlocal information to be denoised efficiently. Several papers have introduced nonlocal filters and nonlocal variational models for image denoising. Yet, few studies have been done to develop unsupervised image segmentation models based on nonlocal information. This will be the goal of this paper. We define and study three unsupervised nonlocal segmentation models. These models will be based on the continuous global minimization approach for image segmentation recently introduced in [10, 6]. The energy of [10, 6] is a first-order energy composed of the weighted Total Variation norm and a linear term. The first proposed nonlocal segmentation model will extend