
## Linear convergence of iterative soft-thresholding

Venue: J. Fourier Anal. Appl.

Citations: 52 (13 self)

### Citations

1284 | Least angle regression
- Efron, Hastie, et al.
- 2004
Citation Context: ... of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for equivalent formulations of the problem [8, 12, 14, 16, 21, 22, 27, 33], both in the infinite-dimensional setting as well as for finitely many dimensions, but mostly for the finite-dimensional case. An often-used, simple but apparently slow algorithm is the iterative soft...

731 | An iterative thresholding algorithm for linear inverse problems with a sparsity constraint
- Daubechies, Defrise, et al.
- 2004
Citation Context: ...stead of considering the linear equation, a regularized problem is posed for which the solution is stable with respect to noise. A common approach is to regularize by minimizing a Tikhonov functional [7, 15, 28]. A special class of these regularizations has been of recent interest, namely of the type min_{u ∈ ℓ²} ½‖Ku − f‖² + Σ_{k=1}^∞ αk|uk|. (1.1) ...
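The functional (1.1) is exactly what iterative soft-thresholding minimizes. A minimal finite-dimensional sketch (the names `ist` and `soft_threshold` and the test data are illustrative, not from the paper):

```python
import numpy as np

def soft_threshold(x, t):
    """Componentwise shrinkage S_t(x) = sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ist(K, f, alpha, iters=500):
    """Iterative soft-thresholding for min_u 0.5*||Ku - f||^2 + sum_k alpha_k*|u_k|:
    a gradient step on the quadratic part followed by shrinkage."""
    s = 1.0 / np.linalg.norm(K, 2) ** 2   # step size below 2/L with L = ||K||^2
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        u = soft_threshold(u - s * K.T @ (K @ u - f), s * alpha)
    return u
```

For K equal to the identity the iteration reduces to a single shrinkage of f, which makes the fixed point easy to verify by hand.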

705 | Convex Analysis and Variational Problems
- Ekeland, Temam
- 1976
Citation Context: ...From the minimization problem min_{v ∈ H} ½‖v − u + sF′(u)‖² + sΦ(v) it immediately follows that the subdifferential inclusion u − sF′(u) − v ∈ s∂Φ(v) is satisfied, see [13, 29] for an introduction to convex analysis and subdifferential calculus. This can be rewritten to 〈u − sF′(u) − v, w − v〉 ≤ s(Φ(w) − Φ(v)) for all w ∈ H, while rearranging and dividing by s proves ...
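For the concrete choice Φ = α‖·‖₁ the minimizer of the proximal problem above is the soft-thresholding of u − sF′(u), and the subdifferential inclusion can be checked componentwise. A sketch under that assumption (function names illustrative):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def inclusion_holds(u, grad, s, alpha, tol=1e-12):
    """For Phi = alpha*||.||_1, v = S_{s*alpha}(u - s*grad) minimizes
    0.5*||v - u + s*grad||^2 + s*Phi(v); verify u - s*grad - v in s*dPhi(v):
    where v_k = 0 we need |w_k| <= s*alpha, elsewhere w_k = s*alpha*sign(v_k)."""
    v = soft_threshold(u - s * grad, s * alpha)
    w = u - s * grad - v
    zero, nonzero = (v == 0), (v != 0)
    return (np.all(np.abs(w[zero]) <= s * alpha + tol)
            and np.allclose(w[nonzero], s * alpha * np.sign(v[nonzero])))
```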

519 | Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- Figueiredo, Nowak, et al.
- 2007
Citation Context: ... of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for equivalent formulations of the problem [8, 12, 14, 16, 21, 22, 27, 33], both in the infinite-dimensional setting as well as for finitely many dimensions, but mostly for the finite-dimensional case. An often-used, simple but apparently slow algorithm is the iterative soft...

495 | Signal recovery by proximal forward-backward splitting
- Combettes, Wajs
- 2005
Citation Context: ...imate for the distance to a minimizer to evaluate the fidelity of the outcome of the computations. The convergence proofs in the infinite-dimensional case presented in [7], and for generalizations in [5], however, do not imply a-priori estimates and do not inherently give any rate of convergence, although, in many cases, linear convergence can be deduced quite easily from the fact that iterative thre...

450 | Stable recovery of sparse overcomplete representations in the presence of noise
- Donoho, Elad, et al.
- 2006
Citation Context: ... + Σ_{k=1}^∞ αk|〈u, ψk〉| can be rephrased as (1.1) with K = AB. Indeed, solutions of this type of problem admit only finitely many non-zero coefficients and often coincide with the sparsest solution possible [10, 18, 20]. Unfortunately, the numerical solution of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for ...

244 | Global uniqueness for a two-dimensional inverse boundary value problem
- Nachman
- 1996
Citation Context: ...mples are the Radon transform [25], solution operators for partial differential equations, e.g. in heat conduction problems [6] or inverse boundary value problems like electrical impedance tomography [26]. The combination with a synthesis operator B for an orthonormal basis does not influence the injectivity. Moreover, the restriction to orthonormal bases can be r...

239 | A new approach to variable selection in least squares problems
- Osborne, Presnell, et al.
- 2000
Citation Context: ... of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for equivalent formulations of the problem [8, 12, 14, 16, 21, 22, 27, 33], both in the infinite-dimensional setting as well as for finitely many dimensions, but mostly for the finite-dimensional case. An often-used, simple but apparently slow algorithm is the iterative soft...

233 | Monotone Operators in Banach Space and Nonlinear Partial Differential Equations
- Showalter
- 1996
Citation Context: ... u* which satisfies the optimality condition w* = −K*(Ku* − f) ∈ ∂Φ(u*). As one knows from convex analysis, this can also be formulated pointwise, and Asplund's characterization of ∂|·| (see [31], Proposition II.8.6) leads to |w*k|* ≤ αk if u*k = 0, and |w*k|* = αk with w*k · u*k = αk|u*k| if u*k ≠ 0, where w*k · u*k denotes the usual inner product of w*k and u*k in R^N. Now, on...

135 | Distributed compressed sensing
- Baron, Wakin, et al.
- 2005
Citation Context: ... method leads to a special case of the so-called proximal forward-backward splitting method which amounts to the iteration u^{n+1} = u^n + t_n ( J_{s_n}( u^n − s_n(F′(u^n) + b^n) ) + a^n − u^n ), where t_n ∈ [0,1] and {a^n}, {b^n} are absolutely summable sequences in H. In [5], it is shown that this method converges strongly to a minimizer under appropriate conditions. There exist, however, no general statem...
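The relaxed iteration can be sketched for the concrete case F(u) = ½‖Ku − f‖² and Φ = α‖·‖₁, where J_s is soft-thresholding; the error sequences a^n, b^n are set to zero and all names are illustrative:

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def relaxed_fbs(K, f, alpha, t=0.5, iters=200):
    """Proximal forward-backward splitting with constant relaxation t in [0, 1]:
    u^{n+1} = u^n + t * (J_s(u^n - s*F'(u^n)) - u^n), errors a^n = b^n = 0."""
    s = 1.0 / np.linalg.norm(K, 2) ** 2
    u = np.zeros(K.shape[1])
    for _ in range(iters):
        prox = soft_threshold(u - s * K.T @ (K @ u - f), s * alpha)
        u = u + t * (prox - u)   # relaxation step
    return u
```

With t = 1 this reduces to plain iterative soft-thresholding; smaller t damps the update.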

133 | Recovery of exact sparse representations in the presence of bounded noise
- Fuchs
Citation Context: ... + Σ_{k=1}^∞ αk|〈u, ψk〉| can be rephrased as (1.1) with K = AB. Indeed, solutions of this type of problem admit only finitely many non-zero coefficients and often coincide with the sparsest solution possible [10, 18, 20]. Unfortunately, the numerical solution of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for ...

104 | Regularization of Inverse Problems, volume 375 of Mathematics and its Applications
- Engl, Hanke, et al.
- 1996
Citation Context: ...stead of considering the linear equation, a regularized problem is posed for which the solution is stable with respect to noise. A common approach is to regularize by minimizing a Tikhonov functional [7, 15, 28]. A special class of these regularizations has been of recent interest, namely of the type min_{u ∈ ℓ²} ½‖Ku − f‖² + Σ_{k=1}^∞ αk|uk|. (1.1) ...

98 | Highly sparse representations from dictionaries are unique and independent of the sparseness measure
- Gribonval, Nielsen
- 2007
Citation Context: ... + Σ_{k=1}^∞ αk|〈u, ψk〉| can be rephrased as (1.1) with K = AB. Indeed, solutions of this type of problem admit only finitely many non-zero coefficients and often coincide with the sparsest solution possible [10, 18, 20]. Unfortunately, the numerical solution of the above (non-smooth) minimization problem is not straightforward. There is a vast amount of literature dealing with efficient computational algorithms for ...

90 | Convex programming in Hilbert space
- Goldstein
- 1964
Citation Context: ...on of steepest descent, i.e. the negative gradient. In constrained optimization, the gradient is often projected back to the feasible set, yielding the well-known gradient projection method [11, 19, 23]. In the following, a step of generalization is introduced: The method is extended to deal with sums of smooth and nonsmooth functionals, and covers in particular constrained smooth minimiza...
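The classical gradient projection method described here can be sketched generically; `project` stands for the projection onto the feasible set C, and all names and test data are illustrative:

```python
import numpy as np

def projected_gradient(grad_F, project, u0, s, iters=100):
    """Gradient projection: u^{n+1} = P_C(u^n - s * F'(u^n))."""
    u = u0
    for _ in range(iters):
        u = project(u - s * grad_F(u))
    return u

# Example: min 0.5*||u - c||^2 over the box [0, 1]^3;
# the solution is c clipped componentwise to the box.
c = np.array([2.0, -1.0, 0.3])
u_opt = projected_gradient(lambda u: u - c, lambda v: np.clip(v, 0.0, 1.0),
                           np.zeros(3), s=1.0)
```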

80 | Accelerated projected gradient method for linear inverse problems with sparsity constraints
- Daubechies, Fornasier, et al.
- 2008
Citation Context: ...(u^{n+1} − u^n)‖² ≤ 2(1 − δ)‖u^{n+1} − u^n‖², (4.5), is sufficient for the above, since one has the estimate (3.3). Together with the boundedness 0 < s ≤ s_n, this is exactly the step-size 'Condition (B)' in [8]. Hence, as can be easily seen, the choice gives sufficient descent in order to apply Proposition 2. Consequently, linear convergence remains valid for such an 'accelerated' iterative soft-thresholdin...
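Assuming the truncated left-hand side of (4.5) is ‖K(u^{n+1} − u^n)‖² (plausible given that estimate (3.3) is invoked, but an assumption here), the condition is straightforward to test for a proposed iterate; a sketch with illustrative names:

```python
import numpy as np

def condition_b(K, u_new, u_old, delta):
    """Check ||K(u_new - u_old)||^2 <= 2*(1 - delta)*||u_new - u_old||^2,
    i.e. the descent condition (4.5) / step-size 'Condition (B)'."""
    d = u_new - u_old
    return float(np.dot(K @ d, K @ d)) <= 2.0 * (1.0 - delta) * float(np.dot(d, d))
```

The condition holds automatically whenever ‖K‖² ≤ 2(1 − δ), and can fail for ill-scaled K.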

68 | Coordinate and subspace optimization methods for linear least squares with non-quadratic regularization
- Elad, Matalon, Zibulevsky

46 | A method for large-scale ℓ1-regularized least squares problems with applications in signal processing and statistics
- Kim, Koh, et al.
- 2007

45 | Global and asymptotic convergence rate estimates for a class of projected gradient processes
- Dunn
- 1981
Citation Context: ...on of steepest descent, i.e. the negative gradient. In constrained optimization, the gradient is often projected back to the feasible set, yielding the well-known gradient projection method [11, 19, 23]. In the following, a step of generalization is introduced: The method is extended to deal with sums of smooth and nonsmooth functionals, and covers in particular constrained smooth minimiza...

44 | Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces
- Xu, Roach
- 1991
Citation Context: ...of a norm of a 2-convex Banach space X, i.e. Φ(u) = ‖u‖^p_X with p ∈ ]1,2], which is moreover continuously embedded in H, one can show that ‖v − u*‖²_X ≤ C1 R(v) holds on each bounded set of X, see [34]. Consequently, with j_p = ∂(1/p)‖·‖^p_X denoting the duality mapping with gauge t ↦ t^{p−1}, ‖v − u*‖² ≤ C2‖v − u*‖²_X ≤ C1 C2 ( ‖v‖^p_X − ‖u*‖^p_X − p〈j_p(u*), v − u*〉 ) = cR(v), observing that...

42 | Constrained minimization problems
- Levitin, Poliak
- 1966
Citation Context: ...on of steepest descent, i.e. the negative gradient. In constrained optimization, the gradient is often projected back to the feasible set, yielding the well-known gradient projection method [11, 19, 23]. In the following, a step of generalization is introduced: The method is extended to deal with sums of smooth and nonsmooth functionals, and covers in particular constrained smooth minimiza...

42 | Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
- Lorenz
Citation Context: ...FBI property. This property also plays a role in the performance analysis of Newton methods applied to minimization problems with sparsity constraints [21] and error estimates for ℓ1-regularization [24]. As we have moreover seen, linear convergence can also be obtained whenever we have convergence to a solution with strict sparsity pattern. This result is closely connected with the fact that (1.1), con...

40 | Regularization of ill-posed problems in Banach spaces: convergence rates
- Resmerita
- 2005
Citation Context: ...stead of considering the linear equation, a regularized problem is posed for which the solution is stable with respect to noise. A common approach is to regularize by minimizing a Tikhonov functional [7, 15, 28]. A special class of these regularizations has been of recent interest, namely of the type min_{u ∈ ℓ²} ½‖Ku − f‖² + Σ_{k=1}^∞ αk|uk|. (1.1) ...

36 | A generalized conditional gradient method and its connection to an iterative shrinkage method
- Bredies, Lorenz, et al.
Citation Context: ...ooth minimization problems. The gain is that the iteration (1.2) fits into this generalized framework. Similar to the generalization performed in [4], its main idea is to replace the constraint by a general proper, convex and lower semi-continuous functional Φ which leads, for the gradient projection method, to the successive application of the as...

35 | Iterated hard shrinkage for minimization problems with sparsity constraints
- Bredies, Lorenz
Citation Context: ...ses, linear convergence can be deduced quite easily from the fact that iterative thresholding converges strongly and from the special structure of the algorithm. To the best knowledge of the authors, [3] contains the first results about the convergence of iterative algorithms for linear inverse problems with sparsity constraints in infinite dimensions for which the convergence rate is inherent in the...

31 | A semismooth Newton method for Tikhonov functionals with sparsity constraints
- Griesse, Lorenz

29 | An outline of adaptive wavelet Galerkin methods for Tikhonov regularization of inverse parabolic problems, Recent Development in Theories and Numerics
- Dahlke, Maass
- 2003
Citation Context: ...roperty is natural, since the operators A are often injective. Prominent examples are the Radon transform [25], solution operators for partial differential equations, e.g. in heat conduction problems [6] or inverse boundary value problems like electrical impedance tomography [26]. The combination with a synthesis operator B for an orthonormal basis does not influence the injectivity. Moreover, the restriction to orthonormal bases can be r...

25 | Nonlinear iterative methods for linear ill-posed problems
- Schöpfer, Louis, et al.
Citation Context: ... − u*〉 + Φ(v) − Φ(u*). (3.11) Note that if the subgradient of Φ in u* is unique, R is the Bregman distance of Φ in u*, a notion which is extensively used in the analysis of descent algorithms [2, 30]. Moreover, we make use of the remainder of the Taylor expansion of F, T(v) = F(v) − F(u*) − 〈F′(u*), v − u*〉. (3.12) Remark 5 (On the Bregman distance). In many cases the Bregman-like distanc...
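Both quantities are cheap to compute; for the quadratic fidelity F(u) = ½‖Ku − f‖² the Taylor remainder is exactly ½‖K(v − u*)‖². A sketch (names and test data illustrative):

```python
import numpy as np

def bregman_R(Phi, w_star, u_star, v):
    """Bregman-like distance R(v) = Phi(v) - Phi(u*) - <w*, v - u*>,
    with w* a subgradient of Phi at u*."""
    return Phi(v) - Phi(u_star) - np.dot(w_star, v - u_star)

def taylor_T(K, u_star, v):
    """Taylor remainder of F(u) = 0.5*||Ku - f||^2 at u*: since F is quadratic,
    T(v) = F(v) - F(u*) - <F'(u*), v - u*> = 0.5*||K(v - u*)||^2 (f drops out)."""
    d = K @ (v - u_star)
    return 0.5 * np.dot(d, d)
```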

21 | Bregman monotone optimization algorithms
- Bauschke, Borwein, Combettes
- 2003
Citation Context: ... − u*〉 + Φ(v) − Φ(u*). (3.11) Note that if the subgradient of Φ in u* is unique, R is the Bregman distance of Φ in u*, a notion which is extensively used in the analysis of descent algorithms [2, 30]. Moreover, we make use of the remainder of the Taylor expansion of F, T(v) = F(v) − F(u*) − 〈F′(u*), v − u*〉. (3.12) Remark 5 (On the Bregman distance). In many cases the Bregman-like distanc...

15 | On algorithms for solving least squares problems under an L1 penalty or an L1 constraint
- Turlach
- 2005

14 | An iterative algorithm for nonlinear inverse problems with joint sparsity constraints in vector valued regimes and an application to color image inpainting, Inverse Problems
- Teschke, Ramlau
- 2007
Citation Context: ...ing the broad range of applications. First, we consider the situation of so-called joint sparsity for vector-valued problems, see [1, 17, 32]. The problems considered are set in the Hilbert space (ℓ²)^N for some N ≥ 1, which is interpreted such that for u ∈ (ℓ²)^N the k-th component uk is a vector in R^N. Given a linear and continuous opera...
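For joint sparsity the penalty couples the entries of each block uk ∈ R^N, and the associated shrinkage acts blockwise, zeroing whole blocks at once. A sketch assuming the Euclidean norm on each block (names and data illustrative):

```python
import numpy as np

def block_soft_threshold(U, t):
    """Blockwise shrinkage for joint sparsity: row u_k of U is scaled by
    max(1 - t/||u_k||, 0), so each block either survives (shrunk) or vanishes."""
    norms = np.linalg.norm(U, axis=1, keepdims=True)
    safe = np.where(norms > 0, norms, 1.0)   # avoid 0/0; zero rows stay zero anyway
    return np.maximum(1.0 - t / safe, 0.0) * U
```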

11 | The interior Radon transform
- Maass
Citation Context: ...ith the FBI property). In the context of inverse problems with sparsity constraints, the FBI property is natural, since the operators A are often injective. Prominent examples are the Radon transform [25], solution operators for partial differential equations, e.g. in heat conduction problems [6] or inverse boundary value problems like electrical impedance tomography [26]. The combination with a synth...

11 | Variational Analysis
- Rockafellar, Wets
- 1998
Citation Context: ...From the minimization problem min_{v ∈ H} ½‖v − u + sF′(u)‖² + sΦ(v) it immediately follows that the subdifferential inclusion u − sF′(u) − v ∈ s∂Φ(v) is satisfied, see [13, 29] for an introduction to convex analysis and subdifferential calculus. This can be rewritten to 〈u − sF′(u) − v, w − v〉 ≤ s(Φ(w) − Φ(v)) for all w ∈ H, while rearranging and dividing by s proves ...

7 | Approximate Methods in Optimization Problems
- Demyanov, Rubinov
- 1970
Citation Context: ... closed and convex constraint, yields the classical gradient projection method which is known to converge provided that certain assumptions are fulfilled and a suitable step-size rule has been chosen [9, 11]. In the following, we assume that F is differentiable, F′ is Lipschitz continuous with constant L and usually choose the step-sizes such that 0 < s ≤ s_n ≤ s̄ < 2/L. (2.3) Note that from the trivial c...

4 | Recovery algorithms for vector valued data with joint sparsity constraints
- Fornasier, Rauhut
Citation Context: ...ing the broad range of applications. First, we consider the situation of so-called joint sparsity for vector-valued problems, see [1, 17, 32]. The problems considered are set in the Hilbert space (ℓ²)^N for some N ≥ 1, which is interpreted such that for u ∈ (ℓ²)^N the k-th component uk is a vector in R^N. Given a linear and continuous opera...