Results 1–10 of 21
Convergence rates and source conditions for Tikhonov regularization with sparsity constraints
, 2008
Abstract

Cited by 44 (15 self)
This paper addresses regularization by sparsity constraints by means of weighted ℓp penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2, special attention is paid to convergence rates in norm and to source conditions. As main results it is proven that one obtains a convergence rate of √δ in the 2-norm for 1 < p ≤ 2, and in the 1-norm for p = 1, as soon as the unknown solution is sparse. The case p = 1 requires a special technique in which not only Bregman distances but also a so-called Bregman-Taylor distance has to be employed. For p < 1 only preliminary results are shown. These results indicate that, in contrast to p ≥ 1, the regularizing properties depend on the interplay of the operator and the basis of sparsity. A counterexample for p = 0 shows that regularization need not happen.
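For the p = 1 case of the penalty discussed above, the Tikhonov-type functional ½‖Au − f‖² + α‖u‖₁ is in practice commonly minimized by iterative soft thresholding (ISTA). The following is a minimal generic sketch of that standard scheme, not the construction analyzed in the paper, and with an unweighted ℓ1 penalty for simplicity:

```python
import numpy as np

def soft_threshold(x, tau):
    # Componentwise soft shrinkage: the proximal map of tau * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, f, alpha, n_iter=500):
    # Minimize 0.5*||A u - f||_2^2 + alpha*||u||_1 by iterative soft
    # thresholding with step size 1/L, where L = ||A||_2^2 bounds the
    # Lipschitz constant of the gradient of the fidelity term.
    L = np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        u = soft_threshold(u - (A.T @ (A @ u - f)) / L, alpha / L)
    return u
```

When A is the identity, a single step already returns the soft-thresholded data, which makes visible how the ℓ1 penalty enforces sparsity of the reconstruction.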
Convergence of the Linearized Bregman Iteration for ℓ1norm Minimization
, 2008
Abstract

Cited by 25 (8 self)
One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u ∈ R^n} { ‖u‖₁ : Au = f }. Bregman iteration was used very successfully to solve this problem in [40]. Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which is described in detail with numerical simulations in [35]. A convergence analysis of the smoothed version of this algorithm was given in [11]. The purpose of this paper is to prove that the linearized Bregman iteration proposed in [40] for the basis pursuit problem indeed converges.
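The linearized Bregman iteration discussed in the abstract has a very compact two-line update. This is a generic sketch of the commonly cited form (the step size δ and shrinkage parameter μ are illustrative choices, not taken from the paper); it converges to the minimizer of μ‖u‖₁ + (1/2δ)‖u‖₂² subject to Au = f, which approximates the basis pursuit solution for large μ:

```python
import numpy as np

def shrink(x, mu):
    # Soft shrinkage operator: sign(x) * max(|x| - mu, 0).
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu=5.0, delta=None, n_iter=2000):
    # Linearized Bregman iteration for the (smoothed) basis pursuit
    # problem: alternate a gradient step on the residual in the dual
    # variable v with a shrinkage step producing the primal iterate u.
    if delta is None:
        # Step size chosen so that delta * ||A||_2^2 <= 1 (stability).
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = v + A.T @ (f - A @ u)
        u = delta * shrink(v, mu)
    return u
```

Note that only a matrix-vector product and a componentwise shrinkage are needed per iteration, which is what makes the scheme attractive for large compressed sensing problems.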
Error estimates for general fidelities
 Electronic Transactions on Numerical Analysis
Abstract

Cited by 10 (4 self)
Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding the types of solutions that are particularly favoured. In the case of linear problems, i.e., variational methods with quadratic fidelity and quadratic regularization, error estimation is well understood under so-called source conditions. Significant progress for non-quadratic regularization functionals has been made recently after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely for non-quadratic fidelities, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important fidelities in imaging (L1, Kullback-Leibler).
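The Bregman distance mentioned in this and several of the following abstracts is the standard notion from convex analysis: for a convex regularization functional J and a subgradient p ∈ ∂J(v),

```latex
D_J^{p}(u, v) = J(u) - J(v) - \langle p, u - v \rangle, \qquad p \in \partial J(v).
```

For the quadratic choice J(u) = ½‖u‖² this reduces to ½‖u − v‖², so estimates in the Bregman distance genuinely generalize the classical norm-based error estimates.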
Ground States and Singular Vectors of Convex Variational Regularization Methods
, 2012
Abstract

Cited by 5 (2 self)
The singular value decomposition is the key tool in the analysis and understanding of linear regularization methods in Hilbert spaces. Besides simplifying computations, it provides a good understanding of the properties of the forward problem in relation to the prior information introduced by the regularization method. In the last decade, nonlinear variational approaches such as ℓ1 or total variation regularization have become prominent regularization techniques, with certain properties superior to those of standard methods. In their analysis, singular values and vectors have so far played no role, for the obvious reason that these problems are nonlinear, together with the issue of defining singular values and singular vectors in the first place. In this paper, however, we begin a study of singular values and vectors for nonlinear variational regularization of linear inverse problems, with particular focus on singular one-homogeneous regularization functionals. A major role is played by the smallest singular value, which we define as the ground state of an appropriate functional combining the (semi-)norm introduced by the forward operator and the regularization functional. The optimality condition for the ground state further yields a natural generalization to higher singular values.
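One natural way to formalize the ground state sketched in the abstract (the notation K for the forward operator and J for the one-homogeneous regularizer is an assumption here, not taken from the paper) is as a constrained minimizer:

```latex
u_0 \in \arg\min_{\|K u\| = 1} J(u), \qquad \lambda_0 = J(u_0),
```

so that λ₀ plays the role of the smallest singular value, and the optimality condition λ₀ K^{*}K u_0 \in \partial J(u_0) mirrors the singular-value relation of the linear theory.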
Error estimation for variational models with non-Gaussian noise, preprint
Abstract

Cited by 3 (1 self)
Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding the types of solutions that are particularly favoured. In the case of linear problems, i.e., variational methods with quadratic fidelity and quadratic regularization, error estimation is well understood under so-called source conditions. Significant progress for non-quadratic regularization functionals has been made recently after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely for non-quadratic fidelities such as those arising from Bayesian models with non-Gaussian noise, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important noise models in imaging (Poisson, Laplacian, multiplicative) and the corresponding Bayesian MAP estimation.
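For orientation, the non-Gaussian noise models named at the end of the abstract lead, via Bayesian MAP estimation, to the following standard fidelity terms (f denotes the data, Au the forward model):

```latex
\text{Poisson:}\quad \int_\Omega \Big( Au - f + f \log\tfrac{f}{Au} \Big)\,dx
\qquad
\text{Laplacian:}\quad \|Au - f\|_{L^1}.
```

The Poisson fidelity is the Kullback-Leibler divergence between data and forward model; both are convex but non-quadratic, which is precisely the setting the framework above is built for.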
Convergence rates in ℓ1 regularization if the sparsity assumption fails
 Inverse Problems
Abstract

Cited by 3 (3 self)
Variational sparsity regularization based on ℓ1-norms and other nonlinear functionals has gained enormous attention recently, with respect both to its applications and to its mathematical analysis. A focus in regularization theory has been to develop error estimates in terms of the regularization parameter and the noise strength. To this end, specific error measures such as Bregman distances, and specific conditions on the solution such as source conditions or variational inequalities, have been developed and used. In this paper we provide, for a certain class of ill-posed linear operator equations, a convergence analysis that works for solutions that are not completely sparse but have a fast-decaying nonzero part. This case is not covered by standard source conditions, but surprisingly it can be treated with an appropriate variational inequality. As a consequence, the paper also provides the first examples where the variational inequality approach, which was often believed to be equivalent to appropriate source conditions, can indeed go farther than the latter.
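For readers unfamiliar with the variational inequality approach: in the broader regularization literature such a condition typically takes the generic form below (this is the common template, not necessarily the exact condition used in the paper), with u† the exact solution, ξ ∈ ∂J(u†), D_ξ the associated Bregman distance, β > 0, and φ a concave index function:

```latex
\beta\, D_{\xi}(u, u^{\dagger}) \;\le\; J(u) - J(u^{\dagger}) + \varphi\big(\|A(u - u^{\dagger})\|\big)
\qquad \text{for all admissible } u.
```

Convergence rates in the Bregman distance then follow with a rate governed by φ, which is what allows such conditions to cover solutions whose nonzero part merely decays fast rather than vanishes.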
Convergence rates for regularization with sparsity constraints
, 2009
Discretization of variational regularization in Banach spaces
 Inverse Probl