Results 1–10 of 41
Greedy solution of ill-posed problems: Error bounds and exact inversion
, 2009
Minimization of Nonsmooth, Nonconvex Functionals by Iterative Thresholding
, 2009
Cited by 127 (2 self)
"Preprint 10. The consecutive numbering of the publications is determined by their chronological order. The aim of this preprint series is to make new research rapidly available for scientific discussion. Therefore, the responsibility for the contents is solely due to the authors. The publications will be distributed by the authors."
An Analysis of Electrical Impedance Tomography with Applications to Tikhonov Regularization
, 2010
Cited by 79 (2 self)
Linear convergence of iterative soft-thresholding
J. Fourier Anal. Appl.
Cited by 58 (13 self)
"Abstract. In this article a unified approach to iterative soft-thresholding algorithms for the solution of linear operator equations in infinite-dimensional Hilbert spaces is presented. We formulate the algorithm in the framework of generalized gradient methods and present a new convergence analysis. As the main result we show that the algorithm converges with linear rate as soon as the underlying operator satisfies the so-called finite basis injectivity property or the minimizer possesses a so-called strict sparsity pattern. Moreover, it is shown that the constants can be calculated explicitly in special cases (i.e. for compact operators). Furthermore, the techniques can also be used to establish linear convergence for related methods such as the iterative thresholding algorithm for joint sparsity and the accelerated gradient projection method."
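The iterative soft-thresholding scheme analyzed in this entry can be sketched in a few lines. The following is a minimal finite-dimensional illustration for min_x ½‖Ax − y‖² + α‖x‖₁, not the paper's infinite-dimensional Hilbert-space formulation; all function names and the fixed iteration count are illustrative choices.

```python
import numpy as np

def soft_threshold(x, t):
    # Componentwise soft-thresholding: the proximal map of t * ||.||_1.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, alpha, n_iter=200):
    # Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + alpha*||x||_1.
    # Step size 1/||A||^2 (spectral norm) keeps the gradient step nonexpansive.
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + step * A.T @ (y - A @ x), step * alpha)
    return x
```

For A = I the iteration reduces to a single soft-thresholding of y, which makes the shrinkage effect of the ℓ₁ penalty easy to see.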
A semismooth Newton method for Tikhonov functionals with sparsity constraints
, 2007
Elastic-Net Regularization: Error Estimates and Active Set Methods
, 905
Cited by 11 (6 self)
"This paper investigates theoretical properties and efficient numerical algorithms for the so-called elastic-net regularization originating from statistics, which simultaneously enforces ℓ1 and ℓ2 regularization. The stability of the minimizer and its consistency are studied, and convergence rates for both a priori and a posteriori parameter choice rules are established. Two iterative numerical algorithms of active set type are proposed, and their convergence properties are discussed. Numerical results are presented to illustrate the features of the functional and algorithms."
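The elastic-net functional combines both penalties, ½‖Ax − y‖² + α‖x‖₁ + (β/2)‖x‖². A simple way to minimize it (a proximal-gradient sketch, not the active-set methods this paper proposes) is soft-thresholding followed by a multiplicative shrinkage; all names here are placeholders.

```python
import numpy as np

def elastic_net_ista(A, y, alpha, beta, n_iter=500):
    # Thresholding iteration for
    #   min_x 0.5*||Ax - y||^2 + alpha*||x||_1 + (beta/2)*||x||^2.
    # The prox of the combined penalty is soft-thresholding
    # followed by division by (1 + step*beta).
    step = 1.0 / (np.linalg.norm(A, 2) ** 2 + beta)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v = x + step * A.T @ (y - A @ x)
        x = np.sign(v) * np.maximum(np.abs(v) - step * alpha, 0.0) / (1.0 + step * beta)
    return x
```

For A = I the minimizer has the closed form S(y, α)/(1 + β), where S is soft-thresholding, which is a convenient sanity check.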
Error estimates for general fidelities
Electronic Transactions on Numerical Analysis
Cited by 10 (4 self)
"Abstract. Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding the types of solutions that are particularly favoured. In the case of linear problems, i.e., variational methods with quadratic fidelity and quadratic regularization, error estimation is well understood under so-called source conditions. Significant progress for non-quadratic regularization functionals has been made recently, after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely to non-quadratic fidelities, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important fidelities in imaging (L1, Kullback-Leibler)."
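The Bregman distance used as an error measure in this line of work is D_J(u, v) = J(u) − J(v) − ⟨p, u − v⟩ with p ∈ ∂J(v). For simple penalties it is directly computable; a small sketch for J = ‖·‖₁ (assuming v has no zero entries, so that sign(v) is a valid subgradient):

```python
import numpy as np

def l1_bregman_distance(u, v):
    # Bregman distance for J(x) = ||x||_1 with subgradient p = sign(v),
    # valid when v has no zero entries:
    #   D(u, v) = J(u) - J(v) - <p, u - v>.
    p = np.sign(v)
    return np.abs(u).sum() - np.abs(v).sum() - p @ (u - v)
```

Note that for the ℓ₁ norm the distance vanishes whenever u and v share the same sign pattern, so it measures deviations in the sparsity pattern rather than in magnitude.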
Sparse regularization with ℓq penalty term
Inverse Probl.
, 2008
Cited by 9 (1 self)
"We consider the stable approximation of sparse solutions to non-linear operator equations by means of Tikhonov regularization with a subquadratic penalty term. Imposing certain assumptions, which for a linear operator are equivalent to the standard range condition, we derive the usual convergence rate O(√δ) of the regularized solutions in dependence of the noise level δ. Particular emphasis lies on the case where the true solution is known to have a sparse representation in a given basis. In this case, if the differential of the operator satisfies a certain injectivity condition, we can show that the actual convergence rate improves up to O(δ). MSC: 65J20, 65J22, 49N45."
Heuristic Parameter-Choice Rules for Convex Variational Regularization Based on Error Estimates
SIAM Journal on Numerical Analysis
, 2010