CiteSeerX
Error estimation for Bregman iterations and inverse scale space methods in image restoration (2007)

by Martin Burger, Elena Resmerita, Lin He
Venue: Computing
Results 1 - 10 of 21

Regularization With Non-convex Separable Constraints

by Kristian Bredies, Dirk A. Lorenz , 2009
"... ..."
Abstract - Cited by 128 (5 self) - Add to MetaCart
Abstract not found

Convergence rates and source conditions for Tikhonov regularization with sparsity constraints

by Dirk A. Lorenz , 2008
"... This paper addresses the regularization by sparsity constraints by means of weighted ℓ p penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2 special attention is payed to convergence rates in norm and to source conditions. As main results it is proven that one gets a convergence rate of √ δ in the 2-norm for 1 & ..."
Abstract - Cited by 44 (15 self) - Add to MetaCart
This paper addresses regularization by sparsity constraints by means of weighted ℓp penalties for 0 ≤ p ≤ 2. For 1 ≤ p ≤ 2 special attention is paid to convergence rates in norm and to source conditions. As main results it is proven that one obtains a convergence rate of √δ in the 2-norm for 1 < p ≤ 2 and in the 1-norm for p = 1 as soon as the unknown solution is sparse. The case p = 1 requires a special technique in which not only Bregman distances but also a so-called Bregman-Taylor distance has to be employed. For p < 1 only preliminary results are shown. These results indicate that, in contrast to the case p ≥ 1, the regularizing properties depend on the interplay of the operator and the basis of sparsity. A counterexample for p = 0 shows that regularization need not happen.
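For orientation, the weighted ℓp-penalty setting this abstract refers to can be written as a Tikhonov-type minimization; the notation below is generic, not quoted from the paper:

```latex
\min_{u}\ \tfrac12\,\|Au - f^{\delta}\|^{2}
  \;+\; \alpha \sum_{k} w_{k}\,\bigl|\langle u,\phi_k\rangle\bigr|^{p},
\qquad 0 \le p \le 2,
```

with \((\phi_k)\) an orthonormal basis, weights \(w_k\) bounded away from zero, and noise level \(\|f^{\delta}-f\|\le\delta\); the quoted rate \(\sqrt{\delta}\) then concerns the norm distance between the regularized and the exact solution under a suitable parameter choice for \(\alpha\).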

Citation Context

...In this paper we are going to discuss the regularizing properties of sparsity constraints. First results on this topic can be found in [9] and also in the framework of regularization in Banach spaces [7,8,16,20,21]. While convergence rates in [7,8,16,20,21] are given in terms of Bregman distances, we are interested in convergence rates in norm. In [9] convergence rates in norm are given, but under implicit source ...

Convergence of the Linearized Bregman Iteration for ℓ1-norm Minimization

by Jian-feng Cai, Stanley Osher, Zuowei Shen , 2008
"... Abstract. One of the key steps in compressed sensing is to solve the basis pursuit problem minu∈R n{�u�1: Au = f}. Bregman iteration was very successfully used to solve this problem in [40]. Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which ..."
Abstract - Cited by 25 (8 self) - Add to MetaCart
Abstract. One of the key steps in compressed sensing is to solve the basis pursuit problem min_{u∈ℝⁿ} {‖u‖₁ : Au = f}. Bregman iteration was used very successfully to solve this problem in [40]. Also, a simple and fast iterative algorithm based on linearized Bregman iteration was proposed in [40], which is described in detail with numerical simulations in [35]. A convergence analysis of the smoothed version of this algorithm was given in [11]. The purpose of this paper is to prove that the linearized Bregman iteration proposed in [40] for the basis pursuit problem indeed converges.

Citation Context

...explained in [29] why (2.4) with J(u) = ‖u‖₁ is particularly efficient in solving (1.1). The convergence and error analysis of the Bregman iteration were studied in, for example, [4,34,37,40]. It was pointed out in [40] that the Bregman iteration (2.3) or (2.4) is equivalent to an augmented Lagrangian method in [1, 28, 30, 33, 36]. However, in the second step of (2.4), we need to solve a ...
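The linearized Bregman iteration discussed in this entry admits a very compact implementation: each step adds the back-projected residual to an auxiliary variable and applies soft-thresholding. The following is a minimal NumPy sketch under assumed notation (step size `delta`, shrinkage parameter `mu`); it is an illustration, not the authors' reference code.

```python
import numpy as np

def shrink(x, mu):
    """Soft-thresholding: the proximal map of mu * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

def linearized_bregman(A, f, mu, delta=None, iters=500):
    """Linearized Bregman iteration for the basis pursuit problem
    min { ||u||_1 : A u = f }.

    The step size should satisfy 0 < delta <= 1 / ||A A^T||_2;
    we default to that bound when delta is not given.
    """
    if delta is None:
        delta = 1.0 / np.linalg.norm(A @ A.T, 2)
    u = np.zeros(A.shape[1])
    v = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (f - A @ u)   # accumulate back-projected residual
        u = delta * shrink(v, mu)   # scaled soft-thresholding
    return u
```

The fixed iteration count used as a stopping rule here is an illustrative choice; in practice one would monitor the residual ‖Au − f‖.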

Error estimates for general fidelities

by Martin Benning, Martin Burger - Electronic Transactions on Numerical Analysis
"... Abstract. Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding types of solutions that are particularly favoured. In the case of linear problems, i.e., variational methods with qua ..."
Abstract - Cited by 10 (4 self) - Add to MetaCart
Abstract. Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding types of solutions that are particularly favoured. In the case of linear problems, i.e., variational methods with quadratic fidelity and quadratic regularization, the error estimation is well-understood under so-called source conditions. Significant progress for nonquadratic regularization functionals has been made recently after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely for nonquadratic fidelities, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important fidelities in imaging (L1, Kullback-Leibler).

Citation Context

...rks deal with the analysis and error propagation by considering the Bregman distance between û satisfying the optimality condition of a variational regularization method and the exact solution ũ; cf. [7, 9, 17, 21, 22, 28]. The Bregman distance turned out to be an adequate error measure since it seems to control only those errors that can be distinguished by the regularization term. This point of view is supported by t...
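For reference, the Bregman distance used as error measure throughout these works is the standard one: for a convex functional J and a subgradient p ∈ ∂J(ũ),

```latex
D_J^{p}(\hat u, \tilde u) \;=\; J(\hat u) - J(\tilde u)
  - \langle p,\, \hat u - \tilde u \rangle,
\qquad p \in \partial J(\tilde u),
```

which vanishes for û = ũ and, when J is not strictly convex, also along directions the regularizer cannot distinguish; this is why it controls exactly the errors visible to the regularization term.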

Optimal Convergence Rates for Tikhonov Regularization In Besov Scales

by D A Lorenz, D Trede , 2008
"... ..."
Abstract - Cited by 9 (2 self) - Add to MetaCart
Abstract not found

Ground States and Singular Vectors of Convex Variational Regularization Methods

by Martin Benning, Martin Burger , 2012
"... Singular value decomposition is the key tool in the analysis and understanding of linear regularization methods in Hilbert spaces. Besides simplifying computations it allows to provide a good understanding of properties of the forward problem compared to the prior information introduced by the regul ..."
Abstract - Cited by 5 (2 self) - Add to MetaCart
Singular value decomposition is the key tool in the analysis and understanding of linear regularization methods in Hilbert spaces. Besides simplifying computations, it provides a good understanding of the properties of the forward problem compared to the prior information introduced by the regularization methods. In the last decade nonlinear variational approaches such as ℓ1 or total variation regularization became quite prominent regularization techniques, with certain properties superior to standard methods. In the analysis of those, singular values and vectors have so far not played any role, for the obvious reason that these problems are nonlinear, together with the issue of defining singular values and singular vectors in the first place. In this paper, however, we want to start a study of singular values and vectors for nonlinear variational regularization of linear inverse problems, with particular focus on singular one-homogeneous regularization functionals. A major role is played by the smallest singular value, which we define as the ground state of an appropriate functional combining the (semi-)norm introduced by the forward operator and the regularization functional. The optimality condition for the ground state further yields a natural generalization to higher singular values.

Citation Context

...3, 32]). Various advances in the analysis of such regularization methods have been made over the last years, ranging from basic regularization properties (cf. e.g. [1, 28]) over error estimation (cf. [22, 23, 56, 57, 11, 43, 49]) to corrections of inherent bias by iterative and time-flow techniques (cf. [60, 54, 19, 18, 71]). Singular values and vectors have so far not played any role in the analysis of such methods, and it is c...

Error estimation for variational models with non-Gaussian noise, preprint

by Martin Benning, Martin Burger
"... Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding types of solutions that are particularly favoured. In the case of linear problems, i.e. variational methods with quadratic fide ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
Appropriate error estimation for regularization methods in imaging and inverse problems is of enormous importance for controlling approximation properties and understanding types of solutions that are particularly favoured. In the case of linear problems, i.e. variational methods with quadratic fidelity and quadratic regularization, the error estimation is well-understood under so-called source conditions. Significant progress for nonquadratic regularization functionals has been made recently after the introduction of the Bregman distance as an appropriate error measure. The other important generalization, namely for nonquadratic fidelities such as those arising from Bayesian models with non-Gaussian noise, has not been analyzed so far. In this paper we develop a framework for the derivation of error estimates in the case of rather general fidelities and highlight the importance of duality for the shape of the estimates. We then specialize the approach for several important noise models in imaging (Poisson, Laplacian, Multiplicative) and the corresponding Bayesian MAP estimation.
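The general setting described in this abstract is a variational problem with a non-quadratic fidelity H, typically the negative log-likelihood of the noise model. In generic notation (not quoted from the paper):

```latex
\min_{u}\; H(Au;\, f) \;+\; \alpha\, J(u),
\qquad\text{e.g.}\quad
H_{\mathrm{KL}}(v;\, f) = \int \bigl( v - f \log v \bigr)\,\mathrm{d}x
\ \ \text{(Poisson)},\qquad
H_{L^1}(v;\, f) = \|v - f\|_{L^1}
\ \ \text{(Laplacian)}.
```

Here H_KL is the Kullback-Leibler-type fidelity arising from Poisson MAP estimation, written up to terms independent of v.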

Convergence rates in ℓ 1 -regularization if the sparsity assumption fails

by Martin Burger, Jens Flemming, Bernd Hofmann - Inverse Problems
"... Variational sparsity regularization based on ℓ 1-norms and other nonlinear functionals has gained enormous attention recently, both with respect to its applications and its mathematical analysis. A focus in regularization theory has been to develop error estimation in terms of regularization paramet ..."
Abstract - Cited by 3 (3 self) - Add to MetaCart
Variational sparsity regularization based on ℓ1-norms and other nonlinear functionals has gained enormous attention recently, both with respect to its applications and its mathematical analysis. A focus in regularization theory has been to develop error estimates in terms of the regularization parameter and the noise strength. To this end, specific error measures such as Bregman distances and specific conditions on the solution such as source conditions or variational inequalities have been developed and used. In this paper we provide, for a certain class of ill-posed linear operator equations, a convergence analysis that works for solutions that are not completely sparse but have a fast-decaying nonzero part. This case is not covered by standard source conditions, but surprisingly can be treated with an appropriate variational inequality. As a consequence the paper also provides the first examples where the variational inequality approach, which was often believed to be equivalent to appropriate source conditions, can indeed go farther than the latter.

Citation Context

... presence of noise. There is a comprehensive literature concerning the ℓ1-regularization of ill-posed problems under sparsity constraints, including assertions on convergence rates (cf. e.g. [7] and [2, 6, 4, 12, 14, 15, 21, 22, 24, 25, 23, 26]). A natural question arising in problems of this type is the asymptotic analysis of such variational problems as α → 0 and y^δ → y, respectively, where y are the data that would be produced by an exact so...

Convergence rates for regularization with sparsity constraints

by Ronny Ramlau, Elena Resmerita , 2009
"... rates for regularization with sparsity constraints ..."
Abstract - Cited by 3 (1 self) - Add to MetaCart
rates for regularization with sparsity constraints

Discretization of variational regularization in Banach spaces

by Elena Resmerita, Otmar Scherzer - Inverse Probl
"... ar ..."
Abstract - Cited by 3 (0 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...ces and their role in optimization and inverse problems can be found in [28]. Error estimates for variational or iterative regularization of (1) by means of a non-quadratic penalty have been shown in [6, 28, 29, 18, 7, 16]. The Bregman distance DR associated with R was naturally chosen as the measure of discrepancy in the error estimates. We assume Fréchet differentiability of the operator F around ū, which is con...
