CiteSeerX

Parameter selection for total variation based image restoration using discrepancy principle (2012)

by Y. Wen and R. Chan
Venue: IEEE Transactions on Image Processing

Results 1 - 5 of 5

Minimization and parameter estimation for seminorm regularization models with I-divergence constraints

by T. Teuber, G. Steidl, R. H. Chan, 2012
"... In this papers we analyze the minimization of seminorms ‖L · ‖ on R n under the constraint of a bounded I-divergence D(b,H·) for rather general linear operators H and L. The I-divergence is also known as Kullback-Leibler divergence and appears in many models in imaging science, in particular when d ..."
Abstract - Cited by 13 (2 self) - Add to MetaCart
In this paper we analyze the minimization of seminorms ‖L · ‖ on R^n under the constraint of a bounded I-divergence D(b, H·) for rather general linear operators H and L. The I-divergence is also known as Kullback-Leibler divergence and appears in many models in imaging science, in particular when dealing with Poisson data. Often H represents, e.g., a linear blur operator and L is some discrete derivative or frame analysis operator. We prove relations between the parameters of I-divergence constrained and penalized problems without assuming the uniqueness of their minimizers. To solve the I-divergence constrained problem we apply first-order primal-dual algorithms which reduce the problem to the solution of certain proximal minimization problems in each iteration step. One of these proximal problems is an I-divergence constrained least squares problem which can be solved, based on Morozov’s discrepancy principle, by a Newton method. Interestingly, the algorithm produces not only a sequence of vectors which converges to a minimizer of the constrained problem but also a sequence of parameters which converges to a regularization parameter so that the corresponding penalized problem has the same solution as our constrained one. We demonstrate the performance of various algorithms for different image restoration tasks, both for images corrupted by Poisson noise and by multiplicative Gamma noise.

Citation Context

...ge on the noise allows one to estimate its parameter τ better than the regularization parameter λ of the penalized model. In particular, argmin x∈R^n { TV(x) subject to ‖Hx − b‖_2^2 ≤ τ } was considered in [52, 71], where the authors in [16] consider the problem from the point of view of the penalized problem (2). But rather than fixing λ in all iterations, they tune λ in each iteration step such that the corre...
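The central mechanism in the abstract above is the correspondence between the constrained problem and its penalized counterpart: the penalty parameter is adjusted until the data-fit term meets the prescribed bound. The Python sketch below illustrates only that correspondence, on a quadratic stand-in (a smoothness seminorm instead of TV, a Gaussian rather than Poisson data term, a toy operator H); it is not the authors' primal-dual/Newton scheme, and every name, size and constant is an assumption for illustration.

# Illustrative sketch (not the paper's algorithm): for a quadratic stand-in of
# the seminorm penalty, search for the parameter lam at which the penalized
# solution satisfies the discrepancy constraint ||Hx - b||^2 = tau, so that the
# penalized and constrained problems share a solution.
import numpy as np

rng = np.random.default_rng(0)
n = 64
H = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # toy "blur" operator
L = np.diff(np.eye(n), axis=0)                      # discrete first derivative
x_true = np.cumsum(rng.standard_normal(n))          # piecewise-smooth signal
b = H @ x_true + 0.05 * rng.standard_normal(n)
tau = n * 0.05**2                                   # expected noise energy

def solve_penalized(lam):
    """argmin_x ||Hx - b||^2 + lam * ||Lx||^2 (closed form in the quadratic case)."""
    return np.linalg.solve(H.T @ H + lam * (L.T @ L), H.T @ b)

def gap(lam):
    x = solve_penalized(lam)
    return np.sum((H @ x - b) ** 2) - tau

# The residual is nondecreasing in lam, so a geometric bisection locates the
# parameter at which the discrepancy constraint becomes active.
lo, hi = 1e-8, 1e4
for _ in range(60):
    mid = np.sqrt(lo * hi)
    lo, hi = (mid, hi) if gap(mid) < 0 else (lo, mid)
print("lambda matching the constraint:", np.sqrt(lo * hi))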

Homogeneous Penalizers and Constraints in Convex Image Restoration

by R. Ciak, et al., 2012
"... Recently convex optimization models were successfully applied for solving various prob-lems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form argmin{Φ(x) subject to Ψ(x) ≤ τ} and their penalized counterparts ..."
Abstract - Cited by 4 (1 self) - Add to MetaCart
Recently, convex optimization models were successfully applied for solving various problems in image analysis and restoration. In this paper, we are interested in relations between convex constrained optimization problems of the form argmin{Φ(x) subject to Ψ(x) ≤ τ} and their penalized counterparts argmin{Φ(x) + λΨ(x)}. We recall general results on the topic with the help of an epigraphical projection. Then we deal with the special setting Ψ := ‖L · ‖ with L ∈ R^{m,n} and Φ := ϕ(H ·), where H ∈ R^{n,n} and ϕ: R^n → R ∪ {+∞} meet certain requirements which are often fulfilled in image processing models. In this case we prove, by incorporating the dual problems, that there exists a bijective function such that the solutions of the constrained problem coincide with those of the penalized problem if and only if the pair (τ, λ) lies in the graph of this function. We illustrate the relation between τ and λ for various problems arising in image processing. In particular, we point out the relation to the Pareto frontier for joint sparsity problems. We demonstrate the performance of the constrained model in restoration tasks of images corrupted by Poisson noise with the I-divergence as data fitting term ϕ and in inpainting models with the constrained nuclear norm. Such models can be useful if we have a priori knowledge on the image rather than on the noise level.
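A minimal way to see the τ to λ relation described above is the scalar case Φ(x) = 0.5‖x − b‖^2 and Ψ(x) = ‖x‖_1, where the penalized minimizer is plain soft-thresholding. The sketch below (a toy under exactly those assumptions, not the paper's general setting with operators H and L) traces τ(λ) = Ψ(x_λ); its graph is the curve along which the constrained and penalized solutions coincide, and inverting it, for instance by bisection, recovers λ from a given budget τ.

# Toy illustration of the tau <-> lambda correspondence for a homogeneous
# penalizer: Phi(x) = 0.5*||x - b||^2, Psi(x) = ||x||_1, penalized minimizer
# x_lambda = soft(b, lambda).  (Assumed simplest case, not the paper's setting.)
import numpy as np

rng = np.random.default_rng(1)
b = rng.standard_normal(200)

def soft(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def tau_of_lambda(lam):
    return np.sum(np.abs(soft(b, lam)))    # Psi evaluated at the penalized minimizer

# tau(lambda) is monotonically decreasing; each point of its graph pairs a
# constraint level tau with the penalty parameter lambda giving the same solution.
for lam in (0.0, 0.1, 0.5, 1.0, 2.0):
    print(f"lambda = {lam:4.1f}  ->  tau = {tau_of_lambda(lam):8.3f}")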

Point spread function reconstruction in ground-based astronomy by l1

by Raymond H. Chan, Xiaoming Yuan, Wenxing Zhang

Citation Context

...a reasonable α, e.g. the generalized cross validation method [31] and the discrepancy principle [32]. For the l1-lp models, how to choose a good α is still ongoing research; e.g., the recent paper [33] can give a good α for the l1-l2 model provided that the level of the white noise can be estimated. Note that models (6) and (7) are models for solving the phase gradients, and hence the best α obtained f...
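The remark above is that the method of [33] presupposes an estimate of the white-noise level. Purely as an illustration of how such an estimate can be obtained from the data alone, the sketch below uses a robust median-absolute-deviation statistic on first differences of a 1-D signal; the signal, constants and variable names are assumptions and are not taken from the cited works.

# Hedged sketch: estimate the standard deviation of additive white Gaussian
# noise from the data, which is the quantity needed before a discrepancy-type
# rule can calibrate the regularization parameter alpha.
import numpy as np

rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, 8 * np.pi, 1024))
sigma_true = 0.1
y = clean + sigma_true * rng.standard_normal(clean.size)

# First differences of a smooth signal are dominated by the noise; diff(noise)
# has standard deviation sigma*sqrt(2), and MAD/0.6745 robustly estimates a
# Gaussian standard deviation.
d = np.diff(y)
sigma_hat = np.median(np.abs(d - np.median(d))) / (0.6745 * np.sqrt(2))
print(f"true sigma = {sigma_true}, estimated sigma = {sigma_hat:.4f}")
# A discrepancy target would then be ||Hx - y||^2 ~ y.size * sigma_hat**2.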

An Iterative Linear Expansion of Thresholds for ℓ1-Based Image Restoration

by Hanjie Pan, Thierry Blu
"... Abstract — This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an 1 regularization. However, instead of estimating the r ..."
Abstract - Add to MetaCart
Abstract — This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an ℓ1 regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of a few elementary thresholding functions (LET) and to solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically—each time we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With the elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both the CPU time and the number of iterations. Index Terms — Image restoration, sparsity, majorization minimization (MM), iterative reweighted least squares (IRLS), thresholding, linear expansion of thresholds (LET).

Citation Context

...ation weight and the data-fidelity—see Section IV-D. (Convolution kernel: h_{i,j} = 1/(1 + i^2 + j^2) for i, j = −7, ..., 7.) (For a similar approach in TV-based deconvolution, see [57] and an alternative in [58].) Experimentally, the algorithm converges to a solution that satisfies the constraint ‖y − HWc‖_2^2 ≤ ε^2 at the same rate of convergence as in the case when the fixed λ_opt is used in the unconstrained ...
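To make the size-reduction argument of the abstract concrete, the sketch below parameterizes a denoising estimate as a linear combination of three elementary thresholding functions of the data and minimizes an ℓ1-regularized objective over those three coefficients with a small IRLS loop. The basis, thresholds, penalty weight and solver are assumptions chosen for brevity; they do not reproduce the authors' exact LET construction or their deconvolution experiments.

# Rough LET-style sketch for the simplest case (denoising, H = I): the estimate
# is x = W @ c, where the columns of W are a few elementary thresholding
# functions of the data y, and F(c) = 0.5*||W c - y||^2 + lam*||W c||_1 is
# minimized over the K = 3 coefficients only (small IRLS/MM loop).
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(3)
n, lam = 512, 0.3
x_true = np.zeros(n)
x_true[rng.choice(n, 20, replace=False)] = 3.0 * rng.standard_normal(20)
y = x_true + 0.2 * rng.standard_normal(n)

W = np.column_stack([y, soft(y, 0.2), soft(y, 0.6)])   # LET basis, K = 3 columns
c = np.array([1.0, 0.0, 0.0])                          # start from x = y

for _ in range(30):
    x = W @ c
    w = 1.0 / np.maximum(np.abs(x), 1e-8)              # weights of the quadratic majorizer of |.|
    A = W.T @ W + lam * (W.T * w) @ W                  # only a K x K system per iteration
    c = np.linalg.solve(A, W.T @ y)

obj = 0.5 * np.sum((W @ c - y) ** 2) + lam * np.sum(np.abs(W @ c))
print("LET coefficients:", c, " objective:", obj)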

An Iterative Linear Expansion of Thresholds for ℓ1-Based Image Restoration (submitted to IEEE Transactions on Image Processing)

by Hanjie Pan, Thierry Blu
"... Abstract—This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an `1 regularization. However, instead of estimating the re ..."
Abstract - Add to MetaCart
Abstract—This paper proposes a novel algorithmic framework to solve image restoration problems under sparsity assumptions. As usual, the reconstructed image is the minimum of an objective functional that consists of a data fidelity term and an ℓ1 regularization. However, instead of estimating the reconstructed image that minimizes the objective functional directly, we focus on the restoration process that maps the degraded measurements to the reconstruction. Our idea amounts to parameterizing the process as a linear combination of a few elementary thresholding functions (LET) and solving for the linear weighting coefficients by minimizing the objective functional. It is then possible to update the thresholding functions and to iterate this process (i-LET). The key advantage of such a linear parametrization is that the problem size reduces dramatically—each time we only need to solve an optimization problem over the dimension of the linear coefficients (typically less than 10) instead of the whole image dimension. With the elementary thresholding functions satisfying certain constraints, global convergence of the iterated LET algorithm is guaranteed. Experiments on several test images over a wide range of noise levels and different types of convolution kernels clearly indicate that the proposed framework usually outperforms state-of-the-art algorithms in terms of both CPU time and number of iterations. Index Terms—Image restoration, sparsity, majorization minimization (MM), iterative reweighted least squares (IRLS), thresholding, linear expansion of thresholds (LET).

Citation Context

...a-fidelity and the noise energy threshold ε^2 as λ^{(n+1)} = (ε^2 / ‖y − HWc^{(n)}‖_2^2) λ^{(n)} with the current iteration results c^{(n)} (for a similar approach in TV-based deconvolution, see [57] and an alternative in [58]). Experimentally, the algorithm converges to a solution that satisfies the constraint ‖y − HWc‖_2^2 ≤ ε^2 at the same rate of convergence as in the case when the fixed λ_opt is used in the unconstrained f...
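The update rule quoted in this context, λ^{(n+1)} = (ε^2 / ‖y − HWc^{(n)}‖_2^2) λ^{(n)}, can be exercised with any penalized inner solver. The sketch below applies it to a toy quadratic denoiser (H = I and a smoothness penalty, so each inner step has a closed form); only the rescaling rule itself comes from the quoted text, and the toy solution x stands in for Wc^{(n)}.

# Sketch of the quoted rescaling rule lambda <- (eps^2 / ||y - x||^2) * lambda,
# applied to a toy penalized denoiser so each inner problem is a linear solve.
# Operator, penalty and sizes are assumed stand-ins.
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 256, 0.1
y = np.sin(np.linspace(0, 4 * np.pi, n)) + sigma * rng.standard_normal(n)
eps2 = n * sigma**2                                   # target noise energy

L = np.diff(np.eye(n), axis=0)                        # first-difference penalty
I = np.eye(n)

def solve_penalized(lam):
    """argmin_x ||x - y||^2 + lam * ||Lx||^2."""
    return np.linalg.solve(I + lam * (L.T @ L), y)

lam = 1.0
for it in range(20):
    x = solve_penalized(lam)
    res2 = np.sum((x - y) ** 2)
    lam *= eps2 / res2                                # the quoted update rule
    print(f"iter {it:2d}: lambda = {lam:8.4f}, residual = {res2:8.4f} (target {eps2:.4f})")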
