Results 1 – 10 of 27
Nonnegative Matrix Factorization with Constrained Second Order Optimization
, 2007
Abstract

Cited by 25 (8 self)
Nonnegative Matrix Factorization (NMF) solves the following problem: find nonnegative matrices A ∈ R^{M×R}_+ and X ∈ R^{R×T}_+ such that Y ≈ AX, given only Y ∈ R^{M×T} and the assigned index R. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation, spectra recovering, pattern recognition, segmentation or clustering. Such a factorization is usually performed with an alternating gradient descent technique that is applied to the squared Euclidean distance or Kullback-Leibler divergence. This approach has been used in the widely known Lee-Seung NMF algorithms that belong to a class of multiplicative iterative algorithms. It is well known that these algorithms, in spite of their low complexity, are slowly convergent, give only a positive solution (not nonnegative), and can easily fall into local minima of a nonconvex cost function. In this paper, we propose to take advantage of the second-order terms of a cost function to overcome the disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, where a regularized Hessian with the Levenberg-Marquardt approach is inverted with the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the Gradient Projection Conjugate Gradient (GPCG) algorithm, which was invented by Moré and Toraldo, is adapted for NMF. The Gradient Projection (GP) method is exploited to find zero-value components (active), and then the Newton steps are taken only to compute positive components (inactive) with the Conjugate Gradient (CG) method. As a cost function, we used the α-divergence, which unifies many well-known cost functions. We applied our new NMF method to a Blind Source Separation (BSS) problem with mixed signals and images. The results demonstrate the high robustness of our method.
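The multiplicative Lee-Seung baseline this paper improves upon can be sketched as follows for the squared Euclidean distance; the iteration count, flooring constant, and random initialization below are illustrative choices, not the paper's setup.

```python
import numpy as np

def nmf_multiplicative(Y, R, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||Y - A X||_F^2
    with A, X >= 0.  A minimal sketch of the baseline algorithm;
    n_iter, eps, and the random init are illustrative."""
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    A = rng.random((M, R)) + eps
    X = rng.random((R, T)) + eps
    for _ in range(n_iter):
        # X <- X * (A^T Y) / (A^T A X); eps avoids division by zero
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        # A <- A * (Y X^T) / (A X X^T)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)
    return A, X
```

Note that the updates keep all entries strictly positive, which is exactly the "positive solution (not nonnegative)" behavior the abstract criticizes.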
Regularization parameter selection methods for ill-posed Poisson maximum likelihood estimation. Inverse Problems
, 2009
Abstract

Cited by 18 (3 self)
Abstract. In image processing applications, image intensity is often measured via the counting of incident photons emitted by the object of interest. In such cases, image data noise is accurately modeled by a Poisson distribution. This motivates the use of Poisson maximum likelihood estimation for image reconstruction. However, when the underlying model equation is ill-posed, regularization is needed. Regularized Poisson likelihood estimation has been studied extensively by the authors, though a problem of high importance remains: the choice of the regularization parameter. We will present three statistically motivated methods for choosing the regularization parameter, and numerical examples will be presented to illustrate their effectiveness.
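One generic, statistically motivated selection rule of the kind the paper studies is a Morozov-style discrepancy principle: pick the parameter whose weighted residual is closest to its expected value. The sketch below is not one of the paper's three methods verbatim; the solver interface, the candidate grid, and the chi-squared-style weighting (Au − z)²/Au for Poisson data are illustrative assumptions.

```python
import numpy as np

def discrepancy_choose_alpha(solve, z, alphas):
    """Discrepancy-principle sketch for regularization parameter choice.
    `solve(alpha)` is assumed to return the regularized model fit Au for
    parameter alpha; z holds the Poisson count data.  For Poisson noise,
    E[(Au - z)^2 / Au] ~ 1 per pixel at the true intensity, so we pick
    the alpha whose weighted discrepancy is closest to len(z)."""
    def discrepancy(alpha):
        Au = solve(alpha)
        return np.sum((Au - z) ** 2 / Au)
    return min(alphas, key=lambda a: abs(discrepancy(a) - len(z)))
```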
Total Variation-Penalized Poisson Likelihood Estimation for Ill-Posed Problems, accepted in Advances in Computational Mathematics, Special Issue on Mathematical Imaging
Abstract

Cited by 17 (9 self)
Abstract. The noise contained in data measured by imaging instruments is often primarily of Poisson type. This motivates, in many cases, the use of the Poisson likelihood functional in place of the ubiquitous least squares data fidelity when solving image deblurring problems. We assume that the underlying blurring operator is compact, so that, as in the least squares case, the resulting minimization problem is ill-posed and must be regularized. In this paper, we focus on total variation regularization and show that the problem of computing the minimizer of the resulting total variation-penalized Poisson likelihood functional is well-posed. We then prove that, as the errors in the data and in the blurring operator tend to zero, the resulting minimizers converge to the minimizer of the exact likelihood function. Finally, the practical effectiveness of the approach is demonstrated on synthetically generated data, and a nonnegatively constrained, projected quasi-Newton method is introduced.
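The functional in question combines the Poisson negative-log likelihood with a total variation penalty. A minimal sketch of evaluating it for a 2-D image follows; the smoothing parameter beta (used to make TV differentiable), the boundary handling, and the dense matrix A are illustrative simplifications.

```python
import numpy as np

def tv_poisson_objective(u, A, z, alpha, beta=1e-8):
    """Evaluate J(u) = sum(Au - z*log(Au)) + alpha * TV(u) for a 2-D
    image u, blurring matrix A acting on the flattened image, and count
    data z.  A sketch of the penalized functional; alpha and beta are
    illustrative choices, not the paper's exact setup."""
    Au = A @ u.ravel()
    fidelity = np.sum(Au - z * np.log(Au))
    # Smoothed isotropic TV: sum of sqrt(|grad u|^2 + beta);
    # replicate-boundary forward differences keep the array shape
    dx = np.diff(u, axis=0, append=u[-1:, :])
    dy = np.diff(u, axis=1, append=u[:, -1:])
    tv = np.sum(np.sqrt(dx**2 + dy**2 + beta))
    return fidelity + alpha * tv
```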
An Efficient Computational Method for Total Variation-Penalized Poisson Likelihood Estimation, accepted in Inverse Problems and Imaging
Abstract

Cited by 7 (3 self)
Abstract. Approximating non-Gaussian noise processes with Gaussian models is standard in data analysis. This is due in large part to the fact that Gaussian models yield parameter estimation problems of least squares form, which have been extensively studied both from the theoretical and computational points of view. In image processing applications, for example, data is often collected by a CCD camera, in which case the noise is a Gaussian/Poisson mixture with the Poisson noise dominating for a sufficiently strong signal. Even so, the standard approach in such cases is to use a Gaussian approximation that leads to a negative-log likelihood function of weighted least squares type. In the Bayesian point of view taken in this paper, a negative-log prior (or regularization) function is added to the negative-log likelihood function, and the resulting function is minimized. We focus on the case where the negative-log prior is the well-known total variation function and give a statistical interpretation. Regardless of whether the least squares or Poisson negative-log likelihood is used, the total variation term yields a minimization problem that is computationally challenging. The primary result of this work is the efficient computational method that is presented for the solution of such problems, together with its convergence analysis. With the computational method in hand, we then perform experiments that indicate that the Poisson negative-log likelihood yields a more computationally efficient method than does the use of the least squares function. We also present results that indicate that this may even be the case when the data noise is i.i.d. Gaussian, suggesting that regardless of noise statistics, using the Poisson negative-log likelihood can yield a more computationally tractable problem when total variation regularization is used.
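The two fit-to-data functions being compared can be written down directly. A minimal sketch, with constants dropped and the standard large-count weighting 1/z assumed for the Gaussian approximation:

```python
import numpy as np

def poisson_negloglik(Au, z):
    """Poisson negative-log likelihood (constants dropped):
    sum_i (Au)_i - z_i * log((Au)_i)."""
    return np.sum(Au - z * np.log(Au))

def wls_approx(Au, z):
    """Weighted least squares approximation arising from the Gaussian
    model, with illustrative weights 1/z_i; accurate for large counts."""
    return 0.5 * np.sum((Au - z) ** 2 / z)
```

For large counts and Au near z the two functions agree to second order, which is why the Gaussian approximation is so commonly adopted.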
A primal-dual active-set method for nonnegativity-constrained total variation deblurring problems
 IEEE Trans. Image Process
, 2007
Abstract

Cited by 6 (1 self)
Abstract—This paper studies image deblurring problems using a total variation-based model with a nonnegativity constraint. The addition of the nonnegativity constraint improves the quality of the solutions, but makes the solution process a difficult one. The contribution of our work is a fast and robust numerical algorithm to solve the nonnegatively constrained problem. To overcome the nondifferentiability of the total variation norm, we formulate the constrained deblurring problem as a primal-dual program which is a variant of the formulation proposed by Chan, Golub, and Mulet for unconstrained problems. Here, dual refers to a combination of the Lagrangian and Fenchel duals. To solve the constrained primal-dual program, we use a semismooth Newton's method. We exploit the relationship between the semismooth Newton's method and the primal-dual active set method to achieve considerable simplification of the computations. The main advantages of our proposed scheme are: no parameters need significant adjustment, a standard inverse preconditioner works very well, quadratic rate of local convergence (theoretical and numerical), numerical evidence of global convergence, and high accuracy of solving the optimality system. The scheme shows robustness of performance over a wide range of parameters. A comprehensive set of numerical comparisons is provided against other methods that solve the same problem, which shows the speed and accuracy advantages of our scheme. Index Terms—Image deblurring, nonnegativity, primal-dual active-set, semismooth Newton's method, total variation.
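The primal-dual active-set idea is easiest to see on a simplified analogue: a strictly convex quadratic with a nonnegativity constraint, where the active set is predicted from the primal-dual pair and the unconstrained problem is solved on the inactive set. This is not the paper's TV deblurring algorithm; the model problem, the constant c, and the iteration cap are illustrative.

```python
import numpy as np

def pdas_box(H, b, c=1.0, max_iter=50):
    """Primal-dual active-set iteration for the model problem
    min 0.5 x^T H x - b^T x  subject to  x >= 0  (H SPD).
    KKT: lam = Hx - b >= 0, x >= 0, lam^T x = 0."""
    n = len(b)
    x = np.zeros(n)
    lam = H @ x - b                    # multiplier estimate
    for _ in range(max_iter):
        active = lam - c * x > 0       # predicted binding set (x_i = 0)
        inactive = ~active
        x_new = np.zeros(n)
        if inactive.any():
            # Solve the unconstrained problem restricted to inactive set
            Hii = H[np.ix_(inactive, inactive)]
            x_new[inactive] = np.linalg.solve(Hii, b[inactive])
        lam_new = H @ x_new - b
        lam_new[inactive] = 0.0
        if np.array_equal(x_new, x) and np.array_equal(lam_new, lam):
            break                      # active set stabilized
        x, lam = x_new, lam_new
    return x
```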
Simplified statistical image reconstruction algorithm for polyenergetic X-ray CT
 In Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf
, 2005
Abstract

Cited by 6 (1 self)
would not be possible without the direct and indirect help and support of a lot of people. First and foremost is Prof. Jeff Fessler, whose invaluable guidance and constant support as a research advisor helped me take the first steps in engineering research. Jean-Baptiste Thibault, Ph.D., (CT Scientist, GE Healthcare) collaborated closely with us on one of the investigations and provided practical knowledge about actual X-ray CT scanners, scanner data, and image reconstruction software libraries. I also wish to acknowledge the insightful discussions I had with Roy Nilsen (GE Healthcare), Bruno De Man, Ph.D., (GE Global Research), and with members of Prof. Randy Ten Haken's
An optimal subgradient algorithm for large-scale convex optimization in simple domains
, 2015
Abstract

Cited by 6 (6 self)
This paper shows that the OSGA algorithm – which uses first-order information to solve convex optimization problems with optimal complexity – can be used to efficiently solve arbitrary bound-constrained convex optimization problems. This is done by constructing an explicit method as well as an inexact scheme for solving the bound-constrained rational subproblem required by OSGA. This leads to an efficient implementation of OSGA on large-scale problems in applications arising in signal and image processing, machine learning and statistics. Numerical experiments demonstrate the promising performance of OSGA on such problems. A software package implementing OSGA for bound-constrained convex problems is available.
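For the same bound-constrained setting, the simplest first-order baseline is projected subgradient descent, where each step is followed by clipping back into the box. This is not OSGA itself (whose rational subproblem is more involved); the 1/sqrt(k) step size and step budget are illustrative.

```python
import numpy as np

def projected_subgradient(f, subgrad, lo, hi, x0, steps=500):
    """Projected subgradient descent for min f(x) on the box lo <= x <= hi.
    A minimal first-order baseline, not the OSGA method; the diminishing
    step size 1/sqrt(k) is an illustrative choice.  Tracks the best
    iterate, since subgradient steps need not decrease f monotonically."""
    x = np.clip(x0, lo, hi)
    best_x, best_f = x.copy(), f(x)
    for k in range(1, steps + 1):
        x = np.clip(x - subgrad(x) / np.sqrt(k), lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x
```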
Hierarchical regularization for edge-preserving reconstruction of PET images
 Inverse Probl
, 2010
Abstract

Cited by 6 (4 self)
Abstract. The data in PET emission and transmission tomography and in low-dose X-ray tomography consists of counts of photons originating from random events. The need to model the data as a Poisson process poses a challenge for traditional integral geometry-based reconstruction algorithms. Although qualitative a priori information about the target may be available, it may be difficult to encode it as a regularization functional in a minimization algorithm. This is the case, for example, when the target is known to consist of well-defined structures, but how many, and their location, form and size, are not specified. Following the Bayesian paradigm, we model the data and the target as random variables, and we account for the qualitative nature of the a priori information by introducing a hierarchical model in which the a priori variance is unknown and therefore part of the estimation problem. We present a numerically effective algorithm for estimating both the target and its prior variance. Computed examples with simulated and real data demonstrate that the algorithm gives good-quality reconstructions for both emission and transmission PET problems at very low computational cost.
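The hierarchical idea of estimating the target and its prior variance jointly can be illustrated on a toy denoising model, alternating between the two updates. This is an illustrative analogue, not the paper's PET formulation: the model y = x + noise with x_i ~ N(0, theta_i), the variance floor eta, and the crude variance update are all assumptions of the sketch.

```python
import numpy as np

def hierarchical_denoise(y, n_outer=20, theta0=1.0, eta=1e-3):
    """Alternate between (1) the posterior-mean x-update for a Gaussian
    likelihood with unit noise variance and pixelwise prior variance
    theta, and (2) a simple empirical variance update for theta.
    Large-amplitude features keep a large variance and survive; small
    ones are shrunk toward zero."""
    theta = np.full_like(y, theta0)
    for _ in range(n_outer):
        # x-update: posterior mean under x_i ~ N(0, theta_i), noise var 1
        x = theta / (theta + 1.0) * y
        # theta-update: crude per-pixel variance estimate, floored by eta
        theta = x ** 2 + eta
    return x, theta
```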
GAUSSIAN MARKOV RANDOM FIELD PRIORS FOR INVERSE PROBLEMS
Abstract

Cited by 5 (3 self)
(Communicated by Jari Kaipio) Abstract. In this paper, our focus is on the connections between the methods of (quadratic) regularization for inverse problems and Gaussian Markov random field (GMRF) priors for problems in spatial statistics. We begin with the most standard GMRFs defined on a uniform computational grid, which correspond to the oft-used discrete negative-Laplacian regularization matrix. Next, we present a class of GMRFs that allow for the formation of edges in reconstructed images, and then draw concrete connections between these GMRFs and numerical discretizations of more general diffusion operators. The benefit of the GMRF interpretation of quadratic regularization is that a GMRF is built up from concrete statistical assumptions about the values of the unknown at each pixel given the values of its neighbors. Thus the regularization term corresponds to a concrete spatial statistical model for the unknown, encapsulated in the prior. Throughout our discussion, strong ties between specific GMRFs, numerical discretizations of diffusion operators, and corresponding regularization matrices are established. We then show how such GMRF priors can be used for edge-preserving reconstruction of images, in both image deblurring and medical imaging test cases. Moreover, we demonstrate the effectiveness of GMRF priors for data arising from both Gaussian and Poisson noise models.
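The correspondence is concrete even in 1-D: the discrete negative-Laplacian regularization matrix L is the precision matrix of an intrinsic GMRF, so the quadratic penalty (α/2) xᵀLx matches the prior exp(−(α/2) xᵀLx). A minimal 1-D sketch; the Neumann-like boundary rows are an illustrative choice.

```python
import numpy as np

def neg_laplacian_precision(n):
    """1-D discrete negative-Laplacian regularization matrix L, which is
    also the (singular) precision matrix of an intrinsic GMRF: each
    node's conditional mean given its neighbors is their average."""
    L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    L[0, 0] = L[-1, -1] = 1      # boundary rows: one neighbor only
    return L
```

Row sums of zero reflect the intrinsic (improper) nature of the prior: constant vectors carry no penalty.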
An Iterative Method for Edge-Preserving MAP Estimation when Data Noise is Poisson, accepted in the SIAM Journal on Scientific Computing
Abstract

Cited by 5 (4 self)
Abstract. In numerous applications of image processing, e.g. astronomical and medical imaging, data noise is well-modeled by a Poisson distribution. This motivates the use of the negative-log Poisson likelihood function for data fitting. (The fact that application scientists in both astronomical and medical imaging regularly choose this function for data fitting provides further motivation.) However, difficulties arise when the negative-log Poisson likelihood is used. Chief among them are the facts that it is nonquadratic and is defined only for vectors with nonnegative values. The nonnegatively constrained, convex optimization problems that arise when the negative-log Poisson likelihood is used are therefore more challenging than when least squares is the fit-to-data function. Edge-preserving deblurring and denoising has long been a problem of keen interest in the image processing community. While total variation regularization is the gold standard for such problems, its use yields computationally intensive optimization problems. This motivates the desire to develop regularization functions that are edge-preserving, but are less difficult to use. We present one such regularization function here. This function is quadratic, and can be viewed as the discretization of a diffusion operator with a diffusion function that is approximately 1 in smooth regions of the true image and is less than 1 (but still positive) at or near an edge. Combining the negative-log Poisson likelihood function with this quadratic, edge-preserving regularization function yields a strictly convex, nonnegatively constrained optimization problem. A large portion of this paper is dedicated to the presentation of and convergence proof for an algorithm designed for this problem. Finally, we apply the algorithm to synthetically generated data in order to test the methodology.
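The key trick is that the diffusion weights are computed from an approximate image and then held fixed, so the penalty stays quadratic. A minimal 1-D sketch; the Perona-Malik-style weight 1/(1 + (g/gamma)²) and the value of gamma are illustrative choices, not the paper's exact diffusion function.

```python
import numpy as np

def edge_weights(uref, gamma=0.1):
    """Diffusion weights computed from a reference (approximate) image:
    ~1 where uref is smooth, below 1 (but still positive) near an edge.
    The specific weight function and gamma are illustrative."""
    g = np.diff(uref)
    return 1.0 / (1.0 + (g / gamma) ** 2)

def quad_reg(u, w):
    """Quadratic, edge-preserving penalty 0.5 * sum_i w_i (u_{i+1}-u_i)^2
    with the weights w held fixed, so the functional remains quadratic."""
    g = np.diff(u)
    return 0.5 * np.sum(w * g ** 2)
```

Because the weight drops near large gradients of the reference image, a genuine edge incurs almost no penalty, while noise in smooth regions is penalized at full strength.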