Results 1–10 of 213
A generalized Gaussian image model for edge-preserving MAP estimation
 IEEE Trans. on Image Processing
, 1993
"... Absfrucf We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori MAP solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distri ..."
Abstract

Cited by 300 (37 self)
 Add to MetaCart
(Show Context)
We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of the data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
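For context, the GGMRF prior typically takes the following form (a sketch of the standard generalized-Gaussian parameterization; the weights b_{s,t}, scale σ, and shape parameter p are conventional symbols, not taken from this abstract):

$$ p(x) \;\propto\; \exp\!\left\{ -\frac{1}{p\,\sigma^{p}} \sum_{\{s,t\}} b_{s,t}\, \lvert x_s - x_t \rvert^{p} \right\}, \qquad 1 \le p \le 2. $$

Setting p = 2 recovers the ordinary Gaussian MRF, while p near 1 penalizes large neighboring-pixel differences less severely and therefore preserves edges; the MAP estimate minimizes the negative data log-likelihood plus this convex prior energy, which underlies the uniqueness of the global minimum.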
On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision
, 1996
"... The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "lineprocess" models of discontinuities have received a great deal of attention, there has been recent ..."
Abstract

Cited by 271 (8 self)
 Add to MetaCart
The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "line-process" models of discontinuities have received a great deal of attention, there has been recent interest in the use of robust statistical techniques to account for discontinuities. This paper unifies the two approaches. To achieve this we generalize the notion of a "line process" to that of an analog "outlier process" and show how a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. We show how prior assumptions about the spatial structure of outliers can be expressed as constraints on the recovered analog outlier processes and how traditional continuation methods can be extended to the explicit outlier-process formulation. These results indicate that the outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in surface reconstruction, image segmentation, and optical flow are presented to illustrate the use of outlier processes and to show how the relationship between outlier processes and robust statistics can be exploited. An appendix provides a catalog of common robust error norms and their equivalent outlier-process formulations.
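The equivalence at the heart of the paper can be sketched as follows (standard notation from the robust-statistics literature, not symbols quoted from this snippet): a robust error norm ρ admits an outlier-process formulation when

$$ \rho(x) \;=\; \min_{0 \le z \le 1} \big[\, z\, x^{2} \;+\; \Psi(z) \,\big], $$

where z is the analog outlier process (z ≈ 1 treats a residual x as an inlier under a quadratic penalty, z ≈ 0 rejects it) and Ψ(z) is a penalty for discounting data. Minimizing jointly over the unknowns and z reproduces the robust estimate, which is what lets line-process and robust formulations be converted into one another.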
Bayesian Modeling of Uncertainty in Low-Level Vision
, 1990
"... The need for error modeling, multisensor fusion, and robust algorithms i becoming increasingly recognized in computer vision. Bayesian modeling is a powerful, practical, and general framework for meeting these requirements. This article develops a Bayesian model for describing and manipulating the d ..."
Abstract

Cited by 204 (17 self)
 Add to MetaCart
The need for error modeling, multisensor fusion, and robust algorithms is becoming increasingly recognized in computer vision. Bayesian modeling is a powerful, practical, and general framework for meeting these requirements. This article develops a Bayesian model for describing and manipulating the dense fields, such as depth maps, associated with low-level computer vision. Our model consists of three components: a prior model, a sensor model, and a posterior model. The prior model captures a priori information about the structure of the field. We construct this model using the smoothness constraints from regularization to define a Markov Random Field. The sensor model describes the behavior and noise characteristics of our measurement system. We develop a number of sensor models for both sparse and dense measurements. The posterior model combines the information from the prior and sensor models using Bayes' rule. We show how to compute optimal estimates from the posterior model and also how to compute the uncertainty (variance) in these estimates. To demonstrate the utility of our Bayesian framework, we present three examples of its application to real vision problems. The first application is the online extraction of depth from motion. Using a two-dimensional generalization of the Kalman filter, we develop an incremental algorithm that provides a dense online estimate of depth whose accuracy improves over time. In the second application, we use a Bayesian model to determine observer motion from sparse depth (range) measurements. In the third application, we use the Bayesian interpretation of regularization to choose the optimal smoothing parameter for interpolation. The uncertainty modeling techniques that we develop, and the utility of these techniques in various applications, support our claim that Bayesian modeling is a powerful and practical framework for low-level vision.
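The three-component structure is just Bayes' rule applied to a dense field; a minimal sketch (the symbols u for the field and d for the data are mine, not the article's):

$$ p(u \mid d) \;=\; \frac{p(d \mid u)\, p(u)}{p(d)}, \qquad \hat{u}_{\mathrm{MAP}} \;=\; \arg\max_{u}\; p(d \mid u)\, p(u), $$

with the MRF smoothness prior playing the role of p(u), the sensor model supplying p(d | u), and the covariance of the posterior providing the per-pixel uncertainty estimates discussed above.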
Recognizing Facial Expressions in Image Sequences Using Local Parameterized Models of Image Motion
 International Journal of Computer Vision
, 1997
"... This paper explores the use of local parametrized models of image motion for recovering and recognizing the nonrigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space an ..."
Abstract

Cited by 192 (11 self)
 Add to MetaCart
This paper explores the use of local parametrized models of image motion for recovering and recognizing the nonrigid and articulated motion of human faces. Parametric flow models (for example affine) are popular for estimating motion in rigid scenes. We observe that within local regions in space and time, such models not only accurately model nonrigid facial motions but also provide a concise description of the motion in terms of a small number of parameters. These parameters are intuitively related to the motion of facial features during facial expressions and we show how expressions such as anger, happiness, surprise, fear, disgust, and sadness can be recognized from the local parametric motions in the presence of significant head motion. The motion tracking and expression recognition approach performed with high accuracy in extensive laboratory experiments involving 40 subjects as well as in television and movie sequences.
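For reference, the affine model named above parameterizes the flow over a facial region as (standard form; the paper also uses richer local models for nonrigid features, which this sketch omits):

$$ u(x, y) = a_0 + a_1 x + a_2 y, \qquad v(x, y) = a_3 + a_4 x + a_5 y, $$

where (u, v) is the image motion at pixel (x, y) within the region. Quantities such as divergence, deformation, and rotation are simple combinations of a_1, a_2, a_4, a_5, which is what makes the recovered parameters interpretable in terms of facial-feature motion.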
A Framework for Robust Subspace Learning
 International Journal of Computer Vision
, 2003
"... Many computer vision, signal processing and statistical problems can be posed as problems of learning low dimensional linear or multilinear models. These models have been widely used for the representation of shape, appearance, motion, etc, in computer vision applications. ..."
Abstract

Cited by 177 (10 self)
 Add to MetaCart
(Show Context)
Many computer vision, signal processing, and statistical problems can be posed as problems of learning low-dimensional linear or multilinear models. These models have been widely used for the representation of shape, appearance, motion, etc., in computer vision applications.
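A minimal statement of the model being learned (generic notation, not the paper's): each data vector d ∈ ℝⁿ is approximated in a low-dimensional linear subspace,

$$ d \;\approx\; \mu + B\,c, \qquad B \in \mathbb{R}^{n \times k},\; k \ll n, $$

where μ is the mean, the columns of B span the subspace, and c are the coefficients; "robust" subspace learning means estimating μ and B so that grossly corrupted entries of the training data do not dominate the fit.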
Robust Principal Component Analysis for Computer Vision
, 2001
"... Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance, and motion. One drawback of typical PCA methods is that they are least squares estimation techniques and hence fail to account for "outliers" which are common in realistic training sets. In ..."
Abstract

Cited by 133 (3 self)
 Add to MetaCart
Principal Component Analysis (PCA) has been widely used for the representation of shape, appearance, and motion. One drawback of typical PCA methods is that they are least-squares estimation techniques and hence fail to account for "outliers" which are common in realistic training sets. In computer vision applications, outliers typically occur within a sample (image) due to pixels that are corrupted by noise, alignment errors, or occlusion. We review previous approaches for making PCA robust to outliers and present a new method that uses an intra-sample outlier process to account for pixel outliers. We develop the theory of Robust Principal Component Analysis (RPCA) and describe a robust M-estimation algorithm for learning linear multivariate representations of high-dimensional data such as images. Quantitative comparisons with traditional PCA and previous robust algorithms illustrate the benefits of RPCA when outliers are present. Details of the algorithm are described and a software implementation is being made publicly available.
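To make the general idea concrete, here is a minimal iteratively reweighted PCA sketch in Python. It is not the paper's RPCA algorithm: it down-weights whole samples via a soft rejection weight, whereas the paper's intra-sample outlier process works per pixel; the function name and weighting scheme are mine.

import numpy as np

def robust_subspace(X, k, n_iters=20):
    """Robust subspace fit by iteratively reweighted PCA (sketch).
    X: (n_samples, n_dims) data matrix; k: subspace dimension."""
    w = np.ones(X.shape[0])                    # per-sample weights in (0, 1]
    for _ in range(n_iters):
        mu = np.average(X, axis=0, weights=w)  # weighted mean
        Xc = (X - mu) * np.sqrt(w)[:, None]    # weight the centered data
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        B = Vt[:k].T                           # (n_dims, k) subspace basis
        R = (X - mu) - (X - mu) @ B @ B.T      # reconstruction residuals
        e = (R ** 2).sum(axis=1)               # squared residual norms
        s = np.median(e) + 1e-12               # robust scale estimate
        w = s / (s + e)                        # soft outlier down-weighting
    return mu, B

# Example: 95 clean samples plus 5 gross outliers.
X = np.vstack([np.random.randn(95, 50), 10.0 * np.random.randn(5, 50)])
mu, B = robust_subspace(X, k=5)

The soft weight w = s / (s + e) plays the role of an analog outlier process: samples whose residual e greatly exceeds the robust scale s contribute almost nothing to the next subspace estimate.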
Penalized Weighted Least-Squares Image Reconstruction for Positron Emission Tomography
 IEEE Trans. on Medical Imaging
, 1994
"... This paper presents an image reconstruction method for positronemission tomography (PET) based on a penalized, weighted leastsquares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, we argue statistically that a leastsquares objective function is as approp ..."
Abstract

Cited by 117 (43 self)
 Add to MetaCart
This paper presents an image reconstruction method for positron emission tomography (PET) based on a penalized, weighted least-squares (PWLS) objective. For PET measurements that are precorrected for accidental coincidences, we argue statistically that a least-squares objective function is as appropriate, if not more so, than the popular Poisson likelihood objective. We propose a simple data-based method for determining the weights that accounts for attenuation and detector efficiency. A nonnegative successive over-relaxation (+SOR) algorithm converges rapidly to the global minimum of the PWLS objective. Quantitative simulation results demonstrate that the bias/variance tradeoff of the PWLS+SOR method is comparable to the maximum-likelihood expectation-maximization (ML-EM) method (but with fewer iterations), and is improved relative to the conventional filtered backprojection (FBP) method. Qualitative results suggest that the streak artifacts common to the FBP method are nearly eliminated.
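The PWLS objective has the generic form (standard notation for penalized weighted least squares, a sketch rather than the paper's exact symbols):

$$ \hat{x} \;=\; \arg\min_{x \ge 0}\; \tfrac{1}{2}\,(y - Ax)^{\mathsf{T}} W\,(y - Ax) \;+\; \beta\, R(x), $$

where y holds the precorrected measurements, A is the system matrix, W is a diagonal weighting matrix (here estimated from the data to account for attenuation and detector efficiency), R is a roughness penalty, and β trades data fit against smoothness; successive over-relaxation minimizes this objective one pixel at a time while enforcing the nonnegativity constraint.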
Dense Estimation and Object-Based Segmentation of the Optical Flow with Robust Techniques
, 1998
"... In this paper we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term ..."
Abstract

Cited by 113 (20 self)
 Add to MetaCart
In this paper we address the issue of recovering and segmenting the apparent velocity field in sequences of images. As for motion estimation, we minimize an objective function involving two robust terms. The first one cautiously captures the optical flow constraint, while the second (a priori) term incorporates a discontinuity-preserving smoothness constraint. To cope with the nonconvex minimization problem thus defined, we design an efficient deterministic multigrid procedure. It converges fast toward estimates of good quality, while revealing the large discontinuity structures of flow fields. We then propose an extension of the model by attaching to it a flexible object-based segmentation device based on deformable closed curves (different families of curves equipped with different kinds of prior can be easily supported). Experimental results on synthetic and natural sequences are presented, including an analysis of sensitivity to parameter tuning. Index Terms: Closed segmenting cu...
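The two-term objective can be sketched as (generic robust optical-flow notation, not the authors' exact symbols):

$$ E(w) \;=\; \sum_{s} \rho_{1}\!\big(\nabla I(s) \cdot w(s) + I_{t}(s)\big) \;+\; \alpha \sum_{\langle s, t \rangle} \rho_{2}\!\big(\lVert w(s) - w(t) \rVert\big), $$

where w is the flow field, the first sum robustly enforces the brightness-constancy (optical flow) constraint at each site s, and the second sum over neighboring sites ⟨s, t⟩ is the robust, discontinuity-preserving smoothness term; the deterministic multigrid procedure is what makes this nonconvex minimization tractable.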
Minimizers of Cost-Functions Involving Nonsmooth Data-Fidelity Terms. Application to the Processing of Outliers
, 2002
"... We present a theoretical study of the recovery of an unknown vector x ∈ Rp (such as a signal or an image) from noisy data y ∈ Rq by minimizing with respect to x a regularized costfunction F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a datafidelity term, Φ is a smooth regularization term, and α> 0 i ..."
Abstract

Cited by 105 (19 self)
 Add to MetaCart
We present a theoretical study of the recovery of an unknown vector x ∈ ℝ^p (such as a signal or an image) from noisy data y ∈ ℝ^q by minimizing with respect to x a regularized cost-function F(x, y) = Ψ(x, y) + αΦ(x), where Ψ is a data-fidelity term, Φ is a smooth regularization term, and α > 0 is a parameter. Typically, Ψ(x, y) = ‖Ax − y‖², where A is a linear operator. The data-fidelity terms Ψ involved in regularized cost-functions are generally smooth functions; only a few papers make an exception to this, and they consider restricted situations. Nonsmooth data-fidelity terms are avoided in image processing. In spite of this, we consider both smooth and nonsmooth data-fidelity terms. Our goal is to capture essential features exhibited by the local minimizers of regularized cost-functions in relation to the smoothness of the data-fidelity term. In order to fix the context of our study, we consider Ψ(x, y) = Σᵢ ψ(aᵢᵀx − yᵢ), where the aᵢᵀ are the rows of A and ψ is C^m on ℝ \ {0}. We show that if ψ′(0⁻) < ψ′(0⁺), then typical data y give rise to local minimizers x̂ of F(·, y) which fit exactly a certain number of the data entries: there is a possibly large set ĥ of indexes such that aᵢᵀx̂ = yᵢ for every i ∈ ĥ. In contrast, if ψ is ...
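A concrete instance (my example, not drawn from the abstract): the ℓ1 fidelity ψ(t) = |t| satisfies ψ′(0⁻) = −1 < 1 = ψ′(0⁺), so the cost

$$ F(x, y) \;=\; \sum_{i=1}^{q} \big\lvert a_i^{\mathsf{T}} x - y_i \big\rvert \;+\; \alpha\, \Phi(x) $$

typically has local minimizers that fit a subset ĥ of the data entries exactly, while the entries outside ĥ, the ones the minimizer declines to fit, behave like detected outliers. This is the mechanism behind using nonsmooth data-fidelity terms to process outliers.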
Penalized Maximum-Likelihood Image Reconstruction using Space-Alternating Generalized EM Algorithms
 IEEE Trans. on Image Processing
, 1995
"... Most expectationmaximization (EM) type algorithms for penalized maximumlikelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penal ..."
Abstract

Cited by 101 (30 self)
 Add to MetaCart
(Show Context)
Most expectation-maximization (EM) type algorithms for penalized maximum-likelihood image reconstruction converge slowly, particularly when one incorporates additive background effects such as scatter, random coincidences, dark current, or cosmic radiation. In addition, regularizing smoothness penalties (or priors) introduce parameter coupling, rendering intractable the M-steps of most EM-type algorithms. This paper presents space-alternating generalized EM (SAGE) algorithms for image reconstruction, which update the parameters sequentially using a sequence of small "hidden" data spaces, rather than simultaneously using one large complete-data space. The sequential update decouples the M-step, so the maximization can typically be performed analytically. We introduce new hidden-data spaces that are less informative than the conventional complete-data space for Poisson data and that yield significant improvements in convergence rate. This acceleration is due to statistical considerations, not numerical over-relaxation methods, so monotonic increases in the objective function are guaranteed. We provide a general global convergence proof for SAGE methods with nonnegativity constraints.
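Schematically, one SAGE iteration updates a single parameter block through its own small hidden-data space (generic SAGE notation; the specific hidden-data choices are the paper's contribution and are not reproduced here):

$$ \theta_{S}^{\,k+1} \;=\; \arg\max_{\theta_S}\; \mathbb{E}\Big[ \log f\big(X^{S};\, \theta_S,\, \theta_{\tilde S}^{k}\big) \,\Big|\, y;\, \theta^{k} \Big], $$

where S is the index set updated at iteration k, X^S is a hidden-data space that is complete for θ_S alone, and the remaining parameters θ_{S̃} are held fixed. Because each X^S can be chosen less informative than one monolithic complete-data space, the per-update curvature is smaller and convergence is faster, while the objective still increases monotonically.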