Nonparametric Local Transforms for Computing Visual Correspondence
, 1994
"... Abstract. We propose a new approach to the correspondence problem that makes use of nonparametric local transforms as the basis for correlation. Nonparametric local transforms rely on the relative ordering of local intensity values, and not on the intensity values themselves. Correlation using suc ..."
Abstract

Cited by 317 (8 self)
 Add to MetaCart
(Show Context)
Abstract. We propose a new approach to the correspondence problem that makes use of nonparametric local transforms as the basis for correlation. Nonparametric local transforms rely on the relative ordering of local intensity values, and not on the intensity values themselves. Correlation using such transforms can tolerate a signi cant number of outliers. This can result in improved performance near object boundaries when compared with conventional methods such as normalized correlation. We introduce two nonparametric local transforms: the rank transform, which measures local intensity, and the census transform, which summarizes local image structure. We describe some properties of these transforms, and demonstrate their utility on both synthetic and real data. 1
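As a concrete illustration of the two transforms described in the abstract, here is a minimal NumPy sketch; the window radius, bit ordering, and wrap-around border handling are illustrative choices, not the paper's exact specification:

```python
import numpy as np

def census_transform(img, radius=1):
    """Census transform: encode each pixel as a bit string recording, for each
    neighbour in a (2r+1)x(2r+1) window, whether that neighbour is darker than
    the centre. Borders wrap around (np.roll) for brevity."""
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (neighbour < img).astype(np.uint64)
    return out

def rank_transform(img, radius=1):
    """Rank transform: the number of neighbours darker than the centre pixel."""
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            neighbour = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += (neighbour < img).astype(np.int32)
    return out
```

Matching would then compare census strings by Hamming distance (and rank images by, e.g., L1 correlation), which is what makes the comparison depend only on the local ordering of intensities rather than their values.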
The Computation of Optical Flow
, 1995
"... Twodimensional image motion is the projection of the threedimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of timeordered images allow the estimation of projected twodimensional image motion as either instantaneous image velocities or discrete image dis ..."
Abstract

Cited by 292 (10 self)
 Add to MetaCart
(Show Context)
Twodimensional image motion is the projection of the threedimensional motion of objects, relative to a visual sensor, onto its image plane. Sequences of timeordered images allow the estimation of projected twodimensional image motion as either instantaneous image velocities or discrete image displacements. These are usually called the optical flow field or the image velocity field. Provided that optical flow is a reliable approximation to twodimensional image motion, it may then be used to recover the threedimensional motion of the visual sensor (to within a scale factor) and the threedimensional surface structure (shape or relative depth) through assumptions concerning the structure of the optical flow field, the threedimensional environment and the motion of the sensor. Optical flow may also be used to perform motion detection, object segmentation, timetocollision and focus of expansion calculations, motion compensated encoding and stereo disparity measurement. We investiga...
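The differential techniques surveyed in this work rest on the standard brightness-constancy assumption; as background (this is the textbook formulation, not a result specific to this survey), the optical flow constraint equation is:

```latex
% Brightness constancy: a scene point keeps its intensity as it moves,
%   I(x + u\,\delta t,\; y + v\,\delta t,\; t + \delta t) = I(x, y, t).
% A first-order Taylor expansion gives the optical flow constraint equation:
\frac{\partial I}{\partial x}\,u + \frac{\partial I}{\partial y}\,v + \frac{\partial I}{\partial t} = 0
% One linear equation per pixel in the two unknowns (u, v) -- the aperture
% problem -- so every method adds a further constraint (smoothness, local
% constancy, parametric motion models, ...) to recover the full flow field.
```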
Segmenting foreground objects from a dynamic textured background via a robust Kalman filter
 in IEEE Proceedings of the International Conference on Computer Vision
, 2003
"... The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) given timevarying, textured backgrounds. Examples of timevarying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile trafc, moving crowds, escalators, etc. We ha ..."
Abstract

Cited by 101 (0 self)
 Add to MetaCart
(Show Context)
The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) given timevarying, textured backgrounds. Examples of timevarying backgrounds include waves on water, clouds moving, trees waving in the wind, automobile trafc, moving crowds, escalators, etc. We have developed a novel foregroundbackground segmentation algorithm that explicitly accounts for the nonstationary nature and clutterlike appearance of many dynamic textures. The dynamic texture is modeled by an Autoregressive Moving Average Model (ARMA). A robust Kalman lter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results. 1
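The robustification in this kind of filter can be sketched generically. The following is a Huber-style robustified Kalman measurement update that down-weights large innovations; the weighting scheme and the constant `c` are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def robust_kalman_update(x_pred, P_pred, z, H, R, c=2.0):
    """One robustified Kalman measurement update: innovation components larger
    than c standard deviations are down-weighted by inflating their effective
    measurement noise (a generic robustification sketch)."""
    y = z - H @ x_pred                       # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    sigma = np.sqrt(np.diag(S))
    # Huber-style weights: 1 inside the c-sigma band, shrinking outside it
    w = np.minimum(1.0, c * sigma / np.maximum(np.abs(y), 1e-12))
    R_eff = R / np.outer(w, w)               # outliers -> inflated noise
    S_eff = H @ P_pred @ H.T + R_eff
    K = P_pred @ H.T @ np.linalg.inv(S_eff)  # robustified gain
    x = x_pred + K @ y
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```

With an inlier measurement this reduces to the standard Kalman update; an outlying measurement receives a tiny gain, so foreground pixels do not corrupt the estimated background appearance.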
Animat Vision: Active Vision in Artificial Animals
, 1997
"... this paper [4] and it is further developed in [5]. ..."
Abstract

Cited by 68 (12 self)
 Add to MetaCart
this paper [4] and it is further developed in [5].
Recursive Non-Linear Estimation of Discontinuous Flow Fields
 In Third European Conference on Computer Vision
, 1994
"... This paper defines a temporal continuity constraint that expresses assumptions about the evolution of 2D image velocity, or optical flow, over a sequence of images. Temporal continuity is exploited to develop an incremental minimization framework that extends the minimization of a nonconvex objecti ..."
Abstract

Cited by 51 (3 self)
 Add to MetaCart
This paper defines a temporal continuity constraint that expresses assumptions about the evolution of 2D image velocity, or optical flow, over a sequence of images. Temporal continuity is exploited to develop an incremental minimization framework that extends the minimization of a nonconvex objective function over time. Within this framework this paper describes an incremental continuation method for recursive nonlinear estimation that robustly and adaptively recovers optical flow with motion discontinuities over an image sequence.
Learning to estimate scenes from images
 Adv. Neural Information Processing Systems 11
, 1999
"... We seek the scene interpretation that best explains image data. ..."
Abstract

Cited by 40 (7 self)
 Add to MetaCart
(Show Context)
We seek the scene interpretation that best explains image data.
Visual Motion Analysis by Probabilistic Propagation of Conditional Density
, 1998
"... This thesis establishes a stochastic framework for tracking curves in visual clutter, using a Bayesian randomsampling algorithm. The approach is rooted in ideas from statistics, control theory and computer vision. The problem is to track outlines and features of foreground objects, modelled as curv ..."
Abstract

Cited by 36 (0 self)
 Add to MetaCart
This thesis establishes a stochastic framework for tracking curves in visual clutter, using a Bayesian randomsampling algorithm. The approach is rooted in ideas from statistics, control theory and computer vision. The problem is to track outlines and features of foreground objects, modelled as curves, as they move in substantial clutter, and to do it at, or close to, video framerate. The algorithm, named Condensation, for Conditional density propagation, has recently been derived independently by several researchers, and is generating signi cant interest in the statistics and signal processing communities. This thesis contributes to the literature on Condensationlike lters by presenting some novel applications of and extensions to the basic algorithm, and contributes to the visual motion estimation literature by demonstrating high tracking performance in cluttered environments. Despite its power the Condensation algorithm has a remarkably simple form and this allows the use of nonlinear motion models which combine characteristics of discrete Hidden Markov Models with the continuous AutoRegressive Process motion models traditionally used in Kalman lters. These mixed discretecontinuous models have promising applications to the emerging eld of perception of action. This thesis also implements two algorithms to smooth the output of the Condensation lter which improves the accuracy of motion estimation in a batchmode procedure after tracking is complete.
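The select-predict-measure cycle of Condensation (factored sampling) fits in a few lines; `dynamics` and `likelihood` below are hypothetical placeholders for the application-specific motion model and observation density:

```python
import numpy as np

def condensation_step(particles, weights, dynamics, likelihood, rng):
    """One Condensation (particle-filter) iteration: resample the particle set
    in proportion to its weights, propagate each particle through a stochastic
    dynamical model, then reweight by the observation likelihood."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)      # select: factored sampling
    particles = dynamics(particles[idx], rng)   # predict: stochastic dynamics
    weights = likelihood(particles)             # measure: observation density
    return particles, weights / weights.sum()   # normalise the weights
```

Iterating this on even a toy 1-D problem (Gaussian likelihood around a target state, random-walk dynamics) concentrates the weighted particle set around the target, which is the mechanism behind the tracking results described above.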
Visual Motion Estimation based on Motion Blur Interpretation
, 1995
"... When the relative velocity between the different objects in a scene and the camera is relative large  compared with the camera's exposure time  in the resulting image we have a distortion called motion blur. In the past, a lot of algorithms have been proposed for estimating the relative vel ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
(Show Context)
When the relative velocity between the different objects in a scene and the camera is relative large  compared with the camera's exposure time  in the resulting image we have a distortion called motion blur. In the past, a lot of algorithms have been proposed for estimating the relative velocity from one or, most of the time, more images. The motion blur is generally considered an extra source of noise and is eliminated, or is assumed nonexistent. Unlike most of these approaches, it is feasible to estimate the Optical Flow map using only the information encoded in the motion blur. This thesis presents an algorithm that estimates the velocity vector of an image patch using the motion blur only, in two steps. The information used for the estimation of the velocity vectors is extracted from the frequency domain, and the most computationally expensive operation is the Fast Fourier Transform that transforms the image from the spatial to the frequency domain. Consequently, the complexity...
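The core frequency-domain observation can be demonstrated in 1-D: a uniform box blur of length L places exact zeros in the signal's spectrum at multiples of N/L, so the first spectral zero reveals the blur extent. This is a simplified sketch under that box-kernel assumption; the thesis itself estimates 2-D velocity vectors per image patch:

```python
import numpy as np

def estimate_blur_length(blurred, rel_tol=1e-6):
    """Estimate the extent of a uniform 1-D motion blur from the first zero of
    the blurred signal's magnitude spectrum: a length-L box kernel zeroes the
    spectrum at bins k = N/L, 2N/L, ..., so L = N / (first zero bin)."""
    n = len(blurred)
    mag = np.abs(np.fft.rfft(blurred))
    zero_bins = np.flatnonzero(mag < rel_tol * mag.max())
    zero_bins = zero_bins[zero_bins > 0]        # ignore the DC bin
    if zero_bins.size == 0:
        raise ValueError("no spectral zero found; signal may not be box-blurred")
    return n / zero_bins[0]
```

The blur direction would additionally fix the orientation of the velocity vector; here only the magnitude along a known axis is recovered.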
A Bayesian Framework for 3D . . .
, 2001
"... We develop an evidencecombining framework for extracting locally consistent di!erential structure from curved surfaces. Existing approaches are restricted by their sequential multistage philosophy, since important information concerning the salient features of surfaces may be discarded as necessar ..."
Abstract
 Add to MetaCart
We develop an evidencecombining framework for extracting locally consistent di!erential structure from curved surfaces. Existing approaches are restricted by their sequential multistage philosophy, since important information concerning the salient features of surfaces may be discarded as necessarily condensed information is passed from stage to stage. Furthermore, since data representations are invariably unaccompanied by any index of evidential significance, the scope for subsequently refining them is limited. One way of attaching evidential support is to propagate covariances through the processing chain. However, severe problems arise in the presence of data nonlinearities, such as outliers or discontinuities. If linear processing techniques are employed covariances may be readily computed, but will be unreliable. On the other hand, if more powerful nonlinear processing techniques are applied, there are severe technical problems in computing the covariances themselves. We sidestep this dilemma by decoupling the identification of nonlinearities in the data from the fitting process itself. If outliers and discontinuities are accurately identified and excluded, then simple, linear processing techniques are effective for the fit, and reliable covariance estimates can be readily obtained. Furthermore, decoupling permits nonlinearity estimation to be cast within a powerful evidence combining framework in which both surface parameters and refined differential structure come to bear simultaneously. This effectively abandons the multistage processing philosophy. Our investigation is firmly grounded as a global MAP estimate within a Bayesian framework. Our ideas are applicable to volumetric data. For simplicity, we choose to demonstrate their eff!ectiveness on
Bayesian Color Constancy, the Maximum Local Mass . . .
, 1995
"... Computational vision algorithms are often developed in a Bayesian framework. Two estimators are commonly used: maximum a posteriori (MAP), and minimum mean squared error (MMSE). We argue that neither is appropriate for perception problems. The MAP estimator makes insufficient use of structure in the ..."
Abstract
 Add to MetaCart
Computational vision algorithms are often developed in a Bayesian framework. Two estimators are commonly used: maximum a posteriori (MAP), and minimum mean squared error (MMSE). We argue that neither is appropriate for perception problems. The MAP estimator makes insufficient use of structure in the posterior probability. The squared error penalty of the MMSE estimator does not reflect typical penalties. We apply this new estimator to color constancy. An unknown illuminant falls on surfaces of unknown colors. We seek to estimate both the illuminant spectrum and the surface spectra from photosensor responses which depend on the product of these unknown spectra. In simulations, we show that the MLM method performs better than the MAP estimator, and better than two standard color constancy algorithms. The MLM estimate may prove useful in other vision problems as well.
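The contrast between MAP and a local-mass estimator can be illustrated on a discretised 1-D posterior; the function name and window parameter below are illustrative, not the paper's continuous loss-function formulation:

```python
import numpy as np

def maximum_local_mass(grid, posterior, width):
    """Maximum-local-mass (MLM) estimate on a regular 1-D grid: return the
    parameter value whose surrounding window of total size `width` captures
    the most posterior mass (a discretised sketch of the MLM idea)."""
    dx = grid[1] - grid[0]
    half = max(1, int(round(width / (2.0 * dx))))
    window = np.ones(2 * half + 1)
    # local mass = moving sum of the posterior over the window, times dx
    local_mass = np.convolve(posterior, window, mode="same") * dx
    return grid[np.argmax(local_mass)]
```

On a posterior with a tall narrow spike and a broad lower bump, MAP picks the spike while MLM picks the bump, because the bump carries far more probability mass in its neighbourhood; this is the sense in which MLM "uses structure in the posterior" that MAP ignores.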