Results 1–10 of 93
On the fitting of surfaces to data with covariances
IEEE Trans. Pattern Anal. Mach. Intell., 2000
Cited by 75 (19 self)
Abstract—We consider the problem of estimating parameters of a model described by an equation of special form. Specific models arise in the analysis of a wide class of computer vision problems, including conic fitting and estimation of the fundamental matrix. We assume that noisy data are accompanied by (known) covariance matrices characterizing the uncertainty of the measurements. A cost function is first obtained by considering a maximum-likelihood formulation and applying certain necessary approximations that render the problem tractable. A novel, Newton-like iterative scheme is then generated for determining a minimizer of the cost function. Unlike alternative approaches such as Sampson's method or the renormalization technique, the new scheme has as its theoretical limit the minimizer of the cost function. Furthermore, the scheme is simply expressed, efficient, and unsurpassed as a general technique in our testing. An important feature of the method is that it can serve as a basis for conducting theoretical comparison of various estimation approaches.
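The covariance-weighted cost this abstract describes can be illustrated concretely. Below is a minimal sketch, not the paper's own code, of an approximated maximum-likelihood (AML) cost for fitting a line theta · (x, y, 1) = 0 to points with known covariances: each algebraic residual is divided by its gradient-propagated variance. The names `aml_cost`, `pts`, and `covs` are illustrative.

```python
import numpy as np

def aml_cost(theta, points, covs):
    """Sum of squared residuals, each scaled by its propagated variance
    r_i^2 / (d_i^T Lambda_i d_i), where d_i = (a, b) is the residual's
    gradient with respect to the point coordinates."""
    a, b, c = theta
    total = 0.0
    for (x, y), lam in zip(points, covs):
        r = a * x + b * y + c                # algebraic residual
        grad = np.array([a, b])              # d r / d (x, y)
        total += r**2 / (grad @ lam @ grad)  # covariance-weighted term
    return total

# Points exactly on the line x - y = 0, with unit covariances.
pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
covs = [np.eye(2)] * 3
true_line = np.array([1.0, -1.0, 0.0])
off_line = np.array([1.0, 0.0, -1.0])        # the line x = 1
assert aml_cost(true_line, pts, covs) < aml_cost(off_line, pts, covs)
```

A Newton-like scheme of the kind described would iterate on a cost of this shape until the minimizer is reached.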
An Information Fusion Framework for Robust Shape Tracking
, 2005
Cited by 52 (9 self)
Abstract—Existing methods for incorporating subspace model constraints in shape tracking use only partial information from the measurements and model distribution. We propose a unified framework for robust shape tracking, optimally fusing heteroscedastic uncertainties or noise from measurement, system dynamics, and a subspace model. The resulting nonorthogonal subspace projection and fusion are natural extensions of the traditional model constraint using orthogonal projection. We present two motion measurement algorithms and introduce alternative solutions for measurement uncertainty estimation. We build shape models offline from training data and exploit information from the ground truth initialization online through a strong model adaptation. Our framework is applied for tracking in echocardiograms where the motion estimation errors are heteroscedastic in nature, each heart has a distinct shape, and the relative motions of epicardial and endocardial borders reveal crucial diagnostic features. The proposed method significantly outperforms the existing shape-space-constrained tracking algorithm. Due to the complete treatment of heteroscedastic uncertainties, the strong model adaptation, and the coupled tracking of double contours, robust performance is observed even on the most challenging cases.
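The heteroscedastic fusion at the heart of this framework is inverse-covariance (BLUE) weighting of estimates. The sketch below illustrates the idea on two 2-D estimates of the same point, each accurate along a different axis; the function name `fuse` and the numbers are illustrative, not from the paper.

```python
import numpy as np

def fuse(x1, c1, x2, c2):
    """Best linear unbiased fusion of estimates (x1, C1) and (x2, C2):
    inverse-covariance weighting of the two means."""
    info = np.linalg.inv(c1) + np.linalg.inv(c2)   # combined information
    cov = np.linalg.inv(info)                      # fused covariance
    mean = cov @ (np.linalg.inv(c1) @ x1 + np.linalg.inv(c2) @ x2)
    return mean, cov

# One measurement precise along x but noisy along y, and vice versa.
x1, c1 = np.array([1.0, 0.0]), np.diag([0.01, 100.0])
x2, c2 = np.array([0.0, 1.0]), np.diag([100.0, 0.01])
mean, cov = fuse(x1, c1, x2, c2)
# The fused estimate trusts each measurement where it is accurate,
# so it lands near (1, 1), with a small fused covariance on both axes.
```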
Robust Regression with Projection-Based M-estimators
In International Conference on Computer Vision, 2003
Cited by 39 (8 self)
Abstract—The robust regression techniques in the RANSAC family are popular today in computer vision, but their performance depends on a user-supplied threshold. We eliminate this drawback of RANSAC by reformulating another robust method, the M-estimator, as a projection pursuit optimization problem. The projection-based pbM-estimator automatically derives the threshold from univariate kernel density estimates. Nevertheless, the performance of the pbM-estimator equals or exceeds that of RANSAC techniques tuned to the optimal threshold, a value which is never available in practice. Experiments were performed both with synthetic and real data in the affine motion and fundamental matrix estimation tasks.
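The key step, deriving an inlier band from a univariate kernel density estimate of projected residuals instead of a user-supplied threshold, can be sketched as follows. The mode-plus-density-drop rule used here is a simplification for illustration; the function names are not from the paper.

```python
import numpy as np

def kde(grid, samples, h):
    """Gaussian kernel density estimate evaluated on `grid`."""
    z = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

# Projected residuals: a tight inlier cluster near 0 plus scattered outliers.
res = np.array([-0.1, -0.05, 0.0, 0.05, 0.1, 3.0, -4.0, 5.0])
grid = np.linspace(-6.0, 6.0, 1201)
dens = kde(grid, res, h=0.3)
mode = grid[np.argmax(dens)]                    # density peak -> inlier center
# Inlier band: region around the mode where the density stays above
# half its peak; the outliers fall well outside it.
band = grid[dens > 0.5 * dens.max()]
assert abs(mode) < 0.2
assert band.min() > -1.0 and band.max() < 1.0
```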
Estimation of nonlinear errors-in-variables models for computer vision applications
IEEE Trans. Pattern Anal. Mach. Intell., 2006
Cited by 33 (6 self)
Abstract—In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same, or superior, performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach toward estimating nonlinear models. Index Terms—Nonlinear least squares, heteroscedastic regression, camera calibration, 3D rigid motion, uncalibrated vision.
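The "separable" constraint form mentioned above can be made concrete: for a conic, the constraint theta^T phi(x, y) = 0 is linear in the parameters once the measurements enter through the carrier vector phi(x, y) = (x^2, xy, y^2, x, y, 1). The sketch below shows only the homoscedastic linear core (a plain SVD solution) that a HEIV-style method would reweight iteratively; it is not the full estimator.

```python
import numpy as np

def carrier(x, y):
    """Carrier vector for a conic: the constraint theta^T phi = 0 is
    linear in theta even though it is quadratic in (x, y)."""
    return np.array([x * x, x * y, y * y, x, y, 1.0])

# Points on the unit circle x^2 + y^2 - 1 = 0.
angles = np.linspace(0.0, 2 * np.pi, 8, endpoint=False)
Phi = np.array([carrier(np.cos(t), np.sin(t)) for t in angles])
# Total-least-squares estimate: right singular vector of the smallest
# singular value. (This ignores heteroscedasticity; HEIV reweights it.)
theta = np.linalg.svd(Phi)[2][-1]
theta /= theta[0]                  # scale so the x^2 coefficient is 1
# Expected conic: x^2 + y^2 - 1 = 0, i.e. theta = (1, 0, 1, 0, 0, -1).
```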
Statistical efficiency of curve fitting algorithms
Computational Statistics and Data Analysis, 2004
Cited by 28 (2 self)
Abstract—We study the problem of fitting parametrized curves to noisy data. Under certain assumptions (known as Cartesian and radial functional models), we derive asymptotic expressions for the bias and the covariance matrix of the parameter estimates. We also extend Kanatani's version of the Cramér-Rao lower bound, which he proved for unbiased estimates only, to more general estimates that include many popular algorithms (most notably, the orthogonal least squares and algebraic fits). We then show that the gradient-weighted algebraic fit is statistically efficient and describe all other statistically efficient algebraic fits.
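As a concrete instance of the algebraic fits this paper analyzes, here is a minimal sketch of the unweighted algebraic circle fit: write the circle as x^2 + y^2 + Dx + Ey + F = 0 and solve a linear least-squares system for (D, E, F). (The paper's statistically efficient estimator is the gradient-weighted variant; this unweighted fit is the base case, and the function name is illustrative.)

```python
import numpy as np

def algebraic_circle_fit(xs, ys):
    """Fit x^2 + y^2 + D x + E y + F = 0 by linear least squares and
    convert (D, E, F) back to center and radius."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    b = -(xs**2 + ys**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, r

# Noise-free points on the circle centered at (1, 2) with radius 3.
t = np.linspace(0.0, 2 * np.pi, 12, endpoint=False)
xs, ys = 1.0 + 3.0 * np.cos(t), 2.0 + 3.0 * np.sin(t)
cx, cy, r = algebraic_circle_fit(xs, ys)
# Recovers center (1, 2) and radius 3 up to floating-point error.
```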
Robust Regression for Data with Multiple Structures
In 2001 IEEE Conference on Computer Vision and Pattern Recognition, volume I
Cited by 24 (3 self)
Abstract—In many vision problems (e.g., stereo, motion) multiple structures can occur in the data, in which case several instances of the same model need to be recovered from a single data set. However, once the measurement noise becomes significantly large relative to the separation between the structures, the robust statistical methods commonly used in the vision community tend to fail. In this paper, we show that all these techniques are special cases of the general class of M-estimators with auxiliary scale, and explain their failure in the presence of noisy multiple structures. To be able to cope with data containing multiple structures the techniques innate to vision (Hough and RANSAC) should be combined with the robust methods customary in statistics. The implications of our analysis are illustrated by introducing a simple procedure for 2D multi-structured data problematic for all known current techniques.
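The sequential "fit one structure, remove its inliers, repeat" strategy that such analyses start from can be sketched as a RANSAC-style line fit. For determinism, this toy version tries every point pair instead of random sampling, and uses well-separated noise-free structures (the regime where the strategy still works); names and data are illustrative.

```python
import numpy as np
from itertools import combinations

def best_line(points, tol):
    """Return (inlier_mask, (a, b, c)) for the pair-defined line
    a x + b y + c = 0 with the most points within distance `tol`."""
    best = None
    for i, j in combinations(range(len(points)), 2):
        (x1, y1), (x2, y2) = points[i], points[j]
        a, b = y2 - y1, x1 - x2          # line direction -> normal
        norm = np.hypot(a, b)
        if norm == 0:
            continue
        c = -(a * x1 + b * y1)
        d = np.abs(points @ np.array([a, b]) + c) / norm
        mask = d < tol
        if best is None or mask.sum() > best[0].sum():
            best = (mask, (a / norm, b / norm, c / norm))
    return best

# Two structures: 5 points on y = 0 and 4 points on y = 5.
pts = np.array([[x, 0.0] for x in range(5)] + [[x, 5.0] for x in range(4)])
mask1, line1 = best_line(pts, tol=0.1)         # first structure
mask2, line2 = best_line(pts[~mask1], tol=0.1) # refit on the remainder
assert mask1.sum() == 5 and mask2.sum() == 4
```

The paper's point is that this clean separation breaks down once noise becomes comparable to the gap between the structures.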
From FNS to HEIV: A Link between Two Vision Parameter Estimation Methods
IEEE Trans. Pattern Anal. Mach. Intell., 2004
Cited by 20 (4 self)
Abstract—Problems requiring accurate determination of parameters from image-based quantities arise often in computer vision. Two recent, independently developed frameworks for estimating such parameters are the FNS and HEIV schemes. Here, it is shown that FNS and a core version of HEIV are essentially equivalent, solving a common underlying equation via different means. The analysis is driven by the search for a nondegenerate form of a certain generalized eigenvalue problem, and effectively leads to a new derivation of the relevant case of the HEIV algorithm. This work may be seen as an extension of previous efforts to rationalize and interrelate a spectrum of estimators, including the renormalization method of Kanatani and the normalized eight-point method of Hartley. Index Terms—Statistical methods, maximum likelihood, (un)constrained minimization, fundamental matrix, epipolar equation.
A New Constrained Parameter Estimator For Computer Vision Applications
Cited by 18 (4 self)
Abstract—Previous work of the authors developed a theoretically well-founded scheme (FNS) for finding the minimiser of a class of cost functions. Various problems in video analysis, stereo vision, ellipse-fitting, etc., may be expressed in terms of finding such a minimiser. However, in common with many other approaches, it is necessary to correct the minimiser as a post-process if an ancillary constraint is also to be satisfied. In this paper we develop the first integrated scheme (CFNS) for simultaneously minimising the cost function and satisfying the constraint. Preliminary experiments in the domain of fundamental-matrix estimation show that CFNS generates rank-2 estimates with smaller cost function values than rank-2-corrected FNS estimates. Furthermore, when compared with the Hartley-Zisserman Gold Standard method, CFNS is seen to generate results of comparable quality in a fraction of the time.
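The post-hoc correction that CFNS is designed to replace is standard and easy to sketch: an unconstrained fundamental-matrix estimate is projected to the nearest rank-2 matrix (in Frobenius norm) by zeroing its smallest singular value. The matrix below is a stand-in for an estimate, not data from the paper.

```python
import numpy as np

def rank2_correct(F):
    """Project F to the nearest rank-2 matrix by dropping the smallest
    singular value in its SVD."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

F = np.array([[3.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 1.0]])          # full-rank stand-in estimate
F2 = rank2_correct(F)
assert abs(np.linalg.det(F2)) < 1e-12   # rank-2: determinant vanishes
```

CFNS's argument is that projecting after minimization can increase the cost, whereas enforcing det F = 0 inside the minimization avoids this.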
Automatic Detection Of Circular Objects By Ellipse Growing
International Journal of Image and Graphics, 2004
Cited by 16 (4 self)
Abstract—We present a new method for the automatic detection of circular objects in images: we detect an osculating circle to an elliptic arc using a Hough transform, iteratively deforming it into an ellipse, removing outlier pixels, and searching for a separate edge. The voting space for the Hough transform is restricted to one and two dimensions for efficiency, and special weighting schemes are introduced to enhance the accuracy. We demonstrate the effectiveness of our method using real images. Finally, we apply our method to the calibration of a turntable for 3D object shape reconstruction.
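The coarse Hough voting that seeds such a detector can be sketched for the simplest case of a known radius: each edge pixel votes for every center lying at distance r from it, and the accumulator peak gives the circle center. This unweighted, fixed-radius version is a simplification for illustration; the paper's restricted 1-D/2-D voting spaces and weighting schemes are more refined.

```python
import numpy as np
from collections import Counter

def hough_circle_centers(edge_points, r, n_angles=8):
    """Accumulate center votes: each edge pixel votes for the n_angles
    rounded candidate centers at distance r from it."""
    votes = Counter()
    for x, y in edge_points:
        for phi in np.linspace(0.0, 2 * np.pi, n_angles, endpoint=False):
            cx = int(round(x - r * np.cos(phi)))
            cy = int(round(y - r * np.sin(phi)))
            votes[(cx, cy)] += 1
    return votes

# Four edge pixels on the circle centered at (5, 5) with radius 3.
edges = [(8, 5), (2, 5), (5, 8), (5, 2)]
votes = hough_circle_centers(edges, r=3)
center, count = votes.most_common(1)[0]
assert center == (5, 5) and count == 4   # true center wins the vote
```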
Revisiting Hartley's Normalized Eight-Point Algorithm
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003
Cited by 16 (4 self)
Abstract—Hartley's eight-point algorithm has maintained an important place in computer vision, notably as a means of providing an initial value of the fundamental matrix for use in iterative estimation methods. In this paper, a novel explanation is given for the improvement in performance of the eight-point algorithm that results from using normalized data. It is first established that the normalized algorithm acts to minimize a specific cost function. It is then shown that this cost function is statistically better founded than the cost function associated with the non-normalized algorithm. This augments the original argument that improved performance is due to the better conditioning of a pivotal matrix. Experimental results are given that support the adopted approach. This work continues a wider effort to place a variety of estimation techniques within a coherent framework. Index Terms—Epipolar equation, fundamental matrix, eight-point algorithm, data normalization.
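The normalization step this paper reinterprets is Hartley's: translate the points so their centroid is at the origin and scale so the average distance from the origin is sqrt(2), then run the eight-point algorithm on the transformed data. A minimal sketch (function name illustrative):

```python
import numpy as np

def normalizing_transform(points):
    """Hartley normalization: return the 3x3 matrix T (acting on
    homogeneous coordinates) that moves the centroid to the origin and
    makes the mean distance from the origin sqrt(2)."""
    centroid = points.mean(axis=0)
    mean_dist = np.linalg.norm(points - centroid, axis=1).mean()
    s = np.sqrt(2.0) / mean_dist
    return np.array([[s, 0.0, -s * centroid[0]],
                     [0.0, s, -s * centroid[1]],
                     [0.0, 0.0, 1.0]])

pts = np.array([[0.0, 0.0], [4.0, 0.0], [4.0, 2.0], [0.0, 2.0]])
T = normalizing_transform(pts)
hom = np.column_stack([pts, np.ones(len(pts))]) @ T.T
norm_pts = hom[:, :2]
# Centroid is now at the origin; mean distance from it is sqrt(2).
```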