Results 1–10 of 20
Point Matching under Large Image Deformations and Illumination Changes
 IEEE Trans. Pattern Anal. Mach. Intell.
, 2004
Abstract

Cited by 51 (8 self)
To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first-order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and also takes the illumination changes between the two images into account. Subpixel …
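The robust M-estimation framework mentioned in this abstract can be sketched in miniature as iteratively reweighted least squares (IRLS). The Huber weight function, MAD scale estimate, and toy line-fitting problem below are illustrative assumptions, not the authors' actual formulation, which couples optical flow with color-distribution matching:

```python
import numpy as np

def huber_weights(r, c=1.345):
    """Huber weight: 1 for small residuals, c/|r| beyond the threshold."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > c
    w[mask] = c / a[mask]
    return w

def irls(A, b, iters=20):
    """Iteratively reweighted least squares with Huber weights."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]  # ordinary LS start
    for _ in range(iters):
        r = b - A @ x
        s = np.median(np.abs(r)) / 0.6745 + 1e-12  # robust (MAD) scale
        w = huber_weights(r / s)
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

# Line fit y = 2 t + 1 with one gross outlier: IRLS downweights it.
t = np.linspace(0.0, 1.0, 20)
b = 2.0 * t + 1.0
b[5] += 10.0  # outlier
A = np.column_stack([t, np.ones_like(t)])
x = irls(A, b)
```

Because the outlier's weight shrinks as `c*s/|r|`, the final fit is driven almost entirely by the 19 inliers.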
Estimation of nonlinear errors-in-variables models for computer vision applications
 IEEE Trans. Pattern Anal. Mach. Intell.
, 2006
Abstract

Cited by 34 (6 self)
In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator performs as well as, or better than, these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach toward estimating nonlinear models.
Index Terms—Nonlinear least squares, heteroscedastic regression, camera calibration, 3D rigid motion, uncalibrated vision.
1. Modeling Computer Vision Problems. Solving most computer vision problems requires the estimation of a set of parameters from noisy measurements using a statistical model. A statistical model provides a mathematical description of a problem in terms of a constraint equation relating the measurements to the …
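As a much-simplified special case of the errors-in-variables setting (homoscedastic noise and a single linear constraint, so not the HEIV algorithm itself), a 2-D line can be fitted while treating both coordinates as noisy via total least squares; the data below are an illustrative assumption:

```python
import numpy as np

def tls_line(pts):
    """Fit a 2-D line n . (p - centroid) = 0 by total least squares.
    Both coordinates are treated as noisy (the EIV view), so we minimize
    orthogonal distances: the normal is the right singular vector of the
    centered data associated with the smallest singular value."""
    c = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - c)
    return Vt[-1], c  # (unit normal, centroid)

# Noisy samples of the line y = 2 x + 1.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
pts = np.column_stack([t, 2.0 * t + 1.0])
pts += 0.01 * rng.standard_normal(pts.shape)
n, c = tls_line(pts)
slope = -n[0] / n[1]
```

Unlike ordinary regression of y on x, this estimate is symmetric in the two coordinates, which is the essential point of the EIV formulation.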
Statistical optimization for geometric fitting: Theoretical accuracy analysis and high order error analysis
 Int. J. Comput. Vis.
, 2008
Abstract

Cited by 18 (12 self)
A rigorous accuracy analysis is given to various techniques for estimating parameters of geometric models from noisy data for computer vision applications. First, it is pointed out that parameter estimation for vision applications is very different in nature from traditional statistical analysis and hence a different mathematical framework is necessary in such a domain. After general theories on estimation and accuracy are given, typical existing techniques are selected, and their accuracy is evaluated up to higher-order terms. This leads to a “hyperaccurate” method that outperforms existing methods.
Further improving geometric fitting
 Proc. 5th Int. Conf. 3D Digital Imaging and Modeling
, 2005
Abstract

Cited by 6 (4 self)
We give a formal definition of geometric fitting in a way that suits computer vision applications. We point out that the performance of geometric fitting should be evaluated in the limit of small noise rather than in the limit of a large number of data, as recommended in the statistical literature. Taking the KCR lower bound as an optimality requirement and focusing on the linearized constraint case, we compare the accuracy of Kanatani’s renormalization with maximum likelihood (ML) approaches, including the FNS of Chojnacki et al. and the HEIV of Leedan and Meer. Our analysis reveals the existence of a method superior to all of these.
Uncertainty Modeling and Geometric Inference
, 2004
Abstract

Cited by 2 (2 self)
We investigate the meaning of “statistical methods” for geometric inference based on image feature points. Tracing back the origin of feature uncertainty to image processing operations, we discuss the implications of asymptotic analysis in reference to “geometric fitting” and “geometric model selection”. We point out that a correspondence exists between the standard statistical analysis and the geometric inference problem. We also compare the capability of the “geometric AIC” and the “geometric MDL” in detecting degeneracy. Next, we review recent progress in geometric fitting techniques for linear constraints, describing the “FNS method”, the “HEIV method”, the “renormalization method”, and other related techniques. Finally, we discuss the “Neyman-Scott problem” and “semiparametric models” in relation to geometric inference. We conclude that applying statistical methods requires careful consideration of the nature of the problem in question.
A Bilinear Approach to the Parameter Estimation of a General Heteroscedastic Linear System with Application to Conic Fitting
Abstract

Cited by 2 (1 self)
A bilinear approach to the parameter estimation of a general heteroscedastic linear system, with application to conic fitting
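The heteroscedasticity that motivates this line of work is easy to see in plain algebraic conic fitting, sketched below as an illustrative baseline (this is not the bilinear method of the entry): the carrier vector is quadratic in the noisy image coordinates, so the induced noise on the linear system is point-dependent.

```python
import numpy as np

def fit_conic(x, y):
    """Direct algebraic conic fit a x^2 + b xy + c y^2 + d x + e y + f = 0.
    Stack the carrier vectors [x^2, xy, y^2, x, y, 1] and take the right
    singular vector of the smallest singular value. Because the carriers
    are quadratic in the noisy coordinates, the effective noise is
    point-dependent (heteroscedastic), which plain SVD ignores."""
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1]  # conic coefficients, defined up to scale

# Sanity check on noise-free points of the unit circle:
# the recovered conic should be x^2 + y^2 - 1 = 0 up to scale.
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
theta = fit_conic(np.cos(t), np.sin(t))
theta = theta / theta[0]  # normalize so the x^2 coefficient is 1
```

With noisy points, this estimator is biased precisely because of the heteroscedastic noise structure, which is what heteroscedastic formulations such as the one in this entry aim to correct.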
Hyperaccuracy for geometric fitting
 4th Int. Workshop on Total Least Squares and Errors-in-Variables Modelling
, 2006
Abstract

Cited by 1 (1 self)
A rigorous accuracy analysis is given to various techniques for estimating parameters of geometric models from noisy data. It is first pointed out that parameter estimation for computer vision applications is very different in nature from traditional statistical analysis and that a different mathematical framework is necessary in such a domain. After general theories on estimation and accuracy are given, typical existing techniques are selected, and their accuracy is evaluated up to higher-order terms. This leads to a “hyperaccurate” method that outperforms existing methods.
Vision Res.
, 1976
Abstract

Cited by 1 (0 self)
Objective: We used ERP measures to investigate how attentional filtering requirements affect preparatory attentional control and spatially selective visual processing. Methods: In a spatial cueing experiment, attentional filtering demands were manipulated by presenting task-relevant visual stimuli either in isolation (target-only task) or together with irrelevant adjacent distractors (target-plus-distractors task). ERPs were recorded in response to informative spatial precues, and in response to subsequent visual stimuli at attended and unattended locations. Results: The preparatory ADAN component elicited during the cue-target interval was larger and more sustained in the target-plus-distractors task, reflecting the demand for stronger attentional filtering. By contrast, two other preparatory lateralised components (EDAN and LDAP) were unaffected by the attentional filtering demand. Similar enhancements of the P1 and N1 components in response to the lateral imperative visual stimuli were observed at cued versus uncued locations, regardless of filtering demand, whereas later attention-related negativities beyond 200 ms post-stimulus were larger in the target-plus-distractors task. Conclusions: Our results indicate that the ADAN component is linked to preparatory top-down control processes involved in the attentional filtering of irrelevant distractors; such filtering also affects later attention-related negativities recorded after the onset of the imperative stimulus. Significance: ERPs can reveal effects of expected attentional filtering of irrelevant distractors on preparatory attentional control processes and spatially selective visual processing.
When are simple LS estimators enough? An empirical study of LS, TLS and GTLS
 IJCV
, 2005
Abstract

Cited by 1 (1 self)
A variety of least-squares estimators of significantly different complexity and generality are available to solve overconstrained linear systems. The most theoretically general may not necessarily be the best choice in practice; problem conditions may be such that simpler and faster algorithms, even if theoretically inferior, would yield acceptable errors. We investigate when this may happen using homography estimation as the reference problem. We study the errors of the LS, TLS, equilibrated TLS, and GTLS algorithms with different noise types and varying intensity and correlation levels. To allow direct comparisons with algorithms from the applied mathematics and computer vision communities, we consider both inhomogeneous and homogeneous systems. We add noise to image coordinates and system matrix entries in separate experiments, to take into account the effect of preprocessing data transformations on noise properties (heteroscedasticity). We find that the theoretically most general algorithms may not always be worth their higher complexity; comparable results are obtained with moderate levels of noise intensity and correlation. We identify such levels quantitatively for the reference problem, thus suggesting when simpler algorithms can be applied with limited errors in spite of their restrictive assumptions.
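A toy version of the comparison this abstract describes, LS versus TLS on an overconstrained linear system with noise in both the matrix and the right-hand side, can be sketched as follows (the dimensions, noise level, and true parameter vector are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([1.0, -2.0, 0.5])
n, sigma = 2000, 0.3

# Overconstrained system; noise corrupts BOTH the matrix and the
# right-hand side (the errors-in-variables setting).
A0 = rng.standard_normal((n, 3))
A = A0 + sigma * rng.standard_normal((n, 3))
b = A0 @ x_true + sigma * rng.standard_normal(n)

# Ordinary LS assumes A is exact, so noisy A causes attenuation bias.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# TLS treats A and b symmetrically: take the right singular vector of
# the augmented matrix [A | b] for the smallest singular value, then
# rescale so its last entry is -1.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
v = Vt[-1]
x_tls = -v[:3] / v[3]
```

With this much matrix noise, LS is visibly biased toward zero while TLS stays close to the true parameters; at low noise the two estimates nearly coincide, which is the regime where the paper argues the simpler estimator suffices.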