Estimation of nonlinear errors-in-variables models for computer vision applications
 IEEE Trans. Pattern Anal. Mach. Intell., 2006
Cited by 33 (6 self)
Abstract—In an errors-in-variables (EIV) model, all the measurements are corrupted by noise. The class of EIV models with constraints separable into the product of two nonlinear functions, one solely in the variables and one solely in the parameters, is general enough to represent most computer vision problems. We show that the estimation of such nonlinear EIV models can be reduced to iteratively estimating a linear model having a point-dependent, i.e., heteroscedastic, noise process. Particular cases of the proposed heteroscedastic errors-in-variables (HEIV) estimator are related to other techniques described in the vision literature: the Sampson method, renormalization, and the fundamental numerical scheme. In a wide variety of tasks, the HEIV estimator exhibits the same or superior performance as these techniques and has a weaker dependence on the quality of the initial solution than the Levenberg-Marquardt method, the standard approach to estimating nonlinear models. Index Terms—Nonlinear least squares, heteroscedastic regression, camera calibration, 3D rigid motion, uncalibrated vision.
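As a minimal illustration of that separable structure (a sketch, not the HEIV algorithm itself), consider the epipolar constraint x2ᵀ F x1 = 0: it factors into a carrier vector built solely from the measurements and a parameter vector built solely from F, so the constraint is exactly linear in the parameters. The matrix F and the points below are made-up values chosen for the sketch:

```python
import numpy as np

def carrier(x1, x2):
    """Carrier vector u(x) for the epipolar constraint:
    x2^T F x1 = u(x)^T theta, with theta = F flattened row-wise."""
    return np.outer(x2, x1).ravel()  # 9-vector, depends only on the data

# Hypothetical example: a rank-2 matrix F and a noise-free correspondence.
F = np.array([[ 0., -1.,  2.],
              [ 1.,  0., -3.],
              [-2.,  3.,  0.]])            # skew-symmetric, hence rank 2
x1 = np.array([1.0, 2.0, 1.0])             # homogeneous point in image 1
l2 = F @ x1                                # its epipolar line in image 2
x2 = np.array([0.0, -l2[2] / l2[1], 1.0])  # any point on that line

theta = F.ravel()                          # depends only on the parameters
# The bilinear constraint is exactly linear in theta:
assert abs(carrier(x1, x2) @ theta) < 1e-12
assert abs(x2 @ F @ x1) < 1e-12
```

This separation is what lets the nonlinear EIV problem be attacked as a sequence of linear estimates with data-dependent (heteroscedastic) noise covariances.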
Y.: Performance evaluation of iterative geometric fitting algorithms
 Comput. Stat. Data Anal., 2007
Cited by 14 (11 self)
The convergence performance of typical numerical schemes for geometric fitting in computer vision applications is compared. First, the problem and the associated KCR lower bound are stated. Then, three well-known fitting algorithms are described: FNS, HEIV, and renormalization. To these, we add a special variant of Gauss-Newton iterations. For initialization of the iterations, random choice, least squares, and Taubin’s method are tested. Simulation is conducted for fundamental matrix computation and ellipse fitting, which reveals the different characteristics of each method.
High accuracy fundamental matrix computation and its performance evaluation
 Proc. 17th British Machine Vision Conf. (BMVC 2006), vol. 1, 2006
Cited by 14 (10 self)
We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization, to which we add Gauss-Newton iterations. For initial values, we test random choice, least squares, and Taubin’s method. Experiments using simulated and real images reveal the different characteristics of each method. Overall, FNS exhibits the best convergence performance.
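The least-squares initialization tested in these comparisons can be sketched as the standard algebraic method: stack one linear epipolar constraint per correspondence and take the right singular vector for the smallest singular value. This is a hedged sketch on synthetic, noise-free data; the papers' actual implementations may differ in details such as data normalization:

```python
import numpy as np

def ls_fundamental(x1s, x2s):
    """Algebraic least-squares estimate of F from >= 8 correspondences:
    minimizes ||A theta|| over unit vectors theta, where each row of A
    is the carrier outer(x2, x1).ravel() of one correspondence."""
    A = np.stack([np.outer(x2, x1).ravel() for x1, x2 in zip(x1s, x2s)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)  # right singular vector of smallest sigma

# Synthetic check: noise-free correspondences generated from a known F.
rng = np.random.default_rng(0)
F_true = np.array([[0., -1., 2.], [1., 0., -3.], [-2., 3., 0.]])
x1s, x2s = [], []
for _ in range(12):
    x1 = np.append(rng.uniform(-1, 1, 2), 1.0)
    l = F_true @ x1                               # epipolar line a x + b y + c = 0
    x = rng.uniform(-1, 1)
    x2 = np.array([x, -(l[0] * x + l[2]) / l[1], 1.0])
    x1s.append(x1); x2s.append(x2)

F_est = ls_fundamental(x1s, x2s)
# F is recovered up to scale and sign: all epipolar residuals vanish.
assert all(abs(x2 @ F_est @ x1) < 1e-9 for x1, x2 in zip(x1s, x2s))
```

With noisy data this algebraic solution is biased, which is exactly why it serves only as an initial value for FNS, HEIV, renormalization, or Gauss-Newton iterations.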
High accuracy computation of rank-constrained fundamental matrix by efficient search
 Proc. 10th Meeting Image Recog. Understand. (MIRU 2007), 2007
Cited by 13 (7 self)
A new method is presented for computing the fundamental matrix from point correspondences: its singular value decomposition (SVD) is optimized by the Levenberg-Marquardt (LM) method. The search is initialized by optimal correction of the unconstrained ML estimate. There is no need for tentative 3D reconstruction. The accuracy achieves the theoretical bound (the KCR lower bound).
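The rank constraint det F = 0 that such searches must respect is often imposed a posteriori by SVD truncation. The sketch below shows that generic correction step (not this paper's optimal-correction initialization, which is a different, statistically weighted procedure):

```python
import numpy as np

def enforce_rank2(F):
    """Project a 3x3 matrix to the nearest rank-2 matrix in Frobenius
    norm by zeroing its smallest singular value -- the usual a posteriori
    way of imposing det F = 0 on a fundamental matrix estimate."""
    U, s, Vt = np.linalg.svd(F)
    s[2] = 0.0
    return U @ np.diag(s) @ Vt

F = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.2, 1.0]])           # full-rank example matrix
F2 = enforce_rank2(F)
assert abs(np.linalg.det(F2)) < 1e-12     # now rank deficient
assert np.linalg.matrix_rank(F2) == 2
```

Methods that instead search directly within the rank-2 manifold (as here, by optimizing the SVD factors) avoid the accuracy loss this blunt truncation can introduce.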
Extended FNS for constrained parameter estimation
 In: Proc. 10th Meeting Image Recog. Understand., 2007
Cited by 9 (8 self)
We present a new method, called “EFNS” (“extended FNS”), for linearizable constrained maximum likelihood estimation. This complements the CFNS of Chojnacki et al. and is a true extension of the FNS of Chojnacki et al. to an arbitrary number of intrinsic constraints. Computing the fundamental matrix as an illustration, we demonstrate that CFNS does not necessarily converge to a correct solution, while EFNS converges to an optimal value which nearly satisfies the theoretical accuracy bound (the KCR lower bound).
Compact fundamental matrix computation
 Proc. 3rd Pacific Rim Symp. Image and Video Technology, 2009
Cited by 6 (4 self)
A very compact algorithm is presented for fundamental matrix computation from point correspondences over two images. The computation is based on the strict maximum likelihood (ML) principle, minimizing the reprojection error. The rank constraint is incorporated by the EFNS procedure. Although our algorithm produces the same solution as all existing ML-based methods, it is probably the most practical of all, being small and simple. By numerical experiments, we confirm that our algorithm behaves as expected.
Highest Accuracy Fundamental Matrix Computation
Cited by 4 (3 self)
We compare algorithms for fundamental matrix computation, which we classify into “a posteriori correction”, “internal access”, and “external access”. By experimental comparison, we show that the 7-parameter Levenberg-Marquardt (LM) search and the extended FNS (EFNS) exhibit the best performance and that additional bundle adjustment does not increase the accuracy to any noticeable degree.
Y.: Small algorithm for fundamental matrix computation
 In: Proc. Meeting Image Recognition and Understanding, 2008
Cited by 2 (2 self)
A very small algorithm is presented for computing the fundamental matrix from point correspondences over two images. The computation is based on the strict maximum likelihood (ML) principle, minimizing the reprojection error. The rank constraint is incorporated by the EFNS procedure. Although our algorithm produces the same solution as all existing ML-based methods, it is probably the smallest of all. By numerical experiments, we confirm that our algorithm behaves as expected.
Fundamental Matrix Computation: Theory and Practice
 2007
Cited by 1 (0 self)
We classify and review existing algorithms for computing the fundamental matrix from point correspondences and propose new effective schemes: 7-parameter Levenberg-Marquardt (LM) search, EFNS, and EFNS-based bundle adjustment. By experimental comparison, we show that EFNS and the 7-parameter LM search exhibit the best performance and that additional bundle adjustment does not increase the accuracy to any noticeable degree.