Results 1–10 of 28
Y.: Performance evaluation of iterative geometric fitting algorithms, Comput. Stat. Data Anal., 2007
Cited by 14 (11 self)
The convergence performance of typical numerical schemes for geometric fitting for computer vision applications is compared. First, the problem and the associated KCR lower bound are stated. Then, three well-known fitting algorithms are described: FNS, HEIV, and renormalization. To these, we add a special variant of Gauss-Newton iterations. For initialization of iterations, random choice, least squares, and Taubin’s method are tested. Simulation is conducted for fundamental matrix computation and ellipse fitting, which reveals different characteristics of each method. © 2007 Published by Elsevier B.V. All rights reserved.
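The abstract lists least squares among the tested initializations. For context, here is a minimal sketch of the generic direct algebraic least-squares conic fit commonly used to initialize iterative schemes such as FNS; the function name is ours, and this is an illustration, not the paper's own code.

```python
import numpy as np

def ls_conic_fit(pts):
    """Direct algebraic least-squares conic fit: find the unit 6-vector
    theta = (A, B, C, D, E, F) minimizing ||Xi @ theta|| for the conic
    A x^2 + B x y + C y^2 + D x + E y + F = 0, given an (N, 2) point array."""
    x, y = pts[:, 0], pts[:, 1]
    Xi = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The minimizer of ||Xi theta|| subject to ||theta|| = 1 is the
    # right singular vector belonging to the smallest singular value.
    _, _, Vt = np.linalg.svd(Xi)
    return Vt[-1]
```

For noise-free points on a conic, the returned vector satisfies the conic equation exactly; under noise it serves only as a starting point for the iterative methods compared in the paper.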
Statistical optimization for geometric fitting: Theoretical accuracy analysis and high order error analysis, Int. J. Comput. Vis., 2008
Cited by 14 (8 self)
A rigorous accuracy analysis is given for various techniques for estimating parameters of geometric models from noisy data for computer vision applications. First, it is pointed out that parameter estimation for vision applications is very different in nature from traditional statistical analysis, and hence a different mathematical framework is necessary in such a domain. After general theories on estimation and accuracy are given, typical existing techniques are selected, and their accuracy is evaluated up to higher order terms. This leads to a “hyperaccurate” method that outperforms existing methods.
High accuracy fundamental matrix computation and its performance evaluation, Proc. 17th British Machine Vision Conf. (BMVC 2006), vol. 1, 2006
Cited by 14 (10 self)
We compare the convergence performance of different numerical schemes for computing the fundamental matrix from point correspondences over two images. First, we state the problem and the associated KCR lower bound. Then, we describe the algorithms of three well-known methods: FNS, HEIV, and renormalization, to which we add Gauss-Newton iterations. For initial values, we test random choice, least squares, and Taubin’s method. Experiments using simulated and real images reveal different characteristics of each method. Overall, FNS exhibits the best convergence performance.
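The "least squares" initial value tested here corresponds to the classical unnormalized eight-point estimate of the fundamental matrix, followed by rank-2 enforcement. A minimal sketch under that assumption (our own illustration, not the authors' implementation):

```python
import numpy as np

def eight_point_F(x1, x2):
    """Unnormalized least-squares (eight-point) estimate of the fundamental
    matrix from correspondences x1[i] <-> x2[i] (each an (N, 2) array).
    The rank-2 constraint is enforced afterwards by zeroing the smallest
    singular value of the linear estimate."""
    N = len(x1)
    A = np.zeros((N, 9))
    for i in range(N):
        u, v = x1[i]
        up, vp = x2[i]
        # Row encodes the epipolar constraint x2^T F x1 = 0, linear in F.
        A[i] = [up * u, up * v, up, vp * u, vp * v, vp, u, v, 1]
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)        # smallest right singular vector of A
    U, S, Vt2 = np.linalg.svd(F)
    S[2] = 0.0                      # enforce det(F) = 0
    return U @ np.diag(S) @ Vt2
```

In practice this estimate is usually computed from normalized coordinates (Hartley's normalization) before being handed to an iterative refiner such as FNS.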
N.: Error analysis for circle fitting algorithms, Electronic Journal of Statistics
Cited by 13 (0 self)
We study the problem of fitting circles (or circular arcs) to data points observed with errors in both variables. A detailed error analysis for all popular circle fitting methods – geometric fit, Kåsa fit, Pratt fit, and Taubin fit – is presented. Our error analysis goes deeper than the traditional expansion to the leading order. We obtain higher order terms, which show exactly why and by how much circle fits differ from each other. Our analysis allows us to construct a new algebraic (non-iterative) circle fitting algorithm that outperforms all the existing methods, including the (previously regarded as unbeatable) geometric fit.
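Of the four fits analyzed, the Kåsa fit is the simplest: a linear least-squares solve for the circle equation x² + y² + Bx + Cy + D = 0. A minimal sketch of that standard formulation (an illustration, not the paper's analysis code):

```python
import numpy as np

def kasa_fit(pts):
    """Kåsa algebraic circle fit: solve x^2 + y^2 + B*x + C*y + D = 0
    for (B, C, D) by linear least squares, then recover the center
    (-B/2, -C/2) and radius sqrt(B^2/4 + C^2/4 - D)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (B, C, D), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -B / 2.0, -C / 2.0
    r = np.sqrt(cx**2 + cy**2 - D)
    return cx, cy, r
```

The paper's point is precisely that this simplicity has a cost: the Kåsa fit is heavily biased toward small circles on short arcs, which the higher-order error analysis quantifies.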
High accuracy computation of rank-constrained fundamental matrix by efficient search, Proc. 10th Meeting Image Recog. Understand. (MIRU 2007), 2007
Cited by 13 (7 self)
A new method is presented for computing the fundamental matrix from point correspondences: its singular value decomposition (SVD) is optimized by the Levenberg-Marquardt (LM) method. The search is initialized by optimal correction of unconstrained ML. There is no need for tentative 3D reconstruction. The accuracy achieves the theoretical bound (the KCR lower bound).
Ellipse Fitting with Hyperaccuracy
Cited by 11 (6 self)
For fitting an ellipse to a point sequence, ML (maximum likelihood) has been regarded as having the highest accuracy. In this paper, we demonstrate the existence of a “hyperaccurate” method which outperforms ML. This is made possible by error analysis of ML followed by subtraction of high-order bias terms. Since ML nearly achieves the theoretical accuracy bound (the KCR lower bound), the resulting improvement is very small. Nevertheless, our analysis has theoretical significance, illuminating the relationship between ML and the KCR lower bound.
Extended FNS for constrained parameter estimation, Proc. 10th Meeting Image Recog. Understand., 2007
Cited by 9 (8 self)
We present a new method, called “EFNS” (“extended FNS”), for linearizable constrained maximum likelihood estimation. This complements the CFNS of Chojnacki et al. and is a true extension of the FNS of Chojnacki et al. to an arbitrary number of intrinsic constraints. Computing the fundamental matrix as an illustration, we demonstrate that CFNS does not necessarily converge to a correct solution, while EFNS converges to an optimal value which nearly satisfies the theoretical accuracy bound (the KCR lower bound).
Compact fundamental matrix computation, Proc. 3rd Pacific Rim Symp. Image and Video Technology, 2009
Cited by 6 (4 self)
A very compact algorithm is presented for fundamental matrix computation from point correspondences over two images. The computation is based on the strict maximum likelihood (ML) principle, minimizing the reprojection error. The rank constraint is incorporated by the EFNS procedure. Although our algorithm produces the same solution as all existing ML-based methods, it is probably the most practical of all, being small and simple. By numerical experiments, we confirm that our algorithm behaves as expected.
Compact Algorithm for Strictly ML Ellipse Fitting
Cited by 5 (4 self)
A very compact algorithm is presented for fitting an ellipse to points in images by maximum likelihood (ML) in the strict sense. Although our algorithm produces the same solution as existing ML-based methods, it is probably the simplest and the smallest of all. By numerical experiments, we show that the strict ML solution practically coincides with the Sampson solution.
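The "Sampson solution" mentioned here minimizes the first-order (Sampson) approximation of the geometric distance to the conic. A sketch of that distance for a general conic θ = (A, B, C, D, E, F), assuming the usual definition (algebraic residual divided by its gradient norm); the function name is ours:

```python
import numpy as np

def sampson_distance(theta, pt):
    """First-order (Sampson) approximation of the geometric distance from
    a point to the conic A x^2 + B x y + C y^2 + D x + E y + F = 0:
    |Q(x, y)| / ||grad Q(x, y)||."""
    A, B, C, D, E, F = theta
    x, y = pt
    Q = A * x**2 + B * x * y + C * y**2 + D * x + E * y + F  # algebraic residual
    Qx = 2 * A * x + B * y + D                                # dQ/dx
    Qy = B * x + 2 * C * y + E                                # dQ/dy
    return abs(Q) / np.hypot(Qx, Qy)
```

Summing this quantity squared over all data points gives the Sampson cost whose minimizer, per the abstract, practically coincides with the strict ML (reprojection-error) solution.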
Unified Computation of Strict Maximum Likelihood for Geometric Fitting
Cited by 5 (4 self)
A new numerical scheme is presented for strictly computing maximum likelihood (ML) of geometric fitting problems. Methods intensively studied in the past first transform the data into a computationally convenient form and then assume Gaussian noise in the transformed space. In contrast, our method assumes Gaussian noise in the original data space. It is shown that the strict ML solution can be computed by iteratively using existing methods. Then, our method is applied to ellipse fitting and fundamental matrix computation. Our method is also shown to encompass optimal correction, computing, e.g., perpendiculars to an ellipse and triangulating stereo images. While such applications have been studied individually, our method generalizes them into an application-independent form from a unified point of view.
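As an illustration of the "perpendiculars to an ellipse" application, here is a standard Newton iteration for the foot of the perpendicular from a point onto an axis-aligned ellipse, a much-simplified special case of the paper's unified optimal correction (the parametrization and the unsafeguarded Newton loop are our own sketch, not the paper's scheme):

```python
import numpy as np

def project_to_ellipse(p, a, b, iters=50):
    """Foot of the perpendicular from point p onto the axis-aligned ellipse
    x^2/a^2 + y^2/b^2 = 1, via Newton's method on the ellipse parameter t,
    where q(t) = (a cos t, b sin t).  Solves f(t) = (p - q(t)) . q'(t) = 0,
    i.e. the residual vector must be orthogonal to the tangent."""
    px, py = p
    t = np.arctan2(py / b, px / a)  # heuristic starting parameter
    for _ in range(iters):
        f = (-a * px * np.sin(t) + b * py * np.cos(t)
             + (a * a - b * b) * np.sin(t) * np.cos(t))
        fp = (-a * px * np.cos(t) - b * py * np.sin(t)
              + (a * a - b * b) * np.cos(2 * t))
        t -= f / fp                  # plain Newton step (no damping/safeguard)
    return np.array([a * np.cos(t), b * np.sin(t)])
```

The paper's contribution is that such per-application constructions (ellipse perpendiculars, stereo triangulation, etc.) fall out of one application-independent iteration instead of being derived case by case.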