From FNS to HEIV: A Link between Two Vision Parameter Estimation Methods
IEEE Trans. Pattern Anal. Mach. Intell., 2004
Cited by 20 (4 self)
Abstract — Problems requiring accurate determination of parameters from image-based quantities arise often in computer vision. Two recent, independently developed frameworks for estimating such parameters are the FNS and HEIV schemes. Here, it is shown that FNS and a core version of HEIV are essentially equivalent, solving a common underlying equation via different means. The analysis is driven by the search for a non-degenerate form of a certain generalized eigenvalue problem, and effectively leads to a new derivation of the relevant case of the HEIV algorithm. This work may be seen as an extension of previous efforts to rationalize and interrelate a spectrum of estimators, including the renormalization method of Kanatani and the normalized eight-point method of Hartley. Index Terms — Statistical methods, maximum likelihood, (un)constrained minimization, fundamental matrix, epipolar equation
Influence of numerical conditioning on the accuracy of relative orientation
IEEE Conf. on Computer Vision and Pattern Recognition, 2007
Cited by 2 (0 self)
We study the influence of numerical conditioning on the accuracy of two closed-form solutions to the overconstrained relative orientation problem. We consider the well-known eight-point algorithm and the recent five-point algorithm, and evaluate changes in their performance due to Hartley's normalization and Mühlich's equilibration. The need for numerical conditioning is introduced by explaining the known occurrence of the bias of the eight-point algorithm towards forward motion. Then it is shown how conditioning can be used to improve the results of the recent five-point algorithm. This is not straightforward, since the conditioning disturbs the calibration of the input data. The conditioning therefore needs to be reverted before enforcing the internal cubic constraints of the essential matrix. The obtained improvements are less dramatic than in the case of the eight-point algorithm, for which we offer a plausible explanation. The theoretical claims are backed up with extensive experimentation on noisy artificial datasets, under a variety of geometric and imaging parameters.
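Hartley's normalization, central to the conditioning discussed in this abstract, is standard enough to sketch. A minimal numpy version (the function name `hartley_normalize` is ours, not from the paper): translate the points to their centroid and scale them isotropically so that the mean distance from the origin is √2.

```python
import numpy as np

def hartley_normalize(pts):
    """Hartley's isotropic normalization: translate points to their
    centroid, then scale so the mean distance from the origin is
    sqrt(2). pts is an (N, 2) array; returns the normalized (N, 2)
    points and the 3x3 similarity transform T that produced them."""
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    mean_dist = np.mean(np.linalg.norm(centered, axis=1))
    s = np.sqrt(2) / mean_dist
    T = np.array([[s, 0.0, -s * centroid[0]],
                  [0.0, s, -s * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    pts_norm = (T @ pts_h.T).T
    return pts_norm[:, :2], T
```

The returned transform lets the estimate computed in normalized coordinates be de-normalized afterwards, e.g. F = T2ᵀ F̂ T1 when estimating a fundamental matrix from two normalized point sets.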
A practical rank-constrained eight-point algorithm for fundamental matrix estimation
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013
Cited by 2 (0 self)
Due to its simplicity, the eight-point algorithm has been widely used in fundamental matrix estimation. Unfortunately, the rank-2 constraint of a fundamental matrix is enforced via a posterior rank correction step, thus leading to non-optimal solutions to the original problem. To address this drawback, existing algorithms need to solve either a very high order polynomial or a sequence of convex relaxation problems, both of which are computationally inefficient and numerically unstable. In this work, we present a new rank-2 constrained eight-point algorithm, which directly incorporates the rank-2 constraint in the minimization process. To avoid singularities, we propose to solve seven subproblems and retrieve their globally optimal solutions by using tailored polynomial system solvers. Our proposed method is non-iterative, computationally efficient and numerically stable. Experimental results have verified its superiority over existing algebraic-error-based algorithms in terms of accuracy, as well as its advantages when used to initialize geometric-error-based algorithms.
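For contrast with the single-step approach described above, the posterior rank correction step that the paper criticizes is the standard SVD projection: zero the smallest singular value to obtain the nearest rank-2 matrix in Frobenius norm. A minimal numpy sketch (this is the classical step, not the paper's constrained solver):

```python
import numpy as np

def enforce_rank2(F):
    """Posterior rank correction: project F onto the nearest rank-2
    matrix in Frobenius norm by zeroing its smallest singular value.
    This is the second, separate step of the classical eight-point
    algorithm that rank-constrained methods fold into the fit."""
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```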
Rank-Constrained Fundamental Matrix Estimation by Polynomial Global Optimization Versus the Eight-Point Algorithm
2012
Cited by 2 (0 self)
The fundamental matrix can be estimated from point matches. The current gold standard is to bootstrap the eight-point algorithm and two-view projective bundle adjustment. The eight-point algorithm first computes a simple linear least squares solution by minimizing an algebraic cost and then computes the closest rank-deficient matrix. This article proposes a single-step method that solves both steps of the eight-point algorithm at once. Using recent results from polynomial global optimization, our method finds the rank-deficient matrix that exactly minimizes the algebraic cost. The current gold standard is known to be extremely effective but is nonetheless outperformed by our rank-constrained method bootstrapping bundle adjustment. This is demonstrated here on simulated and standard real datasets. With our initialization, bundle adjustment consistently finds a better local minimum (achieves a lower reprojection error) and takes fewer iterations to converge.
A Bilinear Approach to the Parameter Estimation of a general Heteroscedastic Linear System with Application to Conic Fitting
Cited by 2 (1 self)
Computation of homographies
In British Machine Vision Conference, Electronic Proceedings, 2005
Cited by 1 (0 self)
A new method for the non-iterative computation of a homography matrix is described. Rearrangement of the equations leads to a block-partitioned sparse matrix, facilitating a residualization based on orthogonal matrix projections. This improves the handling of the error structure of the linear system of equations. The vanishing line is treated as the principal component in the estimation process. This estimate is more robust, since the position of the vanishing line depends only on the relative position and orientation of the camera to the observed plane, and is invariant to the structure of the points observed on the plane. A flop count indicates that the new method is 11 times faster for four point correspondences, with the speedup converging to a factor of 5 for a large number of points. Furthermore, a new non-iterative method of treating error in both images is derived. Combining the forward H and reverse G projections in a suitable manner eliminates the systematic bias of the estimation and the first-order error; a strict bound on the error reduction is derived. This can be achieved faster than a classical DLT due to the improved numerical efficiency. Results of Monte Carlo simulations are presented to verify the performance.
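As a point of reference for the speedups quoted above, the classical DLT baseline can be sketched in a few lines of numpy (the function name is ours; Hartley-style normalization, which should precede this step in practice, is omitted for brevity):

```python
import numpy as np

def dlt_homography(src, dst):
    """Classical DLT: each correspondence (x, y) -> (u, v) contributes
    two rows to the design matrix A; the homography vector h is the
    right singular vector of A with smallest singular value.
    src, dst are (N, 2) arrays with N >= 4."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the projective scale
```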
A Consistency Result for the Normalized Eight-Point Algorithm
2007
A recently proposed argument to explain the improved performance of the eight-point algorithm that results from using normalized data [IEEE Trans. Pattern Anal. Mach. Intell., 25(9):1172–1177, 2003] relies upon adoption of a certain model for statistical data distribution. Under this model, the cost function that underlies the algorithm operating on the normalized data is statistically more advantageous than the cost function that underpins the algorithm using unnormalized data. Here we extend this explanation by introducing a more refined, structured model for data distribution. Under the extended model, the normalized eight-point algorithm turns out to be approximately consistent in a statistical sense. The proposed extension provides a link between the existing statistical rationalization of the normalized eight-point algorithm and the approach of Mühlich and Mester for enhancing total least squares estimation methods via equilibration. Our contribution forms part of a wider effort to rationalize and interrelate foundational methods in vision parameter estimation.
A Convex Optimization Approach to Robust Fundamental Matrix Estimation
This paper considers the problem of estimating the fundamental matrix from corrupted point correspondences. A general non-convex framework is proposed that explicitly takes into account the rank-2 constraint on the fundamental matrix and the presence of noise and outliers. The main result of the paper shows that this non-convex problem can be solved by solving a sequence of convex semidefinite programs, obtained by exploiting a combination of polynomial optimization tools and rank minimization techniques. Further, the algorithm can be easily extended to handle the case where only some of the correspondences are labeled, and to exploit co-occurrence information, if available. Experiments consistently show that the proposed method works well, even in scenarios characterized by a very high percentage of outliers.
Michot et al.: Algebraic Line Search for Bundle Adjustment
Bundle Adjustment is based on nonlinear least squares minimization techniques, such as Levenberg-Marquardt and Gauss-Newton. It iteratively computes local parameter increments. Line Search techniques aim at providing an efficient magnitude for these increments, called the step length. In this paper, a new ad hoc Line Search technique for solving bundle adjustment is proposed. The main idea is to determine an efficient step length using an approximation of the cost function based on an algebraic distance. We use the Wolfe conditions to show that our Line Search preserves the convergence properties of the original algorithm. Our method is compared to different nonlinear optimization algorithms and Line Search techniques under several conditions, on real and synthetic data. The method improves the minimization process, decreasing the reprojection error significantly faster than the other techniques.
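The paper's algebraic line search is not reproduced here, but the generic backtracking scheme such methods are benchmarked against, enforcing the Armijo sufficient-decrease condition (the first of the Wolfe conditions mentioned above), is a short sketch; the parameter names c1 and rho are conventional, not taken from the paper:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, alpha0=1.0, c1=1e-4, rho=0.5):
    """Backtracking line search enforcing the Armijo (sufficient
    decrease) condition: accept step length alpha once
    f(x + alpha*d) <= f(x) + c1 * alpha * grad_f(x).dot(d),
    halving alpha (rho = 0.5) until it holds. d must be a descent
    direction, i.e. grad_f(x).dot(d) < 0."""
    fx = f(x)
    slope = grad_f(x) @ d
    alpha = alpha0
    while f(x + alpha * d) > fx + c1 * alpha * slope:
        alpha *= rho
    return alpha
```

For a descent direction this loop terminates, since sufficiently small steps along d always satisfy the Armijo inequality.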
Direct and Specific Fitting of Conics to Scattered Data
A new method to fit specific types of conics to scattered data points is introduced. Direct, specific fitting of ellipses and hyperbolæ is achieved by imposing a quadratic constraint on the conic coefficients, whereby a refined partitioning of the design matrix improves computational efficiency and numerical stability by eliminating redundant aspects of the fitting procedure. Fitting of parabolas is achieved by determining an orthogonal basis vector set in the Grassmannian space of quadratic conic forms. The linear combination of the basis vectors which fulfills the parabolic condition and has a minimum residual is determined using Lagrange multipliers.
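The paper's constrained, type-specific fit is not reproduced here; as a point of reference, the plain type-agnostic algebraic least-squares conic fit that such methods refine can be sketched as the smallest right singular vector of the design matrix (the function name is ours):

```python
import numpy as np

def fit_conic_algebraic(pts):
    """Plain algebraic least-squares conic fit: the coefficients of
    a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 are taken as the right
    singular vector of the design matrix with smallest singular
    value. No ellipse/hyperbola/parabola constraint is imposed."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # defined up to scale
```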