
## A Bilinear Approach to the Parameter Estimation of a General Heteroscedastic Linear System with Application to Conic Fitting

Citations: 2 (1 self)

### Citations

4702 | Multiple view geometry in computer vision.
- Hartley, Zisserman
- 2000
Citation Context: ...Specifically, $W_r = \sum_{i=1}^{r} \sigma_i u_i v_i^T$ (6), $\|W - W_r\|_2 = \sigma_{r+1}$ (7), and $\|W - W_r\|_F^2 = \sum_{j=r+1}^{p} \sigma_j^2$ (8). From the optimality measured by the Frobenius norm, the estimate by (6) is also the ML estimate [12, 28, 29], if the noise in the matrix W is i.i.d. Gaussian. However, the SVD method does not work on an incomplete matrix (with missing data). Moreover, the solution by (6) is not optimal if the noise in W doe...
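The Eckart-Young properties quoted in this context (equations (6)-(8)) can be checked numerically. A minimal NumPy sketch, with function and variable names of my own choosing:

```python
import numpy as np

def closest_rank_r(W, r):
    # Eq. (6): build W_r from the top-r singular triplets,
    # W_r = sum_{i<=r} sigma_i * u_i * v_i^T
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))
Wr = closest_rank_r(W, 2)
s = np.linalg.svd(W, compute_uv=False)

# Eq. (7): the 2-norm error equals the first discarded singular value.
assert np.isclose(np.linalg.norm(W - Wr, 2), s[2])
# Eq. (8): the squared Frobenius error is the sum of the remaining sigma_j^2.
assert np.isclose(np.linalg.norm(W - Wr, 'fro') ** 2, np.sum(s[2:] ** 2))
```

The sketch also illustrates the limitation the context mentions: `np.linalg.svd` has no notion of missing entries, which is why the bilinear/factorization approaches discussed elsewhere in this list are needed for incomplete matrices.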

428 | Direct least squares fitting of ellipses.
- Fitzgibbon, Pilu, et al.
- 1999
Citation Context: ...neral theory in section 3. With this aim, we mainly compare our approach with other competing approaches to this problem, including FNS [6], HEIV [22, 23], KAN [20, 21] and the constrained TLS method [9]. The method in [9] is a specific implementation of the TLS method [13] for the conic fitting problem, as pointed out in [23]; in particular it enforces that the solution is an ellipse. It has been e...

276 | Parameter estimation techniques: A tutorial with application to conic fitting,”
- Zhang
- 1997
Citation Context: ...full rank, due to noise. Many optimization approaches and their associated objective functions have been proposed to solve this parameter estimation problem, as can be found in a comprehensive survey [33]. Among them, a straightforward solution to (1) is the right singular vector of W associated with the least singular value. Such a solution is usually called the TLS estimate [13], because it mini...
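The TLS estimate described in this context is a one-liner on top of the SVD: the right singular vector paired with the least singular value minimizes $\|W\theta\|$ over unit-norm $\theta$. A small sketch (names illustrative):

```python
import numpy as np

def tls_estimate(W):
    # Rows of Vt are the right singular vectors, ordered by decreasing
    # singular value; the last one is the TLS solution.
    _, _, Vt = np.linalg.svd(W)
    return Vt[-1]

rng = np.random.default_rng(1)
W = rng.standard_normal((20, 5))
theta = tls_estimate(W)

s = np.linalg.svd(W, compute_uv=False)
# The attained residual is exactly the least singular value.
assert np.isclose(np.linalg.norm(W @ theta), s[-1])
assert np.isclose(np.linalg.norm(theta), 1.0)
```

As the surrounding contexts note, this estimate is biased when the noise is heteroscedastic, which motivates the HEIV/FNS/renormalization alternatives compared in the paper.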

125 | Motion Segmentation with Missing Data using PowerFactorization and GPCA.
- Vidal, Hartley
- 2004
Citation Context: ..., as can be found in [6-8, 22, 23, 25, 26]. Another active research topic is to employ the bilinear approach to calculate the low-rank approximation of a large matrix in some challenging environments [11, 24, 27, 30, 31], where the traditional SVD [10] does not work or its solution is not optimal. Here, in this paper, we apply the bilinear approach to solve the parameter estimation problem in a general heteroscedasti...

115 | Principal component analysis with missing data and its application to polyhedral object modeling
- Shum, Ikeuchi, et al.
- 1995
Citation Context: ..., as can be found in [6-8, 22, 23, 25, 26]. Another active research topic is to employ the bilinear approach to calculate the low-rank approximation of a large matrix in some challenging environments [11, 24, 27, 30, 31], where the traditional SVD [10] does not work or its solution is not optimal. Here, in this paper, we apply the bilinear approach to solve the parameter estimation problem in a general heteroscedasti...

94 | Heteroscedastic regression in computer vision: Problems with bilinear constraint,
- Leedan, Meer
- 2000
Citation Context: ...e conic fitting, to validate the correctness of our general theory in section 3. With this aim, we mainly compare our approach with other competing approaches to this problem, including FNS [6], HEIV [22, 23], KAN [20, 21] and the constrained TLS method [9]. The method in [9] is a specific implementation of the TLS method [13] for the conic fitting problem, as pointed out in [23]; in particular it enforc...

83 | Factorization with uncertainty.
- Irani, Anandan
- 2000
Citation Context: ...between the carriers in $w_i$ is first obtained from a linearization process (3); then the parameters θ are estimated by minimizing the Mahalanobis distance $\sum_{i=1}^{m} (w_i - w_{io})^T C_i^{-} (w_i - w_{io})$ (4), where $C_i^{-}$ is the pseudo-inverse of $C_i$ and $w_{io}$ is the underlying ground truth of $w_i$. This minimization problem is reduced to a generalized eigenproblem, where the generalized eigenvector, associate...
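The Mahalanobis objective in equation (4) of this context is easy to evaluate directly. A sketch, assuming per-carrier covariances are available and using the Moore-Penrose pseudo-inverse as the context specifies (the function name `heiv_cost` is mine, not from the paper):

```python
import numpy as np

def heiv_cost(w, w0, C):
    # Eq. (4): sum over carriers of (w_i - w_i0)^T C_i^+ (w_i - w_i0),
    # where C_i^+ is the pseudo-inverse of the (possibly singular)
    # covariance C_i and w_i0 is the ground truth of w_i.
    total = 0.0
    for wi, wi0, Ci in zip(w, w0, C):
        d = wi - wi0
        total += d @ np.linalg.pinv(Ci) @ d
    return total

# With identity covariances the cost reduces to a plain squared
# Euclidean distance: here 1.0 + 1.0 = 2.0.
w  = [np.array([1.0, 2.0]), np.array([3.0, 0.0])]
w0 = [np.array([1.0, 1.0]), np.array([2.0, 0.0])]
C  = [np.eye(2), np.eye(2)]
assert np.isclose(heiv_cost(w, w0, C), 2.0)
```

The actual HEIV estimator does not evaluate this cost by brute force; as the context says, the minimization is reduced to a generalized eigenproblem.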

77 | Active tracking of foveated feature clusters using affine structure
- Reid, Murray
- 1996
Citation Context: ...Specifically, $W_r = \sum_{i=1}^{r} \sigma_i u_i v_i^T$ (6), $\|W - W_r\|_2 = \sigma_{r+1}$ (7), and $\|W - W_r\|_F^2 = \sum_{j=r+1}^{p} \sigma_j^2$ (8). From the optimality measured by the Frobenius norm, the estimate by (6) is also the ML estimate [12, 28, 29], if the noise in the matrix W is i.i.d. Gaussian. However, the SVD method does not work on an incomplete matrix (with missing data). Moreover, the solution by (6) is not optimal if the noise in W doe...

76 | On the fitting of surfaces to data with covariances,
- Chojnacki, Brooks, et al.
- 2000
Citation Context: ...rmalization method [19-21]. The idea behind this is to approximately equalize the noise in all carriers. Other general approaches to this heteroscedastic problem include HEIV [22, 23, 25, 26] and FNS [6, 8]. In the HEIV model, the covariance matrix $C_i$ between the carriers in $w_i$ is first obtained from a linearization process; then the parameters θ are estimated by minimizing the Mahalanobis distance: ...

70 | A unified factorization algorithm for points, line segments and planes with uncertain models
- Morris, Kanade
- 1999
Citation Context: ..., as can be found in [6-8, 22, 23, 25, 26]. Another active research topic is to employ the bilinear approach to calculate the low-rank approximation of a large matrix in some challenging environments [11, 24, 27, 30, 31], where the traditional SVD [10] does not work or its solution is not optimal. Here, in this paper, we apply the bilinear approach to solve the parameter estimation problem in a general heteroscedasti...

67 | Linear fitting with missing data: Applications to structure from motion and to characterizing intensity images
- Jacobs
- 1997
Citation Context: ...in (12), or $s_i$ in (13), can be separately calculated as the least squares (LS) solution, which minimizes $\hat{r}'_i = \arg\min_{r'_i} \| S^T (r'_i)^T - (w'_i)^T \|_F^2$ (14) or $\hat{s}_i = \arg\min_{s_i} \| R s_i - w_i \|_F^2$ (15). Note the similarity between (12) and (13), or between (14) and (15). In (12) or (14), each row of R, $r'_i$, needs to be computed; and similarly, each column of S, $s_i$, needs to be computed in (13...

59 | A general method for errors-in-variables problems in computer vision.
- Matei, Meer
- 2000
Citation Context: ...view and devised the renormalization method [19-21]. The idea behind this is to approximately equalize the noise in all carriers. Other general approaches to this heteroscedastic problem include HEIV [22, 23, 25, 26] and FNS [6, 8]. In the HEIV model, the covariance matrix $C_i$ between the carriers in $w_i$ is first obtained from a linearization process; then the parameters θ are estimated by minimizing the Mahalan...

57 | Linear fitting with missing data for structure-from-motion
- Jacobs
Citation Context: ..., needs to be computed in (13) or (15). Intrinsically, these two sub-problems are the same: to solve a linear system. This way, each sub-step of the iteration is reduced to solving a linear system $Ax = b$ (16), with the LS solution $\hat{x} = A^{-} b$ (17). (In [31], the bilinear approach is called the PowerFactorization method. In the following, a matrix is usually denoted by a bold capital letter, e.g. W. Its ...
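The alternation described in this context, where each sub-step reduces to a linear least-squares solve via a pseudo-inverse as in (16)-(17), can be sketched as follows. This is a toy version under simplifying assumptions (complete data, no per-entry weighting); the function name is mine:

```python
import numpy as np

def bilinear_step(W, R):
    # Fix R: every column s_i of S solves min ||R s_i - w_i||, i.e. the
    # LS solution s_i = R^+ w_i of eq. (17), done for all columns at once.
    S = np.linalg.pinv(R) @ W
    # Fix S: every row of R solves the symmetric row-wise LS problem.
    R = W @ np.linalg.pinv(S)
    return R, S

rng = np.random.default_rng(2)
W = rng.standard_normal((8, 3)) @ rng.standard_normal((3, 10))  # exactly rank 3
R = rng.standard_normal((8, 3))                                 # random start
for _ in range(20):
    R, S = bilinear_step(W, R)

# On an exactly rank-3 matrix the alternation recovers W = R S.
assert np.linalg.norm(R @ S - W) < 1e-8
```

Unlike the truncated SVD, each sub-step here only needs linear solves, which is what lets the full method (with the paper's weighting and missing-data handling added) cope with incomplete or heteroscedastic data.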

53 | The total least squares problem: computational aspects and analysis.
- Huffel
- 1991
Citation Context: ...", P. Chen and D. Suter. 1 Introduction. Parameter estimation in a heteroscedastic system has become an active subject, in order to overcome the difficulties of the total least squares (TLS) method [13], as can be found in [6-8, 22, 23, 25, 26]. Another active research topic is to employ the bilinear approach to calculate the low-rank approximation of a large matrix in some challenging environments ...

48 | Recovering the missing components in a large noisy low-rank matrix: Application to sfm.
- Chen, Suter
- 2004
Citation Context: ...ic tool for calculating the low-rank matrix approximation. The principle behind the SVD [10] states that any matrix $W \in R^{m \times n}$ can be decomposed as $W = U \Sigma V^T$ (5), where $U \in O^{m \times m}$, $V \in O^{n \times n}$, and $\Sigma = \mathrm{diag}\{\sigma_1, \sigma_2, \ldots, \sigma_p\} \in R^{m \times n}$, with $p = \min(m, n)$ and $\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_p \ge 0$. An important fact [10] is that one can easily construct $W_r$, the closest rank-r approximation of W, measu...

43 | Provably-convergent iterative methods for projective structure from motion.
- Mahamud, Hebert, et al.
- 2001

38 | Statistical Bias of Conic Fitting and Renormalization
- Kanatani
- 1994
Citation Context: ...g, to validate the correctness of our general theory in section 3. With this aim, we mainly compare our approach with other competing approaches to this problem, including FNS [6], HEIV [22, 23], KAN [20, 21] and the constrained TLS method [9]. The method in [9] is a specific implementation of the TLS method [13] for the conic fitting problem, as pointed out in [23]; in particular it enforces that the so...

31 | Unbiased estimation and statistical analysis of 3-d rigid motion from two views
- Kanatani
- 1993
Citation Context: ...$(\mathrm{vec}(W) - l)^T C^{+} (\mathrm{vec}(W) - l)$ (18), where $C^{+} = \sum_{i=1}^{mn} u_i u_i^T / \sigma_i^2$. The vector $l = [l_1^T, \ldots, l_m^T]^T \in R^{mn}$, with $l_i \in R^n$, is associated with a rank n-1 matrix $L = [l_1, \ldots, l_m]^T \in R^{m \times n}$ (19). In plain language, the minimization of the objectio...

23 | Affine structure and motion from points, lines and conics
- Kahl, Heyden
- 1999
Citation Context: ...$(\mathrm{vec}(W) - l)^T C^{+} (\mathrm{vec}(W) - l)$ (18), where $C^{+} = \sum_{i=1}^{mn} u_i u_i^T / \sigma_i^2$. The vector $l = [l_1^T, \ldots, l_m^T]^T \in R^{mn}$, with $l_i \in R^n$, is associated with a rank n-1 matrix $L = [l_1, \ldots, l_m]^T \in R^{m \times n}$ (19). In plain language, the minimization of the objection function ...

21 | Factorization as a rank 1 problem
- Aguiar, Moura
- 1999
Citation Context: ...aches and their associated objective functions have been proposed to solve this parameter estimation problem, as can be found in a comprehensive survey [33]. Among them, a straightforward solution to (1) is the right singular vector of W associated with the least singular value. Such a solution is usually called the TLS estimate [13], because it minimizes the following objective function (2): ...

20 | From FNS to HEIV: A link between two vision parameter estimation methods
- Chojnacki, Brooks, et al.
- 2004
Citation Context: ...rmalization method [19-21]. The idea behind this is to approximately equalize the noise in all carriers. Other general approaches to this heteroscedastic problem include HEIV [22, 23, 25, 26] and FNS [6, 8]. In the HEIV model, the covariance matrix $C_i$ between the carriers in $w_i$ is first obtained from a linearization process; then the parameters θ are estimated by minimizing the Mahalanobis distance: ...

16 | Revisiting Hartley's normalized eight-point algorithm
- Chojnacki, Brooks, et al.
Citation Context: ...ant fact [10] is that one can easily construct $W_r$, the closest rank-r approximation of W, measured by the 2-norm or Frobenius norm. Specifically, $W_r = \sum_{i=1}^{r} \sigma_i u_i v_i^T$ (6), $\|W - W_r\|_2 = \sigma_{r+1}$ (7), and $\|W - W_r\|_F^2 = \sum_{j=r+1}^{p} \sigma_j^2$ (8). From the optimality measured by the Frobenius norm, the estimate by (6) is also the ML estimate [12, 28, 29], if the noise in the matrix W is i.i.d. Gaussian. Howeve...

16 | Structure and motion from points, lines and conics with affine cameras
- Kahl, Heyden
- 1998
Citation Context: ...This way, each sub-step of the iteration is reduced to solving a linear system $Ax = b$ (16), with the LS solution $\hat{x} = A^{-} b$ (17). (In [31], the bilinear approach is called the PowerFactorization method. In the following, a matrix is usually denoted by a bold capital letter, e.g. W. Its i-th column is denoted by $w_i$ and its i-th row is denoted by $w'_i$.) ...

15 | Estimation with Bilinear Constraints in Computer Vision
- Leedan, Meer
- 1998
Citation Context: ...nality makes the problem challenging to the TLS method. For example, a biased estimate is obtained by the TLS method if the noisy points come from a segment of the conic, as testified experimentally [22, 23] and proved theoretically [20, 21]. In order to overcome the difficulties introduced by the non-i.i.d. Gaussianality, Kanatani analyzed this problem from a geometric statistics view and devised the r...

14 | Rank 1 weighted factorization for 3d structure recovery: algorithms and performance analysis
- Aguiar, Moura
Citation Context: ..., the covariance matrix $C_i$ between the carriers in $w_i$ is first obtained from a linearization process (3); then the parameters θ are estimated by minimizing the Mahalanobis distance $\sum_{i=1}^{m} (w_i - w_{io})^T C_i^{-} (w_i - w_{io})$ (4), where $C_i^{-}$ is the pseudo-inverse of $C_i$ and $w_{io}$ is the underlying ground truth of $w_i$. This minimization problem is reduced to a generalized eigenproblem, where the genera...

14 | Reduction of bias in maximum likelihood ellipse fitting
- Matei, Meer
- 2000
Citation Context: ...view and devised the renormalization method [19-21]. The idea behind this is to approximately equalize the noise in all carriers. Other general approaches to this heteroscedastic problem include HEIV [22, 23, 25, 26] and FNS [6, 8]. In the HEIV model, the covariance matrix $C_i$ between the carriers in $w_i$ is first obtained from a linearization process; then the parameters θ are estimated by minimizing the Mahalan...

12 | Estimation of rank deficient matrices from partial observations: Two-step iterative algorithms
- Guerreiro, Aguiar
- 2003

10 | Fitting a Second Degree Curve in the Presence of Error
- Werman, Geyzel
- 1995
Citation Context: ...ing, or in general second-order curve fitting, is to analyze the bias of the estimates. Ideally, the estimates are unbiased, like Kanatani's renormalization method [20] and Werman and Geyzel's method [32], which have been explicitly proved to be unbiased. Note that Werman and Geyzel's method [32] is for ...

1 | Matrix Computations, 3rd ed.
- Golub, Loan
- 1996
Citation Context: ...her active research topic is to employ the bilinear approach to calculate the low-rank approximation of a large matrix in some challenging environments [11, 24, 27, 30, 31], where the traditional SVD [10] does not work or its solution is not optimal. Here, in this paper, we apply the bilinear approach to solve the parameter estimation problem in a general heteroscedastic environment. First, we review ...

1 | Statistical Optimization for Geometric Computation: Theory and Practice
- Kanatani
- 1996
Citation Context: ...ng to the TLS method. For example, a biased estimate is obtained by the TLS method if the noisy points come from a segment of the conic, as testified experimentally [22, 23] and proved theoretically [20, 21]. In order to overcome the difficulties introduced by the non-i.i.d. Gaussianality, Kanatani analyzed this problem from a geometric statistics view and devised the renormalization method [19-21]. The...