Results 1 - 10 of 640
Intrinsic statistics on Riemannian manifolds: Basic tools for geometric measurements, 1999
"... Measurements of geometric primitives, such as rotations or rigid transformations, are often noisy and we need to use statistics either to reduce the uncertainty or to compare measurements. Unfortunately, geometric primitives often belong to manifolds and not vector spaces. We have already shown [9] ..."
Abstract
-
Cited by 202 (24 self)
- Add to MetaCart
(Show Context)
Measurements of geometric primitives, such as rotations or rigid transformations, are often noisy, and we need statistics either to reduce the uncertainty or to compare measurements. Unfortunately, geometric primitives often belong to manifolds rather than vector spaces. We have already shown [9] that generalizing even simple statistical notions too hastily can lead to paradoxes. In this article, we develop some basic probabilistic tools for working on Riemannian manifolds: the notions of mean value, covariance matrix, normal law, Mahalanobis distance, and χ² test. We also present an efficient algorithm to compute the mean value, and tractable approximations of the normal and χ² laws for small variances.
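As an illustration of the mean-value notion discussed in this abstract, the following is a minimal sketch (not the authors' code) of the usual fixed-point iteration for the Fréchet/Karcher mean, written here for the unit sphere S² with its exp and log maps; the data, tolerances, and function names are made up for the example.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    v = q - cos_t * p                  # component of q orthogonal to p
    return theta * v / np.linalg.norm(v)

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p along v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) * v / norm_v

def karcher_mean(points, iters=100, tol=1e-10):
    """Fixed-point iteration: move along the averaged log vector until it vanishes."""
    mean = points[0] / np.linalg.norm(points[0])
    for _ in range(iters):
        v = np.mean([sphere_log(mean, q) for q in points], axis=0)
        if np.linalg.norm(v) < tol:
            break
        mean = sphere_exp(mean, v)
    return mean

# toy data: noisy points around the north pole
rng = np.random.default_rng(0)
pts = rng.normal([0, 0, 1], 0.1, size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
print(karcher_mean(pts))
```

The same iteration applies on other manifolds once the corresponding exp and log maps are substituted.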
Matrix completion from a few entries
"... Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E | = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) ..."
Abstract
-
Cited by 196 (9 self)
- Add to MetaCart
Let M be a random nα × n matrix of rank r ≪ n, and assume that a uniformly random subset E of its entries is observed. We describe an efficient algorithm that reconstructs M from |E| = O(r n) observed entries with relative root mean square error RMSE ≤ C(α) …
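For orientation only, here is a minimal numpy sketch of the spectral part of such a reconstruction: zero-fill the unobserved entries, rescale by the inverse sampling rate, and keep the rank-r truncated SVD. The trimming and refinement steps of the actual algorithm are omitted, and the dimensions and function names are made up.

```python
import numpy as np

def spectral_reconstruction(M_obs, mask, r):
    """Rank-r SVD of the rescaled, zero-filled observation matrix.

    M_obs: matrix with observed entries (zeros elsewhere)
    mask:  boolean matrix, True where an entry was observed
    r:     target rank
    """
    m, n = M_obs.shape
    sampling_rate = mask.sum() / (m * n)
    U, s, Vt = np.linalg.svd(M_obs / sampling_rate, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

# toy example: rank-2 matrix, about 30% of entries observed
rng = np.random.default_rng(1)
M = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 100))
mask = rng.random(M.shape) < 0.3
M_hat = spectral_reconstruction(M * mask, mask, r=2)
rmse = np.sqrt(np.mean((M_hat - M) ** 2))
print(f"relative RMSE: {rmse / np.sqrt(np.mean(M**2)):.3f}")
```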
Toward the Optimal Preconditioned Eigensolver: Locally Optimal Block Preconditioned Conjugate Gradient Method. SIAM J. Sci. Comput., 2001
"... We describe new algorithms of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To be able to compare numerically different methods in the cla ..."
Abstract
-
Cited by 139 (19 self)
- Add to MetaCart
(Show Context)
We describe new algorithms of the locally optimal block preconditioned conjugate gradient (LOBPCG) method for symmetric eigenvalue problems, based on a local optimization of a three-term recurrence, and suggest several other new methods. To compare different methods in this class numerically, with different preconditioners, we propose a common system of model tests using random preconditioners and initial guesses. As the "ideal" control algorithm, we advocate the standard preconditioned conjugate gradient method for finding an eigenvector as an element of the null space of the corresponding homogeneous system of linear equations, under the assumption that the eigenvalue is known. We recommend that every new preconditioned eigensolver be compared with this "ideal" algorithm on our model test problems in terms of speed of convergence, cost per iteration, and memory requirements. We provide such a comparison for our LOBPCG method. Numerical results establish that our algorithm is practically as efficient as the "ideal" algorithm when the same preconditioner is used in both methods. We also show numerically that the LOBPCG method provides approximations to the first eigenpairs of about the same quality as those produced by the much more expensive global optimization method on the same generalized block Krylov subspace. We propose a new version of block Davidson's method as a generalization of the LOBPCG method. Finally, direct numerical comparisons with the Jacobi-Davidson method show that our method is more robust and converges almost twice as fast.
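LOBPCG is available in standard libraries; the sketch below shows scipy.sparse.linalg.lobpcg on a symmetric sparse test matrix with a simple diagonal (Jacobi) preconditioner. The matrix and preconditioner are placeholders for the example, not the paper's model test problems.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, lobpcg

n = 2000
# symmetric positive definite test matrix: 1-D Laplacian plus a diagonal shift
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Jacobi preconditioner M ≈ A^{-1}: divide by the diagonal of A
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda x: x.ravel() / d)

# random initial block of 4 vectors, smallest eigenpairs requested
rng = np.random.default_rng(0)
X = rng.normal(size=(n, 4))
eigvals, eigvecs = lobpcg(A, X, M=M, largest=False, tol=1e-8, maxiter=200)
print(eigvals)
```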
Matrix completion from noisy entries. Journal of Machine Learning Research
"... Abstract Given a matrix M of low-rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and position ..."
Abstract
-
Cited by 124 (8 self)
- Add to MetaCart
(Show Context)
Given a low-rank matrix M, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the 'Netflix problem') to structure-from-motion and positioning. We study a low-complexity algorithm introduced in [1], based on a combination of spectral techniques and manifold optimization, which we call OPTSPACE here. We prove performance guarantees that are order-optimal in a number of circumstances.
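A minimal sketch (not the OPTSPACE code) of the refinement idea that follows the spectral step in this family of methods: keep a rank-r factorization and run gradient descent on the factors to fit only the observed, noisy entries. The step size, dimensions, and function names are arbitrary choices for illustration.

```python
import numpy as np

def refine_factors(M_obs, mask, r, steps=500, lr=1e-3):
    """Gradient descent on X (m×r) and Y (n×r), minimizing the squared error
    of X @ Y.T against M_obs over the observed entries only."""
    p = mask.mean()                                 # sampling rate
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    X = U[:, :r] * np.sqrt(s[:r])                   # spectral initialization
    Y = Vt[:r, :].T * np.sqrt(s[:r])
    for _ in range(steps):
        R = mask * (X @ Y.T - M_obs)                # residual on observed entries
        X, Y = X - lr * (R @ Y), Y - lr * (R.T @ X)
    return X @ Y.T

# toy example: rank-3 matrix, about 40% observed, additive noise
rng = np.random.default_rng(2)
M = rng.normal(size=(150, 3)) @ rng.normal(size=(3, 120))
mask = rng.random(M.shape) < 0.4
M_obs = mask * (M + 0.1 * rng.normal(size=M.shape))
M_hat = refine_factors(M_obs, mask, r=3)
print(np.sqrt(np.mean((M_hat - M) ** 2)))
```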
Means and Averaging in the Group of Rotations, 2002
"... In this paper we give precise definitions of different, properly invariant notions of mean or average rotation. Each mean is associated with a metric in SO(3). The metric induced from the Frobenius inner product gives rise to a mean rotation that is given by the closest special orthogonal matrix to ..."
Abstract
-
Cited by 118 (3 self)
- Add to MetaCart
In this paper we give precise definitions of different, properly invariant notions of mean or average rotation. Each mean is associated with a metric in SO(3). The metric induced from the Frobenius inner product gives rise to a mean rotation that is given by the closest special orthogonal matrix to the usual arithmetic mean of the given rotation matrices. The mean rotation associated with the intrinsic metric on SO(3) is the Riemannian center of mass of the given rotation matrices. We show that the Riemannian mean rotation shares many common features with the geometric mean of positive numbers and the geometric mean of positive Hermitian operators. We give some examples with closed-form solutions of both notions of mean.
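The two notions contrasted in this abstract can be written down in a few lines. The following is an illustrative sketch, not the authors' code: the Frobenius-metric mean obtained by projecting the arithmetic average onto SO(3) via an SVD, and the Riemannian (Karcher) mean obtained by iterating in the tangent space, here using rotation-vector log/exp conversions from scipy.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def projected_mean(rot_mats):
    """Closest special orthogonal matrix to the arithmetic mean (Frobenius metric)."""
    A = np.mean(rot_mats, axis=0)
    U, _, Vt = np.linalg.svd(A)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # force det = +1
    return U @ D @ Vt

def karcher_mean(rot_mats, iters=100, tol=1e-12):
    """Riemannian center of mass in SO(3): average the log maps at the current estimate."""
    R = rot_mats[0]
    for _ in range(iters):
        v = np.mean([Rotation.from_matrix(R.T @ Ri).as_rotvec() for Ri in rot_mats], axis=0)
        if np.linalg.norm(v) < tol:
            break
        R = R @ Rotation.from_rotvec(v).as_matrix()
    return R

# toy data: small random rotations around the identity
rng = np.random.default_rng(3)
rots = [Rotation.from_rotvec(0.2 * rng.normal(size=3)).as_matrix() for _ in range(10)]
print(projected_mean(rots))
print(karcher_mean(rots))
```

For tightly clustered rotations the two means nearly coincide, which is consistent with the comparison made in the paper.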
Dynamic Texture Recognition, 2001
"... Dynamic textures are sequences of images that exhibit some form of temporal stationarity, such as waves, steam, and foliage. We pose the problem of recognizing and classifying dynamic textures in the space of dynamical systems where each dynamic texture is uniquely represented. Since the space is no ..."
Abstract
-
Cited by 114 (7 self)
- Add to MetaCart
(Show Context)
Dynamic textures are sequences of images that exhibit some form of temporal stationarity, such as waves, steam, and foliage. We pose the problem of recognizing and classifying dynamic textures in the space of dynamical systems where each dynamic texture is uniquely represented. Since the space is non-linear, a distance between models must be defined. We examine three different distances in the space of autoregressive models and assess their power.
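One common distance of this kind (not necessarily one of the three compared in the paper) is the Martin distance, computed from the principal angles between the extended observability subspaces of two linear dynamical systems. A hedged numpy/scipy sketch, with made-up model parameters:

```python
import numpy as np
from scipy.linalg import subspace_angles

def observability(A, C, depth=10):
    """Stacked extended observability matrix [C; CA; CA^2; ...]."""
    blocks, M = [], C
    for _ in range(depth):
        blocks.append(M)
        M = M @ A
    return np.vstack(blocks)

def martin_distance(A1, C1, A2, C2, depth=10):
    """Distance from the principal angles between the two observability subspaces:
    d^2 = -2 * sum(log(cos(theta_i)))."""
    theta = subspace_angles(observability(A1, C1, depth), observability(A2, C2, depth))
    return np.sqrt(-2.0 * np.sum(np.log(np.cos(theta))))

# toy example: two stable 2-state, 3-output models
rng = np.random.default_rng(4)
A1, A2 = 0.9 * np.eye(2), np.array([[0.8, 0.1], [0.0, 0.7]])
C1, C2 = rng.normal(size=(3, 2)), rng.normal(size=(3, 2))
print(martin_distance(A1, C1, A2, C2))
```

The finite truncation depth is a practical approximation; for stable systems a larger depth changes the result only marginally.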
Generalized Low Rank Approximations of Matrices. Machine Learning, 2004
"... We consider the problem of computing low rank approximations of matrices. The novelty of our approach is that the low rank approximations are on a sequence of matrices. Unlike the ..."
Abstract
-
Cited by 110 (6 self)
- Add to MetaCart
We consider the problem of computing low rank approximations of matrices. The novelty of our approach is that the low rank approximations are on a sequence of matrices. Unlike the …
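The abstract is cut off in this listing. For context, the full paper seeks shared left and right projections for the whole sequence of matrices rather than a separate SVD per matrix. The following is a hedged sketch of one alternating scheme of that kind, written from that assumption rather than as a verbatim reimplementation of the paper's algorithm.

```python
import numpy as np

def top_eigvecs(S, k):
    """Orthonormal eigenvectors of a symmetric matrix S for its k largest eigenvalues."""
    w, V = np.linalg.eigh(S)
    return V[:, np.argsort(w)[::-1][:k]]

def shared_low_rank(As, k1, k2, iters=20):
    """Find L (m×k1) and R (n×k2) with orthonormal columns so that each A_i ≈ L (L^T A_i R) R^T."""
    m, n = As[0].shape
    R = np.eye(n)[:, :k2]
    for _ in range(iters):
        L = top_eigvecs(sum(A @ R @ R.T @ A.T for A in As), k1)
        R = top_eigvecs(sum(A.T @ L @ L.T @ A for A in As), k2)
    return L, R

# toy data: 30 matrices sharing low-dimensional row/column structure plus noise
rng = np.random.default_rng(5)
P, Q = rng.normal(size=(40, 5)), rng.normal(size=(30, 5))
As = [P @ rng.normal(size=(5, 5)) @ Q.T + 0.1 * rng.normal(size=(40, 30)) for _ in range(30)]
L, R = shared_low_rank(As, k1=5, k2=5)
err = np.mean([np.linalg.norm(A - L @ (L.T @ A @ R) @ R.T) / np.linalg.norm(A) for A in As])
print(f"mean relative reconstruction error: {err:.3f}")
```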
Optimization algorithms exploiting unitary constraints. IEEE Trans. Signal Processing, 2002
"... Abstract—This paper presents novel algorithms that iteratively converge to a local minimum of a real-valued function ( ) sub-ject to the constraint that the columns of the complex-valued ma-trix are mutually orthogonal and have unit norm. The algo-rithms are derived by reformulating the constrained ..."
Abstract
-
Cited by 103 (13 self)
- Add to MetaCart
(Show Context)
This paper presents novel algorithms that iteratively converge to a local minimum of a real-valued cost function subject to the constraint that the columns of the complex-valued matrix argument are mutually orthogonal and have unit norm. The algorithms are derived by reformulating the constrained optimization problem as an unconstrained one on a suitable manifold. This significantly reduces the dimensionality of the optimization problem. Pertinent features of the proposed framework are illustrated by using it to derive an algorithm for computing the eigenvector associated with either the largest or the smallest eigenvalue of a Hermitian matrix. Index Terms: constrained optimization, eigenvalue problems, optimization on manifolds, orthogonal constraints.
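For the eigenvector example mentioned at the end of the abstract, here is a hedged numpy sketch of one manifold-style iteration: projected gradient descent on the complex unit sphere for the Rayleigh quotient, with renormalization as the retraction. It is not the paper's exact geodesic or conjugate-gradient update, and the step size and dimensions are made up.

```python
import numpy as np

def smallest_eigvec(H, steps=2000, lr=0.1, seed=0):
    """Minimize the Rayleigh quotient w^H H w over unit-norm complex vectors w."""
    n = H.shape[0]
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n) + 1j * rng.normal(size=n)
    w /= np.linalg.norm(w)
    for _ in range(steps):
        g = H @ w                        # Euclidean gradient direction (up to a constant factor)
        g_tan = g - (w.conj() @ g) * w   # project onto the tangent space of the unit sphere
        w = w - lr * g_tan
        w /= np.linalg.norm(w)           # retraction: renormalize back onto the sphere
    return w

# toy Hermitian matrix
rng = np.random.default_rng(6)
X = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
H = (X + X.conj().T) / 2
w = smallest_eigvec(H)
print(np.real(w.conj() @ H @ w), np.linalg.eigvalsh(H)[0])
```

Working directly with the unit-norm constraint removes one real degree of freedom per column, which is the dimensionality reduction the abstract refers to in the single-vector case.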
Conditions for nonnegative independent component analysis. IEEE Signal Processing Letters, 2002
"... We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are non-negative. We assume that the random variables s i s are well-grounded in that they have a non-vanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = ..."
Abstract
-
Cited by 96 (12 self)
- Add to MetaCart
(Show Context)
We consider the noiseless linear independent component analysis problem in the case where the hidden sources s are non-negative. We assume that the random variables s_i are well grounded, in that they have a non-vanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of pre-whitened observations x = QAs, under certain reasonable conditions we show that y is a permutation of the sources s (apart from a scaling factor) if and only if y is non-negative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse non-negative sources.
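The criterion suggests searching over orthonormal rotations of the whitened data for an output that is nonnegative with probability 1. The following is a hedged sketch of one such procedure, with made-up sources and mixing: penalize negative outputs and re-project W onto the orthogonal group after each gradient step. It illustrates the criterion rather than the learning algorithm proposed in the letter.

```python
import numpy as np

def nonneg_ica(X_white, steps=500, lr=0.05, seed=0):
    """Search for an orthonormal W so that y = W x is (almost surely) nonnegative.

    Minimizes the mean of ||min(y, 0)||^2 by gradient steps, re-projecting W onto
    the orthogonal group with an SVD after each step."""
    n = X_white.shape[0]
    rng = np.random.default_rng(seed)
    W, _ = np.linalg.qr(rng.normal(size=(n, n)))    # random orthonormal start
    for _ in range(steps):
        Y = W @ X_white
        neg = np.minimum(Y, 0.0)                    # only negative outputs contribute
        grad = 2.0 * neg @ X_white.T / X_white.shape[1]
        U, _, Vt = np.linalg.svd(W - lr * grad)
        W = U @ Vt                                  # project back to orthonormal rows
    return W

# toy data: two sparse nonnegative sources, mixed linearly
rng = np.random.default_rng(7)
S = np.abs(rng.normal(size=(2, 5000))) * (rng.random((2, 5000)) < 0.5)
X = np.array([[1.0, 0.4], [0.3, 1.2]]) @ S

# whitening matrix from the covariance of the centered data, applied to the raw mixtures,
# so that X_white is an orthonormal rotation of the (scaled) sources
Xc = X - X.mean(axis=1, keepdims=True)
w, E = np.linalg.eigh(Xc @ Xc.T / Xc.shape[1])
Q = np.diag(1.0 / np.sqrt(w)) @ E.T
X_white = Q @ X

W = nonneg_ica(X_white)
print(W @ X_white[:, :5])   # recovered outputs should be (nearly) nonnegative
```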