Results 1–10 of 91
The geometry of algorithms with orthogonality constraints
 SIAM J. Matrix Anal. Appl.
, 1998
Abstract

Cited by 642 (1 self)
In this paper we develop new Newton and conjugate gradient algorithms on the Grassmann and Stiefel manifolds. These manifolds represent the constraints that arise in such areas as the symmetric eigenvalue problem, nonlinear eigenvalue problems, electronic structure computations, and signal processing. In addition to the new algorithms, we show how the geometrical framework gives penetrating new insights, allowing us to create, understand, and compare algorithms. The theory proposed here provides a taxonomy for numerical linear algebra algorithms that offers a top-level mathematical view of previously unrelated algorithms. It is our hope that developers of new algorithms and perturbation theories will benefit from the theory, methods, and examples in this paper.
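The core operations behind such algorithms on the Stiefel manifold can be sketched in a few lines: project an ambient gradient onto the tangent space, then map the step back to the manifold. The following NumPy sketch is illustrative only (the paper itself works with geodesics and the exponential map; the QR retraction used here is a common cheaper substitute):

```python
import numpy as np

def sym(A):
    """Symmetric part of a square matrix."""
    return 0.5 * (A + A.T)

def stiefel_proj(X, G):
    """Project an ambient gradient G onto the tangent space of the
    Stiefel manifold {X : X^T X = I} at X (embedded metric)."""
    return G - X @ sym(X.T @ G)

def qr_retract(X, V):
    """Map a tangent step V back to the manifold via a QR decomposition
    (a retraction; signs fixed so the factor is unique)."""
    Q, R = np.linalg.qr(X + V)
    return Q @ np.diag(np.sign(np.diag(R)))

# Example: one projected-gradient step for the Rayleigh quotient
# trace(X^T A X), whose maximizers span invariant subspaces of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6)); A = A + A.T
X, _ = np.linalg.qr(rng.standard_normal((6, 2)))
G = 2 * A @ X                      # Euclidean gradient
xi = stiefel_proj(X, G)            # Riemannian gradient
X_new = qr_retract(X, 0.1 * xi)
print(np.allclose(X_new.T @ X_new, np.eye(2)))  # stays on the manifold
```

The projection formula `G - X sym(X^T G)` is the standard tangent-space projection for the embedded metric; other metrics (e.g. the canonical one) give a different formula.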
Means and Averaging in the Group of Rotations
, 2002
Abstract

Cited by 114 (3 self)
In this paper we give precise definitions of different, properly invariant notions of mean or average rotation. Each mean is associated with a metric in SO(3). The metric induced from the Frobenius inner product gives rise to a mean rotation that is given by the closest special orthogonal matrix to the usual arithmetic mean of the given rotation matrices. The mean rotation associated with the intrinsic metric on SO(3) is the Riemannian center of mass of the given rotation matrices. We show that the Riemannian mean rotation shares many common features with the geometric mean of positive numbers and the geometric mean of positive Hermitian operators. We give some examples with closed-form solutions of both notions of mean.
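The Frobenius-metric mean described in the abstract has a simple closed form: take the arithmetic mean of the rotation matrices and project it onto SO(3) with an SVD, correcting the determinant sign. A minimal NumPy sketch (illustrative, not taken from the paper):

```python
import numpy as np

def projected_mean(rotations):
    """Frobenius mean of rotations: the special orthogonal matrix
    closest to the arithmetic mean, via SVD projection onto SO(3)."""
    M = np.mean(rotations, axis=0)
    U, _, Vt = np.linalg.svd(M)
    # Force determinant +1 so the result is a rotation, not a reflection.
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

def rot_z(t):
    """Rotation by angle t about the z axis."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# The Frobenius mean of rotations by +0.3 and -0.3 rad about z is the identity.
R = projected_mean([rot_z(0.3), rot_z(-0.3)])
print(np.allclose(R, np.eye(3)))
```

For rotations about a common axis this coincides with the intrinsic (Riemannian) mean; in general the two notions differ, which is the point of the paper.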
Newton’s method on Riemannian manifolds: covariant alpha theory
 IMA J. Numer. Anal
, 2003
Abstract

Cited by 40 (2 self)
In this paper, Smale’s α theory is generalized to the context of intrinsic Newton iteration on geodesically complete analytic Riemannian and Hermitian manifolds. Results are valid for analytic mappings from a manifold to a linear space of the same dimension, or for analytic vector fields on the manifold. The invariant γ is defined by means of high-order covariant derivatives. Bounds on the size of the basin of quadratic convergence are given. If the ambient manifold has negative sectional curvature, those bounds depend on the curvature. A criterion of quadratic convergence for Newton iteration from the information available at a point is also given.
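In the flat scalar case, Smale’s point estimate is computable directly: with β = |f(z)/f′(z)| and γ = sup_{k≥2} |f^(k)(z)/(k! f′(z))|^{1/(k−1)}, the product α = βγ below a universal constant (≈0.1307 in the classical statement) certifies quadratic convergence of Newton’s method from z. A small sketch for a quadratic, where the sup reduces to the single k = 2 term:

```python
import numpy as np

def alpha_quadratic(z, a=2.0):
    """Smale-style point estimates for f(x) = x^2 - a at the point z:
        beta  = |f(z)/f'(z)|           (length of the Newton step)
        gamma = |f''(z) / (2 f'(z))|   (only k = 2 survives for a quadratic)
        alpha = beta * gamma
    """
    f, df, d2f = z * z - a, 2 * z, 2.0
    beta = abs(f / df)
    gamma = abs(d2f / (2 * df))
    return beta * gamma, beta, gamma

alpha, beta, gamma = alpha_quadratic(1.5)
print(alpha < 0.130707)   # certified: 1/36 is below the threshold

# Newton from 1.5 indeed converges quadratically to sqrt(2).
z = 1.5
for _ in range(6):
    z = z - (z * z - 2) / (2 * z)
print(abs(z - np.sqrt(2)) < 1e-12)
```

The paper’s contribution is replacing the ordinary derivatives above with covariant derivatives, so that the same kind of certificate applies intrinsically on a manifold.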
Newton’s method for overdetermined systems of equations
 Mathematics of Computation 69 (2000), 1099–1115. MR 2000j:65133
Abstract

Cited by 39 (3 self)
Complexity-theoretic aspects of continuation methods for the solution of square or underdetermined systems of polynomial equations have been studied by various authors. In this paper we consider overdetermined systems, where there are more equations than unknowns. We study Newton’s method for such a system.
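For an overdetermined system F: R^n → R^m with m > n, the Newton iteration is commonly defined through the Moore–Penrose pseudoinverse of the Jacobian, x ← x − J(x)⁺F(x). A minimal NumPy sketch under that convention (illustrative; the helper name is hypothetical):

```python
import numpy as np

def newton_pinv(F, J, x, iters=20):
    """Newton's method for an overdetermined system, with the Jacobian
    inverted in the Moore-Penrose sense (least-squares Newton step)."""
    for _ in range(iters):
        x = x - np.linalg.pinv(J(x)) @ F(x)
    return x

# Two equations in one unknown with the consistent solution x = sqrt(2):
F = lambda x: np.array([x[0] ** 2 - 2.0, x[0] ** 3 - 2.0 * np.sqrt(2.0)])
J = lambda x: np.array([[2.0 * x[0]], [3.0 * x[0] ** 2]])

x = newton_pinv(F, J, np.array([1.5]))
print(abs(x[0] - np.sqrt(2)) < 1e-10)
```

For a consistent system like this one the residual vanishes at the solution, so the iteration converges quadratically; for inconsistent systems it converges to a least-squares critical point instead of a zero.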
Low-rank matrix completion by Riemannian optimization
 ANCHP-MATHICSE, Mathematics Section, École Polytechnique Fédérale de Lausanne
Abstract

Cited by 39 (3 self)
The matrix completion problem consists of finding or approximating a low-rank matrix based on a few samples of this matrix. We propose a novel algorithm for matrix completion that minimizes the least-squares distance on the sampling set over the Riemannian manifold of fixed-rank matrices. The algorithm is an adaptation of classical nonlinear conjugate gradients, developed within the framework of retraction-based optimization on manifolds. We describe all the objects from differential geometry necessary to perform optimization over this low-rank matrix manifold, seen as a submanifold embedded in the space of matrices. In particular, we describe how metric projection can be used as a retraction and how vector transport lets us obtain the conjugate search directions. Additionally, we derive second-order models that can be used in Newton’s method, based on approximating the exponential map on this manifold to second order. Finally, we prove convergence of a regularized version of our algorithm under the assumption that the restricted isometry property holds for incoherent matrices throughout the iterations. The numerical experiments indicate that our approach scales very well for large-scale problems and compares favorably with the state of the art, while outperforming most existing solvers.
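The metric projection mentioned in the abstract is just a truncated SVD. The simplest algorithm built from it is plain projected gradient on the masked least-squares objective — a much cruder sketch than the paper’s Riemannian conjugate gradient, but it shows the two ingredients (Euclidean gradient on the sampling set, projection back onto the rank-k matrices):

```python
import numpy as np

def svd_project(X, k):
    """Metric projection onto the set of rank-k matrices: truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

def complete(A, mask, k, step=1.0, iters=500):
    """Projected gradient on 0.5 * ||mask * (X - A)||_F^2: take a gradient
    step on the observed entries, then project back to rank k. This is a
    sketch only, not the paper's conjugate gradient algorithm."""
    X = np.zeros_like(A)
    for _ in range(iters):
        X = svd_project(X - step * mask * (X - A), k)
    return X

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank 3
mask = rng.random(A.shape) < 0.6                                  # ~60% observed
X = complete(A, mask, k=3)
err = np.linalg.norm(X - A) / np.linalg.norm(A)
print(f"relative error: {err:.3e}")
```

With step 1 each iteration imputes the unobserved entries from the current iterate and re-projects; for well-sampled incoherent low-rank matrices this recovers A to high accuracy, though far more slowly than the manifold CG method the paper develops.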
Nonlinear Mean Shift over Riemannian Manifolds
, 2009
Abstract

Cited by 37 (1 self)
The original mean shift algorithm is widely applied for nonparametric clustering in vector spaces. In this paper we generalize it to data points lying on Riemannian manifolds. This allows us to extend mean-shift-based clustering and filtering techniques to a large class of frequently occurring non-vector spaces in vision. We present an exact algorithm and prove its convergence properties, as opposed to previous work, which approximates the mean shift vector. The computational details of our algorithm are presented for frequently occurring classes of manifolds such as matrix Lie groups, Grassmann manifolds, essential matrices, and symmetric positive definite matrices. Applications of the mean shift over these manifolds are shown.
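The generalization follows a standard pattern: log-map the data into the tangent space at the current estimate, take the kernel-weighted average there, and exp-map the result back. A minimal sketch on the unit sphere S² (my own illustration with a Gaussian kernel, not code from the paper):

```python
import numpy as np

def log_map(y, x):
    """Log map on the unit sphere: tangent vector at y pointing toward x,
    with length equal to the geodesic distance."""
    v = x - np.dot(x, y) * y
    nv = np.linalg.norm(v)
    theta = np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))
    return np.zeros_like(y) if nv < 1e-12 else (theta / nv) * v

def exp_map(y, v):
    """Exp map on the unit sphere: follow the geodesic from y along v."""
    t = np.linalg.norm(v)
    return y if t < 1e-12 else np.cos(t) * y + np.sin(t) * (v / t)

def mean_shift(points, y, h=0.5, iters=50):
    """Nonlinear mean shift: kernel-weighted average of the log-mapped
    points in the tangent space, then exp-map the shift vector."""
    for _ in range(iters):
        vs = np.array([log_map(y, x) for x in points])
        d2 = np.sum(vs * vs, axis=1)            # squared geodesic distances
        w = np.exp(-d2 / (2 * h * h))           # Gaussian kernel weights
        y = exp_map(y, (w[:, None] * vs).sum(0) / w.sum())
    return y

# Points clustered near the north pole; the mode estimate should move there.
rng = np.random.default_rng(2)
pts = np.array([[0, 0, 1.0]]) + 0.1 * rng.standard_normal((30, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
mode = mean_shift(pts, pts[0])
print(mode[2] > 0.99)
```

The same skeleton applies to the matrix manifolds listed in the abstract once their log and exp maps are substituted.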
Constrained flows of matrix-valued functions: Application to diffusion tensor regularization
 In European Conference on Computer Vision
, 2002
Abstract

Cited by 36 (10 self)
Nonlinear partial differential equations (PDEs) are now widely used to regularize images. They make it possible to eliminate noise and artifacts while preserving large global features, such as object contours. In this context, we propose a geometric framework to design PDE flows acting on constrained datasets. We focus our interest on flows of matrix-valued functions undergoing orthogonal and spectral constraints. The corresponding evolution PDEs are found by minimization of cost functionals, and depend on the natural metrics of the underlying constrained manifolds (viewed as Lie groups or homogeneous spaces). Suitable numerical schemes that fit the constraints are also presented. We illustrate this theoretical framework through a recent and challenging problem in medical imaging: the regularization of diffusion tensor volumes (DT-MRI).
The geometry of the Newton method on noncompact Lie groups
 J. Global Optimiz
Abstract

Cited by 32 (4 self)
An important class of optimization problems involves minimizing a cost function on a Lie group. In the case where the Lie group is noncompact there is no natural choice of a Riemannian metric, and it is not possible to apply recent results on the optimization of functions on Riemannian manifolds. In this paper the invariant structure of a Lie group is exploited to provide a strong interpretation of a Newton iteration on a general Lie group. The paper unifies several previous algorithms proposed in the literature in a single theoretical framework. Local asymptotic quadratic convergence is proved for the algorithms considered.
Kantorovich’s theorem on Newton’s method in Riemannian manifolds
 J. Complexity
, 2002
Abstract

Cited by 31 (3 self)
Newton’s method for finding a zero of a vector-valued function is a powerful theoretical and practical tool. One of the drawbacks of the classical convergence proof is that closeness to a nonsingular zero must be assumed a priori. Kantorovich’s theorem on Newton’s method has the advantage of proving existence of a solution and convergence to it under very mild conditions. This theorem holds in Banach spaces. Newton’s method has been extended to the problem of finding a singularity of a vector field on a Riemannian manifold. We extend Kantorovich’s theorem on Newton’s method to Riemannian manifolds.
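In the flat scalar case, the Kantorovich condition is directly checkable from data at the starting point: with β ≥ 1/|f′(x₀)|, η ≥ |f(x₀)/f′(x₀)|, and L a Lipschitz constant of f′, the classical requirement h = βLη ≤ 1/2 guarantees existence of a zero and convergence of Newton’s method. A small sketch (my own illustration of the Banach-space statement the paper generalizes):

```python
import numpy as np

# Kantorovich-style check for f(x) = x^2 - 2 at x0 = 1.5.
x0 = 1.5
beta = 1.0 / abs(2 * x0)              # bound on |f'(x0)^{-1}|, here 1/3
eta = abs((x0 ** 2 - 2) / (2 * x0))   # length of first Newton step, 1/12
L = 2.0                                # f''(x) = 2, so f' is 2-Lipschitz
h = beta * L * eta                     # = 1/18, well below the 1/2 threshold
print(h <= 0.5)

# The certified iteration converges to sqrt(2).
x = x0
for _ in range(6):
    x = x - (x * x - 2) / (2 * x)
print(abs(x - np.sqrt(2)) < 1e-12)
```

The paper’s extension replaces the Banach-space norms and Lipschitz condition with their Riemannian counterparts (covariant derivatives and geodesic distances), keeping the same h ≤ 1/2 flavor of hypothesis.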