Results 1-10 of 28
Global registration of multiple point clouds using semidefinite programming. arXiv:1306.5226 [cs.CV], 2013
Cited by 8 (4 self)
ABSTRACT. Consider N points in R^d and M local coordinate systems that are related through unknown rigid transforms. For each point we are given (possibly noisy) measurements of its local coordinates in some of the coordinate systems. Alternatively, for each coordinate system, we observe the coordinates of a subset of the points. The problem of estimating the global coordinates of the N points (up to a rigid transform) from such measurements comes up in distributed approaches to molecular conformation and sensor network localization, and also in computer vision and graphics. The least-squares formulation, though nonconvex, has a well-known closed-form solution for the case M = 2 (based on the singular value decomposition). However, no closed-form solution is known for M ≥ 3. In this paper, we propose a semidefinite relaxation of the least-squares formulation, and prove conditions for exact and stable recovery for both this relaxation and for a previously proposed spectral relaxation. In particular, using results from rigidity theory and the theory of semidefinite programming, we prove that the semidefinite relaxation can guarantee recovery under more adversarial measurements compared to the spectral counterpart. We perform numerical experiments on simulated data to confirm the theoretical findings. We empirically demonstrate that (a) unlike the spectral relaxation, the relaxation gap is mostly zero for the semidefinite program (i.e., we are able to solve the original nonconvex problem) up to a certain noise threshold, and (b) the semidefinite program performs significantly better than spectral and manifold-optimization methods, particularly at large noise levels.
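For the M = 2 case the abstract mentions, the SVD-based closed-form solution is the classical orthogonal Procrustes (Kabsch) computation. A minimal sketch, assuming known point correspondences (the function name and interface are illustrative, not from the paper):

```python
import numpy as np

def register_two_sets(P, Q):
    """Closed-form least-squares rigid registration of Q onto P.

    P, Q: (N, d) arrays of corresponding points. Returns (R, t) with
    R @ q + t ≈ p, via the SVD-based solution referred to in the
    abstract for the M = 2 case.
    """
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - q_bar).T @ (P - p_bar)          # d x d cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    D = np.diag([1.0] * (H.shape[0] - 1) + [np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = p_bar - R @ q_bar
    return R, t
```

For M ≥ 3 no such closed form is known, which is what motivates the relaxations studied in the paper.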
Tightness of the maximum likelihood semidefinite relaxation for angular synchronization. Available online at arXiv:1411.3272 [math.OC], 2014
Cited by 3 (2 self)
Abstract. Many maximum likelihood estimation problems are, in general, intractable optimization problems. As a result, it is common to approximate the maximum likelihood estimator (MLE) using convex relaxations. Semidefinite relaxations are among the most popular. Sometimes, the relaxations turn out to be tight. In this paper, we study such a phenomenon. The angular synchronization problem consists in estimating a collection of n phases, given noisy measurements of some of the pairwise relative phases. The MLE for the angular synchronization problem is the solution of a (hard) non-bipartite Grothendieck problem over the complex numbers. It is known that its semidefinite relaxation enjoys worst-case approximation guarantees. In this paper, we consider a stochastic model on the input of that semidefinite relaxation. We assume there is a planted signal (corresponding to a ground truth set of phases) and the measurements are corrupted with random noise. Even though the MLE does not coincide with the planted signal, we show that the relaxation is, with high probability, tight. This holds even for high levels of noise. This analysis explains, for the interesting case of angular synchronization, a phenomenon which has been observed without explanation in many other settings. Namely, the fact that even when exact recovery of the ground truth is impossible, semidefinite relaxations for the MLE tend to be tight (in favorable noise regimes).
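A companion to the SDP studied here is the spectral relaxation of angular synchronization, which has a compact form: collect the relative-phase measurements into a Hermitian matrix and read the phases off its leading eigenvector. A sketch under that standard construction (names are illustrative):

```python
import numpy as np

def angular_sync_spectral(C):
    """Spectral relaxation for angular synchronization.

    C: Hermitian (n, n) matrix with C[j, k] ≈ exp(i(θ_j - θ_k)) on
    measured pairs and 0 elsewhere. Returns phase estimates, up to a
    global rotation, from the top eigenvector.
    """
    w, V = np.linalg.eigh(C)           # eigenvalues in ascending order
    v = V[:, -1]                        # eigenvector of the largest one
    return v / np.abs(v)                # project entries to the unit circle
```

The SDP relaxation replaces the rank-one matrix zz* implicit here by a positive semidefinite variable with unit diagonal; the paper shows when that relaxation is tight.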
CONIC GEOMETRIC OPTIMIZATION ON THE MANIFOLD OF POSITIVE DEFINITE MATRICES
Cited by 2 (1 self)
Abstract. We develop geometric optimization on the manifold of Hermitian positive definite (HPD) matrices. In particular, we consider optimizing two types of cost functions: (i) geodesically convex (g-convex) and (ii) log-nonexpansive (LN). G-convex functions are nonconvex in the usual Euclidean sense but convex along the manifold and thus allow global optimization. LN functions may fail to be even g-convex but still remain globally optimizable due to their special structure. We develop theoretical tools to recognize and generate g-convex functions as well as cone-theoretic fixed-point optimization algorithms. We illustrate our techniques by applying them to maximum-likelihood parameter estimation for elliptically contoured distributions (a rich class that substantially generalizes the multivariate normal distribution). We compare our fixed-point algorithms with sophisticated manifold optimization methods and obtain notable speedups.
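A concrete instance of the fixed-point schemes for elliptically contoured distributions is Tyler's M-estimator of scatter. The sketch below shows that classical iteration as an illustration of the genre, not the paper's exact algorithm:

```python
import numpy as np

def tyler_fixed_point(X, iters=200, tol=1e-10):
    """Tyler's M-estimator of scatter via fixed-point iteration.

    X: (n, d) centered data. Iterates
        S <- (d/n) * sum_i x_i x_i^T / (x_i^T S^{-1} x_i),
    with the scale fixed by normalizing to trace d. A classical
    fixed-point scheme for elliptical distributions (illustrative;
    not the paper's specific method).
    """
    n, d = X.shape
    S = np.eye(d)
    for _ in range(iters):
        # w_i = 1 / (x_i^T S^{-1} x_i) for every sample at once.
        w = 1.0 / np.einsum('ij,jk,ik->i', X, np.linalg.inv(S), X)
        S_new = (d / n) * (X * w[:, None]).T @ X
        S_new *= d / np.trace(S_new)   # remove the scale ambiguity
        if np.linalg.norm(S_new - S) < tol:
            return S_new
        S = S_new
    return S
```

Such iterations are not gradient descents on a convex function, yet they converge globally; the paper's cone-theoretic framework explains why.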
Matrix Manifold Optimization for Gaussian Mixtures
Cited by 1 (0 self)
Abstract. We take a new look at parameter estimation for Gaussian mixture models (GMMs). Specifically, we advance Riemannian manifold optimization (on the manifold of positive definite matrices) as a potential replacement for Expectation Maximization (EM), which has been the de facto standard for decades. An out-of-the-box invocation of Riemannian optimization, however, fails spectacularly: it obtains the same solution as EM, but vastly slower. Building on intuition from geometric convexity, we propose a simple reformulation that has remarkable consequences: it makes Riemannian optimization not only match EM (a nontrivial result on its own, given the poor record nonlinear programming has had against EM), but also outperform it in many settings. To bring our ideas to fruition, we develop a well-tuned Riemannian L-BFGS method that proves superior to known competing methods (e.g., Riemannian conjugate gradient). We hope that our results encourage a wider consideration of manifold optimization in machine learning and statistics.
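The kind of geometric-convexity reformulation alluded to can be illustrated with the standard augmented-matrix trick: absorb the mean into a single (d+1) x (d+1) positive definite block. The sketch below constructs that matrix; it relies on the identities det S = det Σ and y^T S^{-1} y = (x - μ)^T Σ^{-1} (x - μ) + 1 for y = (x, 1), which the test verifies. The paper's actual reformulation may differ in its details:

```python
import numpy as np

def augment(mu, Sigma):
    """Map a Gaussian (mu, Sigma) to the augmented SPD matrix

        S = [[Sigma + mu mu^T, mu],
             [mu^T,            1 ]],

    so that mean and covariance live in one positive definite block,
    the kind of reparameterization that restores geodesic convexity.
    """
    d = mu.shape[0]
    S = np.empty((d + 1, d + 1))
    S[:d, :d] = Sigma + np.outer(mu, mu)
    S[:d, d] = mu
    S[d, :d] = mu
    S[d, d] = 1.0
    return S
```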
Robust Low-Rank Matrix Completion by Riemannian Optimization
Cited by 1 (1 self)
Low-rank matrix completion is the problem where one tries to recover a low-rank matrix from noisy observations of a subset of its entries. In this paper, we propose RMC, a new method to deal with the problem of robust low-rank matrix completion, i.e., matrix completion where a fraction of the observed entries are corrupted by non-Gaussian noise, typically outliers. The method relies on the idea of smoothing the ℓ1 norm and using Riemannian optimization to deal with the low-rank constraint. We first state the algorithm as the successive minimization of smooth approximations of the ℓ1 norm and analyze its convergence by showing the strict decrease of the objective function. We then perform numerical experiments on synthetic data and demonstrate the effectiveness of the proposed method on the Netflix dataset.
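The smoothing device at the heart of such methods can be sketched independently of the Riemannian machinery: replace |r| by the smooth surrogate sqrt(r^2 + δ^2) and drive δ toward zero (function names are illustrative):

```python
import numpy as np

def smoothed_l1(r, delta):
    """Smooth surrogate for the l1 loss: sum_i sqrt(r_i^2 + delta^2).

    As delta -> 0 this tends to ||r||_1; methods like the one in the
    abstract minimize such surrogates for a decreasing sequence of
    delta values, each problem being smooth and hence amenable to
    Riemannian gradient methods under the low-rank constraint.
    """
    return np.sum(np.sqrt(r**2 + delta**2))

def smoothed_l1_grad(r, delta):
    """Gradient of the surrogate w.r.t. the residual vector: smooth
    everywhere, and close to sign(r) once |r| >> delta."""
    return r / np.sqrt(r**2 + delta**2)
```

Unlike the squared loss, the surrogate grows only linearly in the residual, which is what gives robustness to outliers.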
Incoherent Dictionary Learning Method Based on Unit Norm Tight Frame and Manifold Optimization for Sparse Representation
Optimizing the mutual coherence of a learned dictionary plays an important role in sparse representation and compressed sensing. In this paper, an efficient framework is developed to learn an incoherent dictionary for sparse representation. In particular, the coherence of a previous dictionary (or its Gram matrix) is reduced sequentially by finding a new dictionary (or Gram matrix) that is closest to the reference unit-norm tight frame of the previous dictionary (or Gram matrix). The optimization problem can be solved by restricting the tightness and coherence alternately at each iteration of the algorithm. A distinctive aspect of our proposed framework is that the learned dictionary can approximate an equiangular tight frame. Furthermore, manifold optimization is used to avoid the degeneracy of sparse representation while reducing the coherence of the learned dictionary; this can be performed after the dictionary update process rather than during it. Experiments on synthetic and real audio data show that our proposed methods achieve notably lower coherence, have faster running times, and are considerably more robust than several existing methods.
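Two basic ingredients of such alternating schemes are easy to sketch: the mutual coherence of a dictionary, and the reference unit-norm tight frame obtained by equalizing the singular values. This is a sketch under the standard definitions, not the paper's exact procedure:

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm atoms."""
    Dn = D / np.linalg.norm(D, axis=0)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def nearest_tight_frame(D):
    """Reference tight frame for D (d x K, K >= d): keep the singular
    subspaces and set every singular value to sqrt(K/d), so that
    D D^T = (K/d) I exactly. One half of the alternating scheme; the
    other half renormalizes the atom norms, since the columns of the
    projected frame are not exactly unit-norm."""
    d, K = D.shape
    U, _, Vt = np.linalg.svd(D, full_matrices=False)
    return np.sqrt(K / d) * U @ Vt
```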
Low-rank tensor completion: a Riemannian manifold preconditioning approach, by Hiroyuki Kasai
Abstract. We propose a novel Riemannian manifold preconditioning approach for the tensor completion problem with rank constraint. A novel Riemannian metric or inner product is proposed that exploits the least-squares structure of the cost function and takes into account the structured symmetry that exists in Tucker decomposition. The specific metric makes it possible to use the versatile framework of Riemannian optimization on quotient manifolds to develop preconditioned nonlinear conjugate gradient and stochastic gradient descent algorithms for batch and online setups, respectively. Concrete matrix representations of various optimization-related ingredients are listed. Numerical comparisons suggest that our proposed algorithms robustly outperform state-of-the-art algorithms across different synthetic and real-world datasets.
Modeling Sequential Domain Shift through Estimation of Optimal Subspaces for Categorization. Samanta, Selvan, Das (Visualization and Perception Lab)
Abstract. This paper describes a new method of unsupervised domain adaptation (DA) using the properties of the subspaces spanning the source and target domains, when projected along a path in the Grassmann manifold. Our proposed method uses both the geometrical and the statistical properties of the subspaces spanning the two domains to estimate a sequence of optimal intermediary subspaces. This creates a path of shortest length between the subspaces of the source and target domains, such that the distributions of the source and target domain data become identical when projected onto these intermediate subspaces (lying along the path). We extend our concept to the kernel space and perform nonlinear projections on the subspaces using the kernel trick. Projections of the source and target domains onto these intermediary subspaces are used to obtain the incremental (or gradual) change in the geometrical as well as the statistical properties of the subspaces spanning the source and target domains. Results on object and event categorization using real-world datasets show that our proposed optimal path in the Grassmann manifold produces better results for the problem of DA than the usual geodesic path.
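The geodesic path on the Grassmann manifold that such methods move along can be computed in closed form from principal angles. A sketch assuming distinct, nonzero principal angles (names are illustrative, not from the paper):

```python
import numpy as np

def grassmann_geodesic(Y0, Y1, t):
    """Point at parameter t on the Grassmann geodesic from span(Y0)
    to span(Y1), where Y0, Y1 are orthonormal n x k bases.

    Uses the principal-angle construction: the SVD of Y0^T Y1 gives
    cosines of the angles, and the geodesic rotates within each
    principal plane; t = 0 gives span(Y0), t = 1 gives span(Y1).
    """
    U, c, Vt = np.linalg.svd(Y0.T @ Y1)
    theta = np.arccos(np.clip(c, -1.0, 1.0))
    # Unit directions completing each principal plane.
    W = Y1 @ Vt.T - Y0 @ U @ np.diag(c)
    W = W / np.maximum(np.linalg.norm(W, axis=0), 1e-15)
    return Y0 @ U @ np.diag(np.cos(t * theta)) + W @ np.diag(np.sin(t * theta))
```

Sampling this path at intermediate t values yields exactly the kind of sequence of intermediary subspaces the abstract describes, onto which source and target data can be projected.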
3D Shape Reconstruction from 2D Landmarks: A Convex Formulation, 2014
We investigate the problem of reconstructing the 3D shape of an object, given a set of 2D landmarks in a single image. To alleviate the reconstruction ambiguity, a widely used approach is to confine the unknown 3D shape within a shape space built upon existing shapes. While this approach has proven to be successful in various applications, a challenging issue remains: the joint estimation of shape parameters and camera-pose parameters requires solving a nonconvex optimization problem. The existing methods often adopt an alternating minimization scheme to locally update the parameters, and consequently the solution is sensitive to initialization. In this paper, we propose a convex formulation to address this problem and develop an efficient algorithm to solve the proposed convex program. We demonstrate the exact recovery property of the proposed method, its merits compared to alternative methods, and its applicability to human pose, car, and face reconstruction.
3D Shape Estimation from 2D Landmarks: A Convex Relaxation Approach, 2015
We investigate the problem of estimating the 3D shape of an object, given a set of 2D landmarks in a single image. To alleviate the reconstruction ambiguity, a widely used approach is to confine the unknown 3D shape within a shape space built upon existing shapes. While this approach has proven to be successful in various applications, a challenging issue remains: the joint estimation of shape parameters and camera-pose parameters requires solving a nonconvex optimization problem. The existing methods often adopt an alternating minimization scheme to locally update the parameters, and consequently the solution is sensitive to initialization. In this paper, we propose a convex formulation to address this problem and develop an efficient algorithm to solve the proposed convex program. We demonstrate the exact recovery property of the proposed method, its merits compared to alternative methods, and its applicability to human pose and car shape estimation.