Results 1 - 4 of 4
A convergent gradient descent algorithm for rank minimization and semidefinite programming from random linear measurements. arXiv preprint arXiv:1506.06081, 2015.
"... Abstract We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r 3 κ 2 n log n) random measurements of a positive semidefinite n×n matrix of rank r and cond ..."
Abstract
-
Cited by 4 (0 self)
- Add to MetaCart
(Show Context)
Abstract: We propose a simple, scalable, and fast gradient descent algorithm to optimize a nonconvex objective for the rank minimization problem and a closely related family of semidefinite programs. With O(r^3 κ^2 n log n) random measurements of a positive semidefinite n×n matrix of rank r and condition number κ, our method is guaranteed to converge linearly to the global optimum.
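
A minimal sketch of the factored gradient descent idea this abstract describes, assuming i.i.d. Gaussian sensing matrices A_i, a fixed step size eta, and a simple spectral initialization; the function names and constants are illustrative, not the paper's exact procedure:

    import numpy as np

    def sense(As, M):
        # Linear measurements b_i = <A_i, M>.
        return np.array([np.sum(A * M) for A in As])

    def factored_gd(As, b, n, r, eta=1e-3, iters=500):
        # Spectral initialization: top-r eigenpairs of (1/m) sum_i b_i A_i.
        m = len(As)
        S = sum(bi * A for bi, A in zip(b, As)) / m
        S = (S + S.T) / 2
        w, V = np.linalg.eigh(S)
        U = V[:, -r:] * np.sqrt(np.maximum(w[-r:], 0.0))
        # Gradient descent on f(U) = (1/2m) sum_i (<A_i, U U^T> - b_i)^2.
        for _ in range(iters):
            res = sense(As, U @ U.T) - b
            grad = sum(ri * (A + A.T) @ U for ri, A in zip(res, As)) / m
            U -= eta * grad
        return U @ U.T

On a small synthetic instance (say n = 30, r = 2, and m = 600 Gaussian measurements of a random rank-2 PSD matrix), the iterates should exhibit the linear convergence to the global optimum that the abstract guarantees.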
Fast Algorithms for Robust PCA via Gradient Descent
"... Abstract We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has been recently well-studied, and conditions on when recovery is possible (how many observ ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: We consider the problem of Robust PCA in the fully and partially observed settings. Without corruptions, this is the well-known matrix completion problem. From a statistical standpoint this problem has recently been well studied, and the conditions under which recovery is possible via polynomial-time algorithms (how many observations do we need, how many corruptions can we tolerate) are by now understood. This paper presents and analyzes a nonconvex optimization approach that greatly reduces the computational complexity of the above problems compared to the best available algorithms. In particular, in the fully observed case, with r denoting rank and d dimension, we reduce the complexity from O(r^2 d^2 log(1/ε)) to O(r d^2 log(1/ε)), a substantial saving when the rank is large. For the partially observed case, we show the complexity of our algorithm is no more than O(r^4 d log d log(1/ε)). Not only is this the best known run time for a provable algorithm under partial observation, but when r is small compared to d it also allows a near-linear-in-d run time that can be exploited in the fully observed case as well, by simply running our algorithm on a subset of the observations.
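
For the fully observed case, the nonconvex approach can be sketched as alternating a simple sparse-corruption estimate with gradient descent on a low rank factor. The sorting-based thresholding, the symmetric setup (M = U U^T + S), and the step size below are illustrative assumptions, not the paper's exact algorithm:

    import numpy as np

    def keep_largest(R, alpha):
        # Simplified sparse step: keep the alpha-fraction of largest-magnitude
        # entries in each row as the corruption estimate, zero the rest.
        S = np.zeros_like(R)
        k = max(1, int(alpha * R.shape[1]))
        for i, row in enumerate(R):
            idx = np.argsort(np.abs(row))[-k:]
            S[i, idx] = row[idx]
        return S

    def robust_pca_gd(M, r, alpha=0.1, iters=300):
        # Model: M = L + S with L = U U^T of rank r, S sparse (symmetric case).
        S = keep_largest(M, alpha)
        w, V = np.linalg.eigh((M - S + (M - S).T) / 2)
        U = V[:, -r:] * np.sqrt(np.maximum(w[-r:], 0.0))
        eta = 0.1 / (np.max(np.abs(w)) + 1e-12)  # step scaled to spectral norm
        for _ in range(iters):
            S = keep_largest(M - U @ U.T, alpha)
            E = U @ U.T + S - M
            U -= eta * (E + E.T) @ U  # gradient of (1/2)||U U^T + S - M||_F^2
        return U @ U.T, S

Each iteration costs O(r d^2) (the d×d products against a d×r factor), which, with linear convergence over log(1/ε) iterations, is where the O(r d^2 log(1/ε)) total complexity quoted in the abstract comes from.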
A Geometric Analysis of Phase Retrieval
"... Abstract Can we recover a complex signal from its Fourier magnitudes? More generally, given a set of m measurements, y k = |a * k x| for k = 1, . . . , m, is it possible to recover x ∈ C n (i.e., length-n complex vector)? This generalized phase retrieval (GPR) problem is a fundamental task in vario ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: Can we recover a complex signal from its Fourier magnitudes? More generally, given a set of m measurements, y_k = |a_k^* x| for k = 1, ..., m, is it possible to recover x ∈ C^n (i.e., a length-n complex vector)? This generalized phase retrieval (GPR) problem is a fundamental task in various disciplines, and has been the subject of much recent investigation. Natural nonconvex heuristics often work remarkably well for GPR in practice, but lack clear theoretical explanations. In this paper, we take a step towards bridging this gap. We prove that when the measurement vectors a_k are generic (i.i.d. complex Gaussian) and the number of measurements is large enough (m ≥ C n log^3 n), with high probability a natural least-squares formulation for GPR has the following benign geometric structure: (1) there are no spurious local minimizers, and all global minimizers are equal to the target signal x, up to a global phase; and (2) the objective function has negative curvature around each saddle point. This structure allows a number of iterative optimization methods to efficiently find a global minimizer, without special initialization. To corroborate the claim, we describe and analyze a second-order trust-region algorithm.
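
The "natural least-squares formulation" here can be written as f(z) = (1/2m) Σ_k (|a_k^* z|^2 - y_k^2)^2. A minimal sketch of its Wirtinger-style gradient and plain gradient descent from a random start (the step size and the deliberately non-spectral initialization are illustrative assumptions; the paper itself analyzes a trust-region method):

    import numpy as np

    def gpr_value_grad(z, A, y):
        # f(z) = (1/2m) sum_k (|a_k^* z|^2 - y_k^2)^2; rows of A hold a_k^*.
        m = A.shape[0]
        Az = A @ z
        res = np.abs(Az) ** 2 - y ** 2
        f = np.sum(res ** 2) / (2 * m)
        grad = A.conj().T @ (res * Az) / m  # Wirtinger gradient w.r.t. conj(z)
        return f, grad

    rng = np.random.default_rng(0)
    n, m = 16, 8 * 16
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2)
    y = np.abs(A @ x)
    # Random start: the benign geometry is what lets descent succeed
    # without any special initialization.
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    step = 0.1 / np.mean(y ** 2)  # E|a_k^* x|^2 = ||x||^2 sets the scale
    for _ in range(5000):
        f, g = gpr_value_grad(z, A, y)
        z = z - step * g

Note that success can only be checked up to a global phase, e.g. by minimizing ||x - e^{iφ} z|| over φ, consistent with point (1) of the abstract.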
Nonconvex Low Rank Matrix Factorization via Inexact First Order Oracle
"... We study the low rank matrix factorization problem via nonconvex optimization. Com-pared with the convex relaxation approach, nonconvex optimization exhibits superior empirical performance for large scale low rank matrix estimation. However, the understanding of its theo-retical guarantees is limite ..."
Abstract
- Add to MetaCart
Abstract: We study the low rank matrix factorization problem via nonconvex optimization. Compared with the convex relaxation approach, nonconvex optimization exhibits superior empirical performance for large scale low rank matrix estimation. However, the understanding of its theoretical guarantees is limited. To bridge this gap, we exploit the notion of an inexact first order oracle, which naturally appears in low rank matrix factorization problems such as matrix sensing and completion. In particular, our analysis shows that a broad class of nonconvex optimization algorithms, including alternating minimization and gradient-type methods, can be treated as solving two sequences of convex optimization problems using an inexact first order oracle. We can thus show that these algorithms converge geometrically to the global optima and recover the true low rank matrices under suitable conditions. Numerical results are provided to support our theory.
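
As a concrete instance of the "two sequences of convex optimization problems" viewpoint, alternating minimization for M ≈ U V^T solves an exact least-squares problem for one factor while the other is held fixed. This fully observed sketch is illustrative only; in matrix sensing or completion each half-step only approximates this least-squares solve, which is where the inexact first order oracle enters:

    import numpy as np

    def alt_min(M, r, iters=50, seed=0):
        # Alternate two convex least-squares problems:
        #   U <- argmin_U ||M - U V^T||_F, then V <- argmin_V ||M - U V^T||_F.
        rng = np.random.default_rng(seed)
        V = rng.standard_normal((M.shape[1], r))
        U = None
        for _ in range(iters):
            U = np.linalg.lstsq(V, M.T, rcond=None)[0].T
            V = np.linalg.lstsq(U, M, rcond=None)[0].T
        return U, V

With partial or noisy measurements, each half-step sees only an approximate gradient of the other factor's subproblem; treating that approximation as an inexact first order oracle is how the abstract's geometric convergence guarantee is obtained.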