CiteSeerX
Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization

by B. Recht, M. Fazel, P. Parrilo
Venue: SIAM Review

Results 1 - 10 of 563

A Singular Value Thresholding Algorithm for Matrix Completion

by Jian-Feng Cai, Emmanuel J. Candès, Zuowei Shen , 2008
Abstract - Cited by 555 (22 self)
This paper introduces a novel algorithm to approximate the matrix with minimum nuclear norm among all matrices obeying a set of convex constraints. This problem may be understood as the convex relaxation of a rank minimization problem, and it arises in many important applications, such as recovering a large matrix from a small subset of its entries (the famous Netflix problem). Off-the-shelf algorithms such as interior-point methods do not scale to large problems of this kind, with over a million unknown entries. This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank. The algorithm is iterative, producing a sequence of matrices {X^k, Y^k}; at each step it mainly performs a soft-thresholding operation on the singular values of the matrix Y^k. Two remarkable features make this attractive for low-rank matrix completion problems. The first is that the soft-thresholding operation is applied to a sparse matrix; the second is that the rank of the iterates {X^k} is empirically nondecreasing. Both facts allow the algorithm to use minimal storage and keep the computational cost of each iteration low.
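The soft-thresholding step the abstract describes can be sketched in a few lines of NumPy. The function names, the threshold tau, the step size delta, and the iteration count below are illustrative assumptions of this sketch, not the tuned values from the paper:

```python
import numpy as np

def svt(Y, tau):
    """Soft-threshold the singular values of Y at level tau
    (the shrink operator applied at each SVT step)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M, mask, tau, delta=1.2, n_iter=500):
    """Minimal SVT-style iteration for matrix completion: mask marks the
    observed entries of M; tau and delta are illustrative choices."""
    Y = np.zeros_like(M)
    X = Y
    for _ in range(n_iter):
        X = svt(Y, tau)                 # X_k: shrink the singular values of Y_k
        Y = Y + delta * mask * (M - X)  # ascent step on the observed residual
    return X
```

Note that Y stays supported on the observed entries throughout, which is the sparse-matrix storage advantage the abstract mentions.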

Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions

by N. Halko, P. G. Martinsson, J. A. Tropp
Abstract - Cited by 253 (6 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys and extends recent research demonstrating that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed (either explicitly or implicitly) to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider the model problem of finding the k dominant components of the singular value decomposition...
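The modular recipe in the abstract (sample, orthonormalize, compress, decompose deterministically) fits in a few lines. The function name, the Gaussian test matrix, and the oversampling parameter p are assumptions of this sketch:

```python
import numpy as np

def randomized_svd(A, k, p=10):
    """Rank-k SVD sketch via a randomized range finder: project A onto
    the range of A @ G for a Gaussian test matrix G, then decompose
    the small compressed matrix deterministically."""
    G = np.random.default_rng(0).standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ G)     # orthonormal basis capturing range(A @ G)
    B = Q.T @ A                    # compress A to the (k+p)-dim subspace
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k]
```

When A has exact rank at most k, the sampled subspace contains the range of A with probability one, so the factorization is exact up to roundoff; for general A the oversampling p controls the failure probability.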

Citation Context

...er, researchers began to study compressive sampling for matrices. In 2007, Recht, Fazel, and Parrilo demonstrated that it is possible to reconstruct a rank-deficient matrix from Gaussian measurements [112]. More recently, Candès and Recht [22] and Candès and Tao [23] considered the problem of completing a low-rank matrix from a random sample of its entries. The usual goals of compressive sampling are (...

Rank-sparsity incoherence for matrix decomposition

by Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, Alan S. Willsky , 2010
Abstract - Cited by 230 (21 self)
Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and it is intractable to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty principle between the sparsity pattern of a matrix and its row and column spaces, and use it to characterize both fundamental identifiability and (deterministic) sufficient conditions for exact recovery. Our analysis is geometric in nature, with the tangent spaces to the algebraic varieties of sparse and low-rank matrices playing a prominent role. When the sparse and low-rank matrices are drawn from certain natural random ensembles, we show that the sufficient conditions for exact recovery are satisfied with high probability. We conclude with simulation results on synthetic matrix decomposition problems.

Citation Context

...t was used to recover low-rank positive semidefinite matrices [22]. Indeed, several papers demonstrate that the nuclear norm heuristic recovers low-rank matrices in various rank minimization problems [24, 4]. Based on these results, we propose the following optimization formulation to recover A⋆ and B⋆ given C = A⋆ + B⋆:

(Â, B̂) = argmin_{A,B} γ‖A‖₁ + ‖B‖∗  s.t.  A + B = C.  (1.3)

Here γ is a paramete...
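A minimal alternating-directions sketch of program (1.3), assuming standard proximal steps (entrywise shrinkage for the ℓ1 term, singular value shrinkage for the nuclear norm); the penalty mu heuristic and iteration count are illustrative choices, not from the paper:

```python
import numpy as np

def shrink(X, t):
    """Entrywise soft-thresholding: proximal map of t * ||.||_1."""
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    """Singular value thresholding: proximal map of t * ||.||_*."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def split_sparse_lowrank(C, gamma, n_iter=300):
    """Augmented-Lagrangian iteration for
        min_{A,B} gamma*||A||_1 + ||B||_*  s.t.  A + B = C."""
    mu = C.size / (4.0 * np.abs(C).sum())   # a common heuristic penalty
    A = np.zeros_like(C); B = np.zeros_like(C); Y = np.zeros_like(C)
    for _ in range(n_iter):
        B = svt(C - A + Y / mu, 1.0 / mu)       # nuclear-norm prox step
        A = shrink(C - B + Y / mu, gamma / mu)  # l1 prox step
        Y += mu * (C - A - B)                   # dual ascent on A + B = C
    return A, B
```

On random sparse-plus-low-rank instances in the exact-recovery regime, this iteration typically separates the two components to high accuracy.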

A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers

by Sahand Negahban, Pradeep Ravikumar, Martin J. Wainwright, Bin Yu
Abstract - Cited by 218 (32 self)
Abstract not found

The Convex Geometry of Linear Inverse Problems

by Venkat Chandrasekaran, Benjamin Recht, Pablo A. Parrilo, Alan S. Willsky , 2010
Abstract - Cited by 189 (20 self)
In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However, in many practical situations of interest, models are constrained structurally so that they have only a few degrees of freedom relative to their ambient dimension. This paper provides a general framework for converting notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered are those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases such as sparse vectors (e.g., signal processing, statistics) and low-rank matrices (e.g., control, statistics), as well as several others, including sums of a few permutation matrices (e.g., ranked elections, multiobject tracking), low-rank tensors (e.g., computer vision, neuroscience), orthogonal matrices (e.g., machine learning), and atomic measures (e.g., system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the atomic norm. The facial...

Recovering low-rank matrices from few coefficients in any basis

by David Gross , 2010
Abstract - Cited by 187 (3 self)
Abstract not found

Citation Context

...ing a low-rank matrix from a small number of expansion coefficients with respect to some basis in the space of matrices. Related questions have recently enjoyed a substantial amount of attention (cf. [1], [2], [3], [4], [5], [6], [7] for a highly incomplete list of references). To get some intuition for the problem, note that one needs roughly rn parameters to specify an n×n matrix ρ of rank r. There...
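The parameter count in the last sentence is easy to verify numerically: a rank-r factorization stores two n×r factors, about 2rn numbers instead of n². The dimensions below are assumed for illustration:

```python
import numpy as np

n, r = 100, 3
rng = np.random.default_rng(0)
L, R = rng.standard_normal((n, r)), rng.standard_normal((n, r))
rho = L @ R.T                  # an n x n matrix of rank at most r
assert np.linalg.matrix_rank(rho) <= r
print(n * n, 2 * n * r)        # 10000 entries vs. 600 factor parameters
```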

An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems

by Kim-chuan Toh, Sangwoon Yun , 2009
Abstract - Cited by 184 (9 self)
Abstract not found

Citation Context

...p×n, and

min_{x∈ℝⁿ} { ‖x‖₁ : Ax = b }.  (4)

The problem (4) has attracted much interest in compressed sensing [8, 9, 10, 14, 15] and is also known as the basis pursuit problem. Recently, Recht et al. [38] established theoretical results for the pair (1) and (2) analogous to those in the compressed sensing literature. In the basis pursuit problem (4), b is a vector of measurements of the signal x obtained by us...

SLEP: Sparse Learning with Efficient Projections

by Jun Liu, Shuiwang Ji, Jieping Ye , 2010
Abstract - Cited by 124 (22 self)
Abstract not found

Citation Context

... of the rank function over the unit ball of spectral norm. A number of recent works have shown that the low-rank solution can be recovered exactly by minimizing the trace norm under certain conditions [61, 62, 8]. Trace norm regularized problems can also be considered a form of sparse learning, since the trace norm equals the ℓ1 norm of the vector of singular values. The optimization problems...
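The identity invoked here (the trace norm is the ℓ1 norm of the singular value vector) can be checked directly; the matrix below is an arbitrary example:

```python
import numpy as np

X = np.random.default_rng(0).standard_normal((5, 3))
sigma = np.linalg.svd(X, compute_uv=False)   # singular values of X
trace_norm = sigma.sum()                     # ||X||_* by definition
assert np.isclose(trace_norm, np.linalg.norm(sigma, 1))  # = l1 norm of spectrum
```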

Zero Duality Gap in Optimal Power Flow Problem

by Javad Lavaei, Steven H. Low , 2012
Abstract - Cited by 117 (25 self)
The optimal power flow (OPF) problem is nonconvex and generally hard to solve. In this paper, we propose a semidefinite program (SDP), which is the dual of an equivalent form of the OPF problem. A global optimum solution to the OPF problem can be retrieved from a solution of this convex dual problem whenever the duality gap is zero. This paper provides a necessary and sufficient condition guaranteeing a zero duality gap for the OPF problem. This condition is satisfied by the standard IEEE benchmark systems with 14, 30, 57, 118, and 300 buses, as well as several randomly generated systems. Since this condition is hard to study, a sufficient zero-duality-gap condition is also derived. This sufficient condition holds for the IEEE systems after a small resistance (10^-5 per unit) is added to every transformer that originally assumes zero resistance. We investigate this sufficient condition and show that it holds widely in practice. The main underlying reason for the successful convexification of the OPF problem can be traced back to the modeling of transformers and transmission lines, as well as the non-negativity of physical quantities such as resistance and inductance.

An Accelerated Gradient Method for Trace Norm Minimization

by Shuiwang Ji, Jieping Ye
Abstract - Cited by 111 (7 self)
We consider the minimization of a smooth loss function regularized by the trace norm of the matrix variable. Such a formulation finds applications in many machine learning tasks, including multi-task learning, matrix classification, and matrix completion. The standard semidefinite programming formulation for this problem is computationally expensive. In addition, due to the non-smooth nature of the trace norm, the optimal first-order black-box method for solving this class of problems converges as O(1/√k), where k is the iteration counter. In this paper, we exploit the special structure of the trace norm and propose an extended gradient algorithm that converges as O(1/k). We further propose an accelerated gradient algorithm, which achieves the optimal convergence rate of O(1/k²) for smooth problems. Experiments on multi-task learning problems demonstrate the efficiency of the proposed algorithms.
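The two ingredients the abstract describes, a proximal step that soft-thresholds singular values and a Nesterov momentum recursion, can be sketched as follows; the function names and the quadratic test loss are assumptions of this sketch, not the authors' code:

```python
import numpy as np

def svt(X, t):
    """Proximal map of t * ||.||_*: soft-threshold the singular values."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def accelerated_trace_min(grad, L, shape, lam, n_iter=100):
    """Accelerated proximal gradient for min_W f(W) + lam*||W||_*,
    where grad computes the gradient of the smooth loss f and L is a
    Lipschitz constant of grad; attains the O(1/k^2) rate for smooth f."""
    W = np.zeros(shape); Z = W.copy(); t = 1.0
    for _ in range(n_iter):
        W_next = svt(Z - grad(Z) / L, lam / L)   # proximal gradient step at Z
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Z = W_next + ((t - 1.0) / t_next) * (W_next - W)  # momentum extrapolation
        W, t = W_next, t_next
    return W
```

For the toy loss f(W) = ½‖W − M‖²_F (gradient W − M, L = 1), the proximal step returns the closed-form minimizer svt(M, lam), which makes a convenient sanity check.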

Citation Context

...of the rank function over the unit ball of spectral norm. A number of recent works have shown that the low-rank solution can be recovered exactly by minimizing the trace norm under certain conditions (Recht et al., 2008a; Recht et al., 2008b; Candès & Recht, 2008). In practice, the trace norm relaxation has been shown to yield low-rank solutions, and it has been used widely in many scenarios. In (Srebro et al., 2005;...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University