## Necessary and Sufficient Conditions for Success of the Nuclear Norm Heuristic for Rank Minimization (2008)

Citations: 36 (2 self)

### Citations

7710 | Matrix Analysis - Horn, Johnson - 1988 |

2620 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
- 2006
Citation Context: ...on seminal developments in “compressed sensing” that determined conditions for when minimizing the ℓ1 norm of a vector over an affine space returns the sparsest vector in that space (see, e.g., [4], [3], [1]). There is a strong parallelism between the sparse approximation and rank minimization settings. The rank of a diagonal matrix is equal to the number of non-zeros on the diagonal. Similarly, the... |
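The diagonal-matrix parallelism described in this excerpt is easy to check numerically. The sketch below is my own illustration (plain NumPy, not code from any of the cited papers, with an arbitrary example vector): for a diagonal matrix, the rank equals the number of nonzero diagonal entries, and the nuclear norm (sum of singular values) equals the ℓ1 norm of the diagonal.

```python
import numpy as np

# Hypothetical sparse vector: 3 nonzeros out of 8 entries.
v = np.array([2.0, 0.0, -1.5, 0.0, 0.0, 0.5, 0.0, 0.0])
D = np.diag(v)

# rank(diag(v)) equals the number of nonzeros in v ...
rank = np.linalg.matrix_rank(D)
num_nonzeros = np.count_nonzero(v)

# ... and the nuclear norm (sum of singular values) equals ||v||_1,
# since the singular values of diag(v) are the magnitudes |v_i|.
nuclear_norm = np.linalg.svd(D, compute_uv=False).sum()
l1_norm = np.abs(v).sum()

print(rank, num_nonzeros)     # both 3
print(nuclear_norm, l1_norm)  # both 4.0
```

This is exactly the sense in which the nuclear norm plays the role for rank that the ℓ1 norm plays for sparsity.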

1398 | Decoding by linear programming
- Candès, Tao
- 2005
Citation Context: ...uilds on seminal developments in “compressed sensing” that determined conditions for when minimizing the ℓ1 norm of a vector over an affine space returns the sparsest vector in that space (see, e.g., [4], [3], [1]). There is a strong parallelism between the sparse approximation and rank minimization settings. The rank of a diagonal matrix is equal to the number of non-zeros on the diagonal. Similarly... |

1365 | Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones
- Sturm
- 1999
Citation Context: ...was sampled from the Gaussian ensemble with m rows and n² columns. Then the nuclear norm minimization, minimize ‖X‖∗ subject to A vec(X) = A vec(Y0), was solved using the freely available software SeDuMi [18] using the semidefinite programming formulation described in [14]. On a 2.0 GHz laptop, each semidefinite program could be solved in less than two minutes for 40 × 40 dimensional X. We declared Y0 to ... |

625 | A simple proof of the Restricted Isometry Property for random matrices
- Baraniuk, Davenport, et al.
Citation Context: ...eminal developments in “compressed sensing” that determined conditions for when minimizing the ℓ1 norm of a vector over an affine space returns the sparsest vector in that space (see, e.g., [4], [3], [1]). There is a strong parallelism between the sparse approximation and rank minimization settings. The rank of a diagonal matrix is equal to the number of non-zeros on the diagonal. Similarly, the sum ... |

523 | The Geometry of Graphs and Some of its Algorithmic Applications
- Linial, London, et al.
- 1995
Citation Context: ...duction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial information [16] and embedding in Euclidean spaces [11]. In certain instances with special structure, the rank minimization problem can be solved via the singular value decomposition or can be reduced to the solution of a linear system [12], [13]. In gene... |

453 | Spectral analysis of large dimensional random matrices
- Bai, Silverstein
- 2010
Citation Context: ...mpled from G(D). Then E‖G‖∗ = D E[σi] = (8/3π) D^{3/2} + q(D) (3.13), where q(D)/D^{3/2} = o(1). The constant in front of D^{3/2} comes from integrating √λ against the Marčenko-Pastur distribution (see, e.g., [17, 2]): (1/2π) ∫₀⁴ √(4 − t) dt = 8/(3π) ≈ 0.85. Secondly, a straightforward calculation reveals σ(‖G‖∗) = sup_{‖H‖F ≤ 1} ‖H‖∗ = √D. Plugging these values in with the appropriate dimensions completes the proof. ... |
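The constant 8/(3π) appearing in this excerpt can be reproduced numerically. The sketch below is my own check (NumPy only; the matrix size D = 200 is an arbitrary illustrative choice, not from the paper): it evaluates the integral (1/2π)∫₀⁴ √(4 − t) dt by a midpoint rule and compares E‖G‖∗ / D^{3/2} for one Gaussian sample against the same constant.

```python
import numpy as np

# (1/2pi) * integral_0^4 sqrt(4 - t) dt = 8/(3 pi) ~ 0.8488, via a midpoint rule.
n = 100_000
dt = 4.0 / n
mid = (np.arange(n) + 0.5) * dt
integral = np.sqrt(4.0 - mid).sum() * dt / (2.0 * np.pi)

# Monte Carlo check: sum of singular values of a D x D standard Gaussian,
# normalized by D^{3/2}, should be close to 8/(3 pi) for large D.
rng = np.random.default_rng(0)
D = 200
G = rng.standard_normal((D, D))
ratio = np.linalg.svd(G, compute_uv=False).sum() / D**1.5

print(integral, 8 / (3 * np.pi), ratio)
```

Both numbers land near 0.8488, matching the constant quoted in the excerpt.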

428 | Probability in Banach Spaces
- Ledoux, Talagrand
- 1991
Citation Context: ...riable have Gaussian tails. Theorem 3.5 Let x be a normally distributed random vector and let f be a function with Lipschitz constant L. Then P[|f(x) − E[f(x)]| ≥ t] ≤ 2 exp(−t²/(2L²)). (3.5) See [15] for a proof of this theorem with slightly weaker constants and several references for more complicated proofs that give rise to this concentration inequality. The following Lemma bounds the Lipschitz ... |
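Theorem 3.5 as quoted can be sanity-checked by simulation. The sketch below is my own illustration, not code from the cited book: it takes f to be the Euclidean norm, which is 1-Lipschitz, and compares an empirical tail probability against the bound 2 exp(−t²/(2L²)). (The sample mean stands in for E[f(x)], which is fine at this sample size.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, t = 50, 20_000, 2.0

# f(x) = ||x||_2 is Lipschitz with constant L = 1 in the Euclidean metric.
x = rng.standard_normal((trials, n))
f = np.linalg.norm(x, axis=1)

# Empirical two-sided tail versus the Gaussian concentration bound (L = 1).
empirical_tail = np.mean(np.abs(f - f.mean()) >= t)
bound = 2.0 * np.exp(-t**2 / 2.0)

print(empirical_tail, bound)
```

The empirical tail comes out far below the bound, as expected: the norm of a standard Gaussian vector concentrates with O(1) fluctuations regardless of the dimension n.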

362 | Distributions of eigenvalues for some sets of random matrices
- Marchenko, Pastur
- 1967
Citation Context: ...mpled from G(D). Then E‖G‖∗ = D E[σi] = (8/3π) D^{3/2} + q(D) (3.13), where q(D)/D^{3/2} = o(1). The constant in front of D^{3/2} comes from integrating √λ against the Marčenko-Pastur distribution (see, e.g., [17, 2]): (1/2π) ∫₀⁴ √(4 − t) dt = 8/(3π) ≈ 0.85. Secondly, a straightforward calculation reveals σ(‖G‖∗) = sup_{‖H‖F ≤ 1} ‖H‖∗ = √D. Plugging these values in with the appropriate dimensions completes the proof. ... |

286 | Matrix Rank Minimization with Applications
- Fazel
- 2001
Citation Context: ...re one minimizes the trace of a positive semidefinite decision variable instead of the rank (see, e.g., [2], [12]). A generalization of this heuristic to non-symmetric matrices introduced by Fazel in [8] minimizes the nuclear norm, ... |

274 | A rank minimization heuristic with application to minimum order system approximation.
- Fazel, Hindi, et al.
- 2001
Citation Context: ...Optimization problems involving constraints on the rank of matrices are pervasive in control applications, arising in the context of low-order controller design [7], [12], minimal realization theory [9], and model reduction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial information [16] and embedding in Euc... |

270 | Unsupervised learning of image manifolds by semidefinite programming.
- Weinberger, Saul
- 2004
Citation Context: ...oller design [9, 19], minimal realization theory [11], and model reduction [4]. In Machine Learning, problems in inference with partial information [23], multi-task learning [1], and manifold learning [28] have been formulated as rank minimization problems. Rank minimization also plays a key role in the study of embeddings of discrete metric spaces in Euclidean space [16]. In certain instances with spe... |

258 | Convex multi-task feature learning.
- Argyriou, Evgeniou, et al.
- 2008
Citation Context: ...context of low-order controller design [9, 19], minimal realization theory [11], and model reduction [4]. In Machine Learning, problems in inference with partial information [23], multi-task learning [1], and manifold learning [28] have been formulated as rank minimization problems. Rank minimization also plays a key role in the study of embeddings of discrete metric spaces in Euclidean space [16]. In... |

246 | Fast maximum margin matrix factorization for collaborative prediction
- Rennie, Srebro
- 2005
Citation Context: ...al realization theory [9], and model reduction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial information [16] and embedding in Euclidean spaces [11]. In certain instances with special structure, the rank minimization problem can be solved via the singular value decomposition or can be reduced to the solution... |

98 | The one-sided barrier problem for Gaussian noise
- Slepian
- 1962
Citation Context: ...um of one Gaussian process is greater than that of another. Elementary proofs of both of these Theorems and several other Comparison Theorems can be found in §3.3 of [15]. Theorem 3.10 (Slepian’s Lemma [24]) Let X and Y be Gaussian random vectors in R^N such that E[XiXj] ≤ E[YiYj] for all i ≠ j and E[Xi^2] = E[Yi^2] for all i. Then E[maxi Yi] ≤ E[maxi Xi]. Theorem 3.11 (Gordan [12, 13]) Let X = ... |
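Slepian's Lemma as stated here can be illustrated by Monte Carlo. In the sketch below (my own construction, with arbitrary illustrative sizes), X has independent standard Gaussian coordinates while Y has the same unit variances but pairwise correlation ρ = 1/2, so E[XiXj] ≤ E[YiYj] for i ≠ j, and the lemma predicts E[max Y] ≤ E[max X]: raising correlations shrinks the expected maximum.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, rho = 20, 20_000, 0.5

# X: i.i.d. N(0,1) coordinates, so E[X_i X_j] = 0 for i != j.
X = rng.standard_normal((trials, n))

# Y: unit-variance coordinates with pairwise correlation rho,
# built from a shared Gaussian factor g0 plus independent parts.
g0 = rng.standard_normal((trials, 1))
Y = np.sqrt(rho) * g0 + np.sqrt(1.0 - rho) * rng.standard_normal((trials, n))

e_max_x = X.max(axis=1).mean()
e_max_y = Y.max(axis=1).mean()

print(e_max_x, e_max_y)  # Slepian's Lemma predicts e_max_y <= e_max_x
```

The gap is large here because the shared factor g0 contributes nothing to the maximum on average, while the independent part of Y is scaled down by √(1 − ρ).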

71 | Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. [Online]. Available: http://www.ist.caltech.edu/~brecht/publications.html
- Recht, Fazel, et al.
Citation Context: ...available in cases that could also be solved by elementary linear algebra [13]. The first non-trivial sufficient conditions that guaranteed the success of the nuclear norm heuristic were provided in [14]. Focusing on the special case where one seeks the lowest rank matrix in an affine subspace, the authors provide a “restricted isometry” condition on the linear map defining the affine subspace which... |

65 | Neighborliness of randomly projected simplices in high dimensions
- Donoho, Tanner
Citation Context: ...were able to extend much of the analysis developed for the ℓ1 heuristic to provide guarantees for the nuclear norm heuristic. Building on a different collection of developments in compressed sensing [5], [6], [17], we present a necessary and sufficient condition for the solution of the nuclear norm heuristic to coincide with the minimum rank solution in an affine space. The condition characterizes a... |

61 | On the rank minimization problem over a positive semidefinite linear matrix inequality.
- Mesbahi, Papavassilopoulos
- 1997
Citation Context: ...ptotic scenarios. I. INTRODUCTION Optimization problems involving constraints on the rank of matrices are pervasive in control applications, arising in the context of low-order controller design [7], [12], minimal realization theory [9], and model reduction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial infor... |

42 | Rank minimization under LMI constraints: A framework for output feedback problems.
- Ghaoui, Gahinet
- 1993
Citation Context: ...-asymptotic scenarios. I. INTRODUCTION Optimization problems involving constraints on the rank of matrices are pervasive in control applications, arising in the context of low-order controller design [7], [12], minimal realization theory [9], and model reduction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial... |

23 | Compressed sensing - probabilistic analysis of a null-space characterization, IEEE
- Stojnic, Hassibi
- 2009
Citation Context: ...to extend much of the analysis developed for the ℓ1 heuristic to provide guarantees for the nuclear norm heuristic. Building on a different collection of developments in compressed sensing [5], [6], [17], we present a necessary and sufficient condition for the solution of the nuclear norm heuristic to coincide with the minimum rank solution in an affine space. The condition characterizes a particular... |

19 | Computational study and comparisons of LFT reducibility methods.
- Beck, D’Andrea
- 1998
Citation Context: ...volving constraints on the rank of matrices are pervasive in control applications, arising in the context of low-order controller design [7], [12], minimal realization theory [9], and model reduction [2]. Rank minimization is also of interest to a broader optimization community in a variety of applications including inference with partial information [16] and embedding in Euclidean spaces [11]. In ce... |

16 | Rank minimization via online learning.
- Meka, Jain, et al.
- 2008
Citation Context: ... → R^m be a linear map, and let b ∈ R^m. The main optimization problem under study is: minimize rank(X) subject to A(X) = b. (1.1) This problem is known to be NP-hard and is also hard to approximate [18]. As mentioned above, a popular heuristic for this problem replaces the rank function with the sum of the singular values of the decision variable. Let σi(X) denote the i-th largest singular value of ... |

16 | Metric entropy of homogeneous spaces. In Quantum probability (Gdansk,
- Szarek
- 1997
Citation Context: ...operators onto r-dimensional subspaces, we will have proved the Strong Bound. To proceed, we need to know the size of an ɛ-net. The following bound on such a net is due to Szarek. Theorem 3.7 (Szarek [27]) Consider the space of all projection operators on R^n projecting onto r-dimensional subspaces, endowed with the metric d(P, P′) = ‖P − P′‖. Then there exists an ɛ-net in this metric space with car... |

8 | On cone-invariant linear matrix inequalities.
- Parrilo, Khatri
- 2000
Citation Context: ...n spaces [11]. In certain instances with special structure, the rank minimization problem can be solved via the singular value decomposition or can be reduced to the solution of a linear system [12], [13]. In general, however, minimizing the rank of a matrix subject to convex constraints is NP-hard. The best exact algorithms for this problem involve quantifier elimination and such solution methods requ... |

4 | Some inequalities for Gaussian processes and applications
- Gordan
- 1985
Citation Context: ...pian’s Lemma [24]) Let X and Y be Gaussian random vectors in R^N such that E[XiXj] ≤ E[YiYj] for all i ≠ j and E[Xi^2] = E[Yi^2] for all i. Then E[maxi Yi] ≤ E[maxi Xi]. Theorem 3.11 (Gordan [12, 13]) Let X = (Xij) and Y = (Yij) be Gaussian random vectors in R^{N1×N2} such that E[XijXik] ≤ E[YijYik] for all i, j, k, E[XijXlk] ≥ E[YijYlk] for all i ≠ l, and E[Xij^2] = E[Yij^2]. Then E[mini maxj Yij] ≤ E[mini maxj Xij]. ... |

4 | Gaussian processes and almost spherical sections of convex bodies
- Gordan
- 1988
Citation Context: ...pian’s Lemma [24]) Let X and Y be Gaussian random vectors in R^N such that E[XiXj] ≤ E[YiYj] for all i ≠ j and E[Xi^2] = E[Yi^2] for all i. Then E[maxi Yi] ≤ E[maxi Xi]. Theorem 3.11 (Gordan [12, 13]) Let X = (Xij) and Y = (Yij) be Gaussian random vectors in R^{N1×N2} such that E[XijXik] ≤ E[YijYik] for all i, j, k, E[XijXlk] ≥ E[YijYlk] for all i ≠ l, and E[Xij^2] = E[Yij^2]. Then E[mini maxj Yij] ≤ E[mini maxj Xij]. ... |