Results 1 – 10 of 424
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
 Comm. Pure Appl. Math
, 2004
"... We consider linear equations y = Φα where y is a given vector in Rⁿ, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ Rᵐ. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Abstract

Cited by 568 (10 self)
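The ℓ1-minimization problem this entry studies, min ‖α‖1 subject to y = Φα, can be recast as a linear program by introducing auxiliary variables t with −t ≤ α ≤ t. A minimal sketch (the dimensions, random seed, and sparse test vector are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 20, 50
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)          # columns normalized to unit ell-2 norm
alpha_true = np.zeros(m)
alpha_true[[3, 17, 41]] = [1.5, -2.0, 0.7]  # sparse ground truth (illustrative)
y = Phi @ alpha_true

# min ||alpha||_1  s.t.  Phi alpha = y, as an LP in (alpha, t):
# minimize sum(t)  s.t.  -t <= alpha <= t,  Phi alpha = y
c = np.concatenate([np.zeros(m), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[I, -I], [-I, -I]])        # alpha - t <= 0 and -alpha - t <= 0
b_ub = np.zeros(2 * m)
A_eq = np.hstack([Phi, np.zeros((n, m))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * (2 * m))
alpha_hat = res.x[:m]
```

In the regime the paper describes, the LP solution coincides with the sparsest solution for most such Φ, so here alpha_hat should match alpha_true up to solver tolerance.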
A Scaling Algorithm to Equilibrate Both Rows and Columns Norms in Matrices
, 2001
"... We present an iterative procedure which asymptotically scales the infinity norm of both rows and columns in a matrix to 1. This scaling strategy exhibits some optimality properties and additionally preserves symmetry. The algorithm also shows fast linear convergence with an asymptotic rate of 1/2 ..."
Abstract

Cited by 30 (3 self)
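An equilibration iteration of the kind the abstract describes can be sketched in a few lines; the update below, scaling each row and column by the square root of its infinity norm, is one common form of such a procedure and assumes the matrix has no zero row or column:

```python
import numpy as np

def equilibrate(A, iters=30):
    """Iteratively scale rows and columns so their infinity norms approach 1.

    Assumes A has no zero row or column. Each pass divides row i by
    sqrt(max|row i|) and column j by sqrt(max|col j|)."""
    A = A.astype(float).copy()
    for _ in range(iters):
        r = np.sqrt(np.abs(A).max(axis=1))   # row scaling factors
        c = np.sqrt(np.abs(A).max(axis=0))   # column scaling factors
        A = A / r[:, None] / c[None, :]
    return A

A = np.array([[4.0, 0.5], [100.0, 2.0]])     # badly scaled example
B = equilibrate(A)
row_norms = np.abs(B).max(axis=1)            # both approach 1
col_norms = np.abs(B).max(axis=0)            # both approach 1
```

With the asymptotic rate of 1/2 stated in the abstract, a few dozen passes already bring the norms within machine-level distance of 1 on small examples like this one.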
The Dantzig selector: statistical estimation when p is much larger than n
, 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ Rᵖ is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract

Cited by 879 (14 self)
, where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability
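The Dantzig selector, min ‖x‖1 subject to ‖Aᵀ(y − Ax)‖∞ ≤ t, is itself a linear program in (x, u) with −u ≤ x ≤ u. A sketch under illustrative assumptions (the data, noise level, and the choice t = 0.1 are not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, p = 30, 60
A = rng.standard_normal((n, p))
A /= np.linalg.norm(A, axis=0)               # unit-normed columns
x_true = np.zeros(p)
x_true[[5, 20]] = [2.0, -1.5]                # sparse truth (illustrative)
y = A @ x_true + 0.01 * rng.standard_normal(n)
t = 0.1                                      # bound on residual correlations

# LP in (x, u): minimize sum(u)
# s.t. -u <= x <= u  and  -t <= A^T (y - A x) <= t
G = A.T @ A
c = np.concatenate([np.zeros(p), np.ones(p)])
I = np.eye(p)
Z = np.zeros((p, p))
A_ub = np.block([[I, -I], [-I, -I], [G, Z], [-G, Z]])
b_ub = np.concatenate([np.zeros(2 * p), A.T @ y + t, -(A.T @ y) + t])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * p))
x_hat = res.x[:p]
```

The constraint caps how correlated the residual may be with each column of A, which is why the abstract phrases the condition through the residual vector r = y − Ax̃.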
Stable signal recovery from incomplete and inaccurate measurements
 Comm. Pure Appl. Math.
, 2006
"... Abstract Suppose we wish to recover a vector x0 ∈ Rᵐ (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To r ..."
Abstract

Cited by 1397 (38 self)
? To recover x0, we consider the solution x to the ℓ1-regularization problem min ‖x‖1 subject to ‖Ax − y‖2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level. As a first example
Rank-sparsity incoherence for matrix decomposition
, 2010
"... Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable ..."
Abstract

Cited by 230 (21 self)
to solve in general. In this paper we consider a convex optimization formulation to splitting the specified matrix into its components, by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty
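The convex formulation summarized above, minimizing a combination of the nuclear norm of L and the ℓ1 norm of S subject to L + S = C, can be approached with a standard ADMM-style splitting. The sketch below is a generic solver of that type, not the authors' own algorithm, and the test matrix, λ, μ, and iteration count are all illustrative:

```python
import numpy as np

def soft(X, tau):
    """Entrywise soft-thresholding (prox of the ell-1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular-value thresholding (prox of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def split_sparse_lowrank(C, lam=None, mu=1.0, iters=500):
    """ADMM-style splitting of C into low-rank L plus sparse S,
    for min ||L||_* + lam ||S||_1  s.t.  L + S = C."""
    m, n = C.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))       # common heuristic choice
    L = np.zeros_like(C); S = np.zeros_like(C); Y = np.zeros_like(C)
    for _ in range(iters):
        L = svt(C - S + Y / mu, 1.0 / mu)
        S = soft(C - L + Y / mu, lam / mu)
        Y = Y + mu * (C - L - S)             # dual update on L + S = C
    return L, S

rng = np.random.default_rng(2)
u = rng.standard_normal((20, 1)); v = rng.standard_normal((20, 1))
L_true = u @ v.T                             # rank-1 component
S_true = np.zeros((20, 20))
S_true.flat[rng.choice(400, size=8, replace=False)] = 5.0
L_hat, S_hat = split_sparse_lowrank(L_true + S_true)
rel_err = np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true)
```

In the incoherent regime the abstract refers to (a diffuse low-rank part plus a few sparse spikes), this kind of splitting recovers both components to good accuracy.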
Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix
 SIAM JOURNAL ON COMPUTING
, 2004
"... ... matrix A. It is often of interest to find a low-rank approximation to A, i.e., an approximation D to the matrix A of rank not greater than a specified rank k, where k is much smaller than m and n. Methods such as the Singular Value Decomposition (SVD) may be used to find an approximation to A ..."
Abstract

Cited by 216 (20 self)
description of a low-rank approximation D to A, and which are qualitatively faster than the SVD. Both algorithms have provable bounds for the error matrix A − D. For any matrix X, let ‖X‖F and ‖X‖2 denote its Frobenius norm and its spectral norm, respectively. In the first algorithm, c = O(1
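A column-sampling scheme of this flavor, drawing c columns with probability proportional to their squared norms and projecting A onto the span of the (rescaled) sample, can be sketched as follows. This is a simplified variant for illustration, not the paper's exact algorithm, and the dimensions and rank are made up:

```python
import numpy as np

def sampled_lowrank(A, c, k, rng):
    """Column-sampling sketch of a rank-k approximation to A."""
    p = (A ** 2).sum(axis=0)
    p = p / p.sum()                          # prob. prop. to squared column norms
    idx = rng.choice(A.shape[1], size=c, p=p)
    C = A[:, idx] / np.sqrt(c * p[idx])      # rescaled sampled columns
    U, _, _ = np.linalg.svd(C, full_matrices=False)
    Uk = U[:, :k]                            # top-k left singular vectors of C
    return Uk @ (Uk.T @ A)                   # project A onto their span

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 80))  # rank-5 matrix
D = sampled_lowrank(A, c=30, k=5, rng=rng)
err = np.linalg.norm(A - D) / np.linalg.norm(A)
```

Only the c sampled columns are decomposed, which is what makes such methods qualitatively faster than a full SVD of A; when A is exactly rank k, as in this toy example, the projection recovers A essentially exactly.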
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2005
"... A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that compu ..."
Abstract

Cited by 188 (11 self)
and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ‖·‖2 and ‖·‖F denote the spectral norm and the Frobenius norm, respectively, of a matrix
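The basic Nyström reconstruction from c sampled columns is G ≈ C W⁺ Cᵀ, where C holds the sampled columns and W is the corresponding c × c submatrix. A sketch (for simplicity it fixes the sampled columns rather than using the paper's nonuniform distribution, and the rank-3 Gram matrix is illustrative):

```python
import numpy as np

def nystrom(G, idx):
    """Nystrom approximation G ~ C W^+ C^T from the columns listed in idx."""
    C = G[:, idx]                            # n x c sampled columns
    W = G[np.ix_(idx, idx)]                  # c x c intersection block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 3))
G = X @ X.T                                  # rank-3 Gram matrix
idx = [0, 1, 2, 3, 4]                        # fixed sample, for illustration only
G_hat = nystrom(G, idx)
err = np.linalg.norm(G - G_hat) / np.linalg.norm(G)
```

Only W is (pseudo-)inverted, so the cost is driven by c rather than n; when the sampled columns span the range of G, as with this low-rank example, the reconstruction is exact up to round-off.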
Systematic design of unitary space-time constellations
 IEEE TRANS. INFORM. THEORY
, 2000
"... We propose a systematic method for creating constellations of unitary space–time signals for multiple-antenna communication links. Unitary space–time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transm ..."
Abstract

Cited by 201 (10 self)
the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation—an oblong complex-valued matrix whose columns are orthonormal—and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space
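The construction outlined above, a first orthonormal signal rotated repeatedly by a fixed unitary matrix, can be sketched with a diagonal rotation Θ = diag(e^{2πi u_k / L}). The block times T, antennas M, constellation size L, and frequencies u_k below are illustrative, not a design from the paper:

```python
import numpy as np

def systematic_constellation(T, M, L, u, rng):
    """Signals Phi_l = Theta^l Phi_0, with Phi_0 a T x M orthonormal matrix
    and Theta a diagonal unitary rotation with frequencies u (length T)."""
    Q, _ = np.linalg.qr(rng.standard_normal((T, M))
                        + 1j * rng.standard_normal((T, M)))
    Phi0 = Q                                  # first signal: orthonormal columns
    Theta = np.diag(np.exp(2j * np.pi * np.asarray(u) / L))
    return [np.linalg.matrix_power(Theta, l) @ Phi0 for l in range(L)]

rng = np.random.default_rng(5)
T, M, L = 8, 2, 16
u = [1, 3, 5, 7, 9, 11, 13, 15]               # per-coordinate frequencies (made up)
signals = systematic_constellation(T, M, L, u, rng)
# rotation by a unitary matrix preserves orthonormality of the columns
ok = all(np.allclose(S.conj().T @ S, np.eye(M)) for S in signals)
```

Because Θ is unitary, every Θˡ Φ0 keeps orthonormal columns, so the whole constellation remains a set of unitary space–time signals; the actual design problem is the choice of the frequencies u_k.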
THE DORIC COLUMN: A REPRESENTATION OF THE NORM OF VIRTUE
"... At the previous conference my purpose was to give a rhetorical interpretation to the sacred geometry of the west façade of the Parthenon, the best-known of all Greek temples, the apogee of Hellenic architecture, built by architects Ictinus and Callicrates for Pericles, the client, from 447–432 BC, o ..."
Abstract
At the previous conference my purpose was to give a rhetorical interpretation to the sacred geometry of the west façade of the Parthenon, the best-known of all Greek temples, the apogee of Hellenic architecture, built by architects Ictinus and Callicrates for Pericles, the client, from 447–432 BC, on the Acropolis in Athens.1 In the present paper I will take as my point of departure the analysis of the diagram of the west (or east) façade of the Parthenon. [Figure 1: a diagram of the façade hierarchy, from the superior/"high"/Ideas down through representation to the "low"/inferior, demigods/underworld]
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
 Comm. Pure Appl. Math
, 2004
"... We consider inexact linear equations y ≈ Φα where y is a given vector in Rⁿ, Φ is a given n by m matrix, and we wish to find an α0,ε which is sparse and gives an approximate solution, obeying ‖y − Φα0,ε‖2 ≤ ε. In general this requires combinatorial optimization and so is considered intractable. On ..."
Abstract

Cited by 122 (1 self)
. On the other hand, the ℓ1 minimization problem min ‖α‖1 subject to ‖y − Φα‖2 ≤ ε, is convex, and is considered tractable. We show that for most Φ the solution α̂1,ε = α̂1,ε(y, Φ) of this problem is quite generally a good approximation for α̂0,ε. We suppose that the columns of Φ are normalized to unit ℓ2 norm