CiteSeerX

Results 1 - 10 of 424

For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution

by David L. Donoho - Comm. Pure Appl. Math , 2004
"... We consider linear equations y = Φα where y is a given vector in ℝⁿ, Φ is a given n × m matrix with n < m ≤ An, and we wish to solve for α ∈ ℝᵐ. We suppose that the columns of Φ are normalized to unit ℓ2 norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Abstract - Cited by 568 (10 self)
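The ℓ1 recovery claimed in this abstract can be checked numerically: the minimization of ‖α‖1 subject to Φα = y is a linear program. A minimal sketch (the dimensions, seed, and use of `scipy.optimize.linprog` are illustrative choices of ours, not from the paper):

```python
# Basis pursuit sketch:  min ||alpha||_1  s.t.  Phi @ alpha = y,
# recast as a linear program via alpha = u - v with u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m, k = 20, 40, 3                       # underdetermined: n < m
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)        # unit l2-norm columns, as in the abstract
alpha_true = np.zeros(m)
alpha_true[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = Phi @ alpha_true

# LP: minimize 1^T u + 1^T v  subject to  Phi (u - v) = y, u, v >= 0
c = np.ones(2 * m)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * m))
alpha_hat = res.x[:m] - res.x[m:]
print(np.linalg.norm(alpha_hat - alpha_true))  # ~0: the l1 solution is the sparsest one
```

For a sufficiently sparse α the ℓ1 minimizer coincides with the sparsest solution, which is exactly the paper's point.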

A Scaling Algorithm to Equilibrate Both Rows and Columns Norms in Matrices

by Daniel Ruiz , 2001
"... We present an iterative procedure which asymptotically scales the infinity norm of both rows and columns in a matrix to 1. This scaling strategy exhibits some optimality properties and additionally preserves symmetry. The algorithm also shows fast linear convergence with an asymptotic rate of 1/2 ..."
Abstract - Cited by 30 (3 self)
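The scaling procedure described in this abstract can be sketched in a few lines: repeatedly divide each row and each column by the square root of its infinity norm. The function name, iteration count, and test matrix below are our illustrative choices:

```python
# Iterative row/column equilibration in the infinity norm.
import numpy as np

def equilibrate(A, iters=30):
    """Return (S, d1, d2) with S = diag(d1) @ A @ diag(d2) and all
    row and column infinity norms of S driven to 1."""
    A = A.astype(float).copy()
    d1 = np.ones(A.shape[0])
    d2 = np.ones(A.shape[1])
    for _ in range(iters):
        r = np.sqrt(np.abs(A).max(axis=1))   # sqrt of row infinity norms
        c = np.sqrt(np.abs(A).max(axis=0))   # sqrt of column infinity norms
        A = A / r[:, None] / c[None, :]      # scale both sides simultaneously
        d1 /= r
        d2 /= c
    return A, d1, d2

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 7)) * rng.lognormal(0, 3, (5, 7))  # badly scaled
S, d1, d2 = equilibrate(M)
print(np.abs(S).max(axis=1), np.abs(S).max(axis=0))  # all close to 1
```

Scaling rows and columns simultaneously (rather than alternately) is what preserves symmetry for symmetric inputs, and the deviation from 1 roughly halves each sweep, matching the asymptotic rate of 1/2 quoted above.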

The Dantzig selector: statistical estimation when p is much larger than n

by Emmanuel Candes, Terence Tao , 2005
"... In many important statistical applications, the number of variables or parameters p is much larger than the number of observations n. Suppose then that we have observations y = Ax + z, where x ∈ ℝᵖ is a parameter vector of interest, A is a data matrix with possibly far fewer rows than columns, n ≪ ..."
Abstract - Cited by 879 (14 self)
, where r is the residual vector y − Ax̃ and t is a positive scalar. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the true parameter vector x is sufficiently sparse (which here roughly guarantees that the model is identifiable), then with very large probability
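The Dantzig selector constrains the residual correlations ‖Aᵀ(y − Ax)‖∞ ≤ t while minimizing ‖x‖1, which is again a linear program. A hedged sketch (the data, the noiseless setting, and the value of t are ours, chosen only to make the illustration exact):

```python
# Dantzig selector as an LP:  min ||x||_1  s.t.  ||A^T (y - A x)||_inf <= t,
# with x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
n, p, k = 30, 60, 3
A = rng.standard_normal((n, p))
A /= np.linalg.norm(A, axis=0)            # unit-normed columns, as assumed above
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = np.array([2.0, -1.5, 1.0])
y = A @ x_true                            # noiseless, purely for the illustration
t = 1e-6                                  # bound on the residual correlations

G = A.T @ A
c = np.ones(2 * p)
A_ub = np.vstack([np.hstack([ G, -G]),    #  G (u - v) <= A^T y + t
                  np.hstack([-G,  G])])   # -G (u - v) <= t - A^T y
b_ub = np.concatenate([A.T @ y + t, t - A.T @ y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p))
x_hat = res.x[:p] - res.x[p:]
print(np.linalg.norm(A.T @ (y - A @ x_hat), np.inf))  # within t, up to solver tolerance
```

With noise, t would be chosen at the noise level so that the true x remains feasible, which is the regime the abstract's guarantee addresses.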

Stable signal recovery from incomplete and inaccurate measurements

by Emmanuel J Candès , Justin K Romberg , Terence Tao - Comm. Pure Appl. Math., , 2006
"... Abstract Suppose we wish to recover a vector x₀ ∈ ℝᵐ (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax₀ + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x₀ accurately based on the data y? To r ..."
Abstract - Cited by 1397 (38 self)
? To recover x₀, we consider the solution to the ℓ1-regularization problem min ‖x‖₁ subject to ‖Ax − y‖₂ ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x₀ is sufficiently sparse, then the solution is within the noise level. As a first example

Rank-sparsity incoherence for matrix decomposition

by Venkat Chandrasekaran, Sujay Sanghavi, Pablo A. Parrilo, Alan S. Willsky , 2010
"... Suppose we are given a matrix that is formed by adding an unknown sparse matrix to an unknown low-rank matrix. Our goal is to decompose the given matrix into its sparse and low-rank components. Such a problem arises in a number of applications in model and system identification, and is intractable ..."
Abstract - Cited by 230 (21 self)
to solve in general. In this paper we consider a convex optimization formulation for splitting the specified matrix into its components, by minimizing a linear combination of the ℓ1 norm and the nuclear norm of the components. We develop a notion of rank-sparsity incoherence, expressed as an uncertainty
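The two computational kernels of such a convex formulation are the proximal maps of the ℓ1 norm (entrywise soft thresholding, for the sparse part) and of the nuclear norm (singular value thresholding, for the low-rank part). A generic sketch of these building blocks, not the authors' own solver:

```python
# Proximal operators underlying sparse-plus-low-rank decomposition.
import numpy as np

def soft_threshold(X, tau):
    """Prox of tau * ||.||_1: shrink every entry toward zero by tau."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def singular_value_threshold(X, tau):
    """Prox of tau * ||.||_* : soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.outer([1., 2., 3.], [1., 0., -1.]) + np.diag([0., 5., 0.])  # low-rank + sparse
L = singular_value_threshold(M, 1.0)   # small singular values vanish, lowering rank
S = soft_threshold(M, 1.0)             # small entries are zeroed out
```

A full splitting method alternates steps of this kind on the two components; the ℓ1/nuclear-norm trade-off in the objective is exactly what the incoherence condition above controls.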

Fast Monte Carlo Algorithms for Matrices II: Computing a Low-Rank Approximation to a Matrix

by Petros Drineas, Ravi Kannan, Michael W. Mahoney - SIAM JOURNAL ON COMPUTING , 2004
"... ... matrix A. It is often of interest to find a low-rank approximation to A, i.e., an approximation D to the matrix A of rank not greater than a specified rank k, where k is much smaller than m and n. Methods such as the Singular Value Decomposition (SVD) may be used to find an approximation to A ..."
Abstract - Cited by 216 (20 self)
description of a low-rank approximation D to A, and which are qualitatively faster than the SVD. Both algorithms have provable bounds for the error matrix A − D. For any matrix X, let ‖X‖F and ‖X‖₂ denote its Frobenius norm and its spectral norm, respectively. In the first algorithm, c = O(1
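The column-sampling idea behind such algorithms can be sketched directly: draw c columns with probability proportional to their squared norms, rescale them, and project A onto the span of the top-k left singular vectors of the sample. Dimensions, the sampling count, and the exactly-low-rank test matrix are our illustrative choices:

```python
# Randomized low-rank approximation by norm-squared column sampling.
import numpy as np

def sampled_low_rank(A, k, c, rng):
    p = (A ** 2).sum(axis=0)
    p /= p.sum()                                  # norm-squared sampling probabilities
    idx = rng.choice(A.shape[1], size=c, p=p)
    C = A[:, idx] / np.sqrt(c * p[idx])           # rescaled sampled columns
    H = np.linalg.svd(C, full_matrices=False)[0][:, :k]  # top-k left singular vectors
    return H @ (H.T @ A)                          # rank-<=k approximation D

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 4)) @ rng.standard_normal((4, 20))  # exactly rank 4
D = sampled_low_rank(A, k=4, c=12, rng=rng)
print(np.linalg.norm(A - D))  # ~0 here, since A is exactly rank 4
```

For a general A the guarantee is instead an additive bound on ‖A − D‖F relative to the best rank-k error, at a cost far below a full SVD of A.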

On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning

by Petros Drineas, Michael W. Mahoney - JOURNAL OF MACHINE LEARNING RESEARCH , 2005
"... A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n × n Gram matrix G such that compu ..."
Abstract - Cited by 188 (11 self)
and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ‖·‖₂ and ‖·‖F denote the spectral norm and the Frobenius norm, respectively, of a matrix
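The core Nyström step is compact: from c sampled columns C of the Gram matrix and the c × c intersection block W, form G̃ = C W⁺ Cᵀ. A minimal sketch with uniform sampling and a linear kernel standing in for the paper's data-dependent distribution:

```python
# Nystrom approximation of a Gram matrix from sampled columns.
import numpy as np

def nystrom(G, idx):
    C = G[:, idx]                  # n x c block of sampled columns
    W = G[np.ix_(idx, idx)]        # c x c intersection of sampled rows and columns
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 5))
G = X @ X.T                        # rank-5 Gram matrix (linear kernel)
idx = rng.choice(50, size=10, replace=False)
G_tilde = nystrom(G, idx)
print(np.linalg.norm(G - G_tilde))  # ~0: 10 columns suffice for a rank-5 G
```

Only the c sampled columns of G are ever formed, which is what breaks the O(n³) barrier mentioned in the abstract.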

Systematic design of unitary space-time constellations

by Bertrand M. Hochwald, Thomas L. Marzetta, Thomas J. Richardson, Wim Sweldens, Rüdiger Urbanke - IEEE TRANS. INFORM. THEORY , 2000
"... We propose a systematic method for creating constellations of unitary space–time signals for multiple-antenna communication links. Unitary space–time signals, which are orthonormal in time across the antennas, have been shown to be well-tailored to a Rayleigh fading channel where neither the transm ..."
Abstract - Cited by 201 (10 self)
the familiar maximum-Euclidean-distance norm. Our construction begins with the first signal in the constellation—an oblong complex-valued matrix whose columns are orthonormal—and systematically produces the remaining signals by successively rotating this signal in a high-dimensional complex space
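The rotation idea can be sketched concretely: take one T × M matrix with orthonormal columns and generate the rest of the constellation by repeated multiplication with a fixed diagonal unitary rotation. The rotation frequencies u below are an arbitrary assumption for illustration, not an optimized design from the paper:

```python
# Rotation-generated constellation of unitary space-time signals.
import numpy as np

T, M, L = 8, 2, 16                              # block length, antennas, constellation size
u = np.array([1, 3, 5, 7, 9, 11, 13, 15])       # assumed per-row rotation frequencies
Theta = np.diag(np.exp(2j * np.pi * u / L))     # diagonal unitary rotation

# First signal: an oblong matrix with orthonormal columns (random here).
Phi0 = np.linalg.qr(np.random.default_rng(5).standard_normal((T, M)))[0]
constellation = [np.linalg.matrix_power(Theta, l) @ Phi0 for l in range(L)]

# Each signal remains orthonormal in time across the antennas:
for Phi in constellation:
    assert np.allclose(Phi.conj().T @ Phi, np.eye(M))
```

Because Θ is unitary, every rotated signal keeps orthonormal columns automatically; the design problem the paper addresses is choosing the frequencies so the signals are well separated.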

THE DORIC COLUMN: A REPRESENTATION OF THE NORM OF VIRTUE

by Ea Maré
"... At the previous conference my purpose was to give a rhetorical interpretation to the sacred geometry of the west façade of the Parthenon, the best-known of all Greek temples, the apogee of Hellenic architecture, built by architects Ictinus and Callicrates for Pericles, the client, from 447–432 BC, o ..."
Abstract
on the Acropolis in Athens. In the present paper I will take as my point of departure the analysis of the diagram of the west (or east) façade of the Parthenon. [Figure 1: a diagram opposing a superior "high" register of representation (ideas) to an inferior "low" register (demigods/underworld)]

For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution

by David L. Donoho - Comm. Pure Appl. Math , 2004
"... We consider inexact linear equations y ≈ Φα where y is a given vector in ℝⁿ, Φ is a given n × m matrix, and we wish to find an α₀,ε which is sparse and gives an approximate solution, obeying ‖y − Φα₀,ε‖₂ ≤ ε. In general this requires combinatorial optimization and so is considered intractable. On ..."
Abstract - Cited by 122 (1 self)
. On the other hand, the ℓ1 minimization problem min ‖α‖₁ subject to ‖y − Φα‖₂ ≤ ε is convex, and is considered tractable. We show that for most Φ the solution ˆα₁,ε = ˆα₁,ε(y, Φ) of this problem is quite generally a good approximation for ˆα₀,ε. We suppose that the columns of Φ are normalized to unit ℓ2 norm
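The constrained problem min ‖α‖₁ subject to ‖y − Φα‖₂ ≤ ε is commonly attacked through its Lagrangian (LASSO) form, 0.5‖y − Φα‖₂² + λ‖α‖₁; here is a minimal sketch using plain iterative soft thresholding (ISTA). The substitution of the Lagrangian form and every parameter below are our illustrative choices, not the paper's:

```python
# ISTA for the LASSO stand-in of the noisy l1 problem.
import numpy as np

def ista(Phi, y, lam, iters=500):
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant of the gradient
    alpha = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = alpha - step * Phi.T @ (Phi @ alpha - y)  # gradient step on the quadratic term
        alpha = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return alpha

rng = np.random.default_rng(6)
n, m, k = 25, 50, 3
Phi = rng.standard_normal((n, m))
Phi /= np.linalg.norm(Phi, axis=0)                    # unit l2-norm columns, as above
alpha0 = np.zeros(m)
alpha0[rng.choice(m, k, replace=False)] = np.array([3.0, -2.0, 1.5])
y = Phi @ alpha0 + 0.01 * rng.standard_normal(n)      # inexact linear equations
alpha_hat = ista(Phi, y, lam=0.05)
print(np.linalg.norm(alpha_hat - alpha0))  # small: a good near-sparsest near-solution
```

With λ matched to ε, the Lagrangian solution tracks the constrained one, illustrating the abstract's claim that ˆα₁,ε approximates ˆα₀,ε.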

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University