CiteSeerX

Results 1 - 10 of 737
Possible Numbers of Nonzero Entries in a Matrix with a Given Term Rank

by Yun Zhang, et al. - ELA, 2014

Abstract: The possible numbers of nonzero entries in a matrix with a given term rank are determined in the generic case, the symmetric case, and the symmetric case with 0's on the main diagonal, respectively. The matrices that attain the largest number of nonzero entries are also determined.

Lower Bound Theory of Nonzero Entries in Solutions of ℓ2-ℓp Minimization

by Xiaojun Chen, Fengmin Xu, Yinyu Ye, 2009 (Cited by 25, 6 self)

Abstract: Recently, variable selection and sparse reconstruction have been solved by finding an optimal solution of a minimization model whose objective function is the sum of a data-fitting term in the ℓ2 norm and a regularization term in the ℓp norm (0 < p < 1). In this model, being able to classify zero and nonzero entries in its local solutions is a very important task. However, most algorithms for solving the problem can only provide an approximate local optimal solution, where nonzero entries in the solution cannot be identified theoretically. In this paper, we establish lower bounds ...

Signal recovery from random measurements via Orthogonal Matching Pursuit

by Joel A. Tropp, Anna C. Gilbert - IEEE Trans. Inform. Theory, 2007 (Cited by 802, 9 self)

Abstract: This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous ...
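The greedy loop behind OMP is short enough to sketch. The following is a minimal NumPy illustration of the idea named in the abstract (our own sketch, not the authors' reference code): at each step, pick the column of A most correlated with the current residual, then re-fit b on all selected columns by least squares.

```python
import numpy as np

def omp(A, b, m, tol=1e-10):
    """Recover an m-sparse x with A @ x ≈ b by Orthogonal Matching Pursuit:
    greedily add the column of A most correlated with the residual, then
    re-fit b on all selected columns by least squares."""
    d = A.shape[1]
    support = []                 # indices of the columns chosen so far
    residual = b.astype(float)   # copy; shrinks as columns are added
    coef = np.array([])
    for _ in range(m):
        # Column with the largest |inner product| with the current residual
        k = int(np.argmax(np.abs(A.T @ residual)))
        support.append(k)
        # Orthogonal projection step: least squares on the chosen columns
        coef, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        residual = b - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x = np.zeros(d)
    x[support] = coef
    return x
```

With Gaussian measurement matrices and n on the order of m ln d, this typically recovers the support exactly, which is the regime the paper analyzes.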

Maximum exponent of Boolean circulant matrices with constant number of nonzero entries in their generating vector

by M. I. Bueno, S. Furtado, N. Sherer, 2009 (Cited by 2, 2 self)

Abstract not found.

From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images

by Alfred M. Bruckstein, David L. Donoho, Michael Elad, 2007 (Cited by 427, 36 self)

Abstract: A full-rank matrix A ∈ ℝ^(n×m) with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries: can it ever be unique? If so, when? As optimization of sparsity is combinatorial ...

Sparsity and Incoherence in Compressive Sampling

by Emmanuel Candes, Justin Romberg, 2006 (Cited by 238, 13 self)

Abstract: We consider the problem of reconstructing a sparse signal x0 ∈ ℝ^n from a limited number of linear measurements. Given m randomly selected samples of Ux0, where U is an orthonormal matrix, we show that ℓ1 minimization recovers x0 exactly when the number of measurements exceeds m ≥ Const · µ²(U) · S · log n, where S is the number of nonzero components in x0, and µ(U) is the largest entry in U properly normalized: µ(U) = √n · max_{k,j} |U_{k,j}|. The smaller µ(U), the fewer samples needed. The result holds for "most" sparse signals x0 supported on a fixed (but arbitrary) set T. Given T, if the sign of x0 ...
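The coherence µ(U) = √n · max_{k,j} |U_{k,j}| quoted in the abstract is easy to evaluate numerically. As an illustration of ours (not taken from the paper), the discrete Fourier basis attains µ(U) = 1, the smallest possible value, so the sampling bound m ≥ Const · µ²(U) · S · log n is at its most favorable:

```python
import numpy as np

def coherence(U):
    """Coherence of an orthonormal basis: µ(U) = √n · max_{k,j} |U_{k,j}|."""
    n = U.shape[0]
    return np.sqrt(n) * np.abs(U).max()

# Unitary discrete Fourier basis: every entry has modulus 1/√n,
# so the coherence is exactly 1.
n = 8
k, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
U = np.exp(2j * np.pi * k * j / n) / np.sqrt(n)
```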

Parallel Preconditioning with Sparse Approximate Inverses

by Marcus J. Grote, Thomas Huckle - SIAM J. Sci. Comput., 1996 (Cited by 226, 10 self)

Abstract: A parallel preconditioner is presented for the solution of general sparse linear systems of equations. A sparse approximate inverse is computed explicitly and then applied as a preconditioner to an iterative method. The computation of the preconditioner is inherently parallel, and its application only requires a matrix-vector product. The sparsity pattern of the approximate inverse is not imposed a priori but captured automatically. This keeps the amount of work and the number of nonzero entries in the preconditioner to a minimum. Rigorous bounds on the clustering of the eigenvalues ...

Signal recovery from partial information via Orthogonal Matching Pursuit

by Joel A. Tropp, Anna C. Gilbert - IEEE Trans. Inform. Theory, 2005 (Cited by 191, 8 self)

Abstract: This article demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results ...

Sparse matrices in Matlab: Design and implementation

by John R. Gilbert, Cleve Moler, Robert Schreiber, 1991 (Cited by 164, 22 self)

Abstract: We have extended the matrix computation language and environment Matlab to include sparse matrix storage and operations. The only change to the outward appearance of the Matlab language is a pair of commands to create full or sparse matrices. Nearly all the operations of Matlab now apply equally to full or sparse matrices, without any explicit action by the user. The sparse data structure represents a matrix in space proportional to the number of nonzero entries, and most of the operations compute sparse results in time proportional to the number of arithmetic operations on nonzeros.
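The space-proportional-to-nonzeros layout described in the abstract is, in spirit, compressed-column storage. A toy Python sketch of that idea (our own illustration, not Matlab's actual implementation): only the nonzero values, their row indices, and one pointer per column are kept.

```python
import numpy as np

def dense_to_csc(M):
    """Compressed sparse column (CSC) storage: space proportional to the
    number of nonzero entries, plus one column pointer per column."""
    n_rows, n_cols = M.shape
    values, row_idx, col_ptr = [], [], [0]
    for jcol in range(n_cols):
        for irow in range(n_rows):
            if M[irow, jcol] != 0:
                values.append(M[irow, jcol])   # nonzero value
                row_idx.append(irow)           # its row index
        col_ptr.append(len(values))            # end of this column's slice
    return values, row_idx, col_ptr
```

Column j of the matrix is then recoverable from the slice `values[col_ptr[j]:col_ptr[j+1]]` together with the matching row indices, which is what makes column-wise operations cheap in this layout.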

Block-sparse signals: Uncertainty relations and efficient recovery

by Yonina C. Eldar, Patrick Kuppinger, Helmut Bölcskei - IEEE Trans. Signal Process., 2010 (Cited by 161, 17 self)

Abstract: We consider efficient methods for the recovery of block-sparse signals, i.e., sparse signals that have nonzero entries occurring in clusters, from an underdetermined system of linear equations. An uncertainty relation for block-sparse signals is derived, based on a block-coherence measure, which we ...

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University