Results 1–10 of 207,506
Jointly Sparse Vectors, 2008
"... The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrarily high-dimensional space can be accurately recovered from only a small set of nonadaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the o ..."
Compile time sparse vectors in C++, 1997
"... Templates are a powerful feature of C++. In this article a template library for a special class of sparse vectors is outlined. For these vectors, the sparseness structure of the vectors can be arbitrary but must be known at compile time. In this case it suffices to store only the nonzero elements of ..."
Sparse Vector Autoregressive Modeling, 2012
"... The vector autoregressive (VAR) model has been widely used for modeling temporal dependence in a multivariate time series. For large (and even moderate) dimensions, the number of AR coefficients can be prohibitively large, resulting in noisy estimates, unstable predictions and difficult-to-interpret ..."
Cited by 1 (0 self)
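One common way to impose the sparsity this abstract describes (a generic approach, not necessarily the paper's own estimator) is to fit each VAR equation with an ℓ₁-penalized regression. A minimal sketch using scikit-learn's `Lasso` on a simulated VAR(1), where the coefficient matrix `A` and penalty `alpha` are illustrative choices:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
T, d = 200, 5
# Simulate a VAR(1) process x_t = A x_{t-1} + e_t with a sparse coefficient matrix
A = np.zeros((d, d))
A[0, 0], A[1, 0], A[2, 2] = 0.5, 0.4, 0.3
X = np.zeros((T, d))
for t in range(1, T):
    X[t] = A @ X[t - 1] + rng.standard_normal(d)

# One lasso regression per equation: predict x_t[i] from the full lagged vector
Y, Z = X[1:], X[:-1]
A_hat = np.vstack([
    Lasso(alpha=0.1, fit_intercept=False).fit(Z, Y[:, i]).coef_
    for i in range(d)
])
print("nonzero coefficients:", np.count_nonzero(A_hat), "of", d * d)
```

The ℓ₁ penalty zeroes most of the d² coefficients exactly, which is what keeps the estimate interpretable as the dimension grows.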
Efficient recovery of jointly sparse vectors, in NIPS
"... We consider the reconstruction of sparse signals in the multiple measurement vector (MMV) model, in which the signal, represented as a matrix, consists of a set of jointly sparse vectors. MMV is an extension of the single measurement vector (SMV) model employed in standard compressive sensing (CS). ..."
Cited by 15 (0 self)
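The joint sparsity of the MMV model (whole rows of the signal matrix are zero or nonzero together) can be illustrated, outside this paper's own algorithm, with a group penalty: scikit-learn's `MultiTaskLasso` applies an ℓ₂,₁ penalty that zeroes entire rows at once. The measurement matrix, support, and `alpha` below are illustrative:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(4)
m, n, L = 40, 100, 5               # m measurements, n-dim signals, L channels
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
X_true = np.zeros((n, L))
X_true[3], X_true[20], X_true[77] = 1.0, -0.8, 0.5   # shared (joint) support
Y = Phi @ X_true

# The l2,1 penalty keeps rows of X zero or nonzero together, matching MMV
est = MultiTaskLasso(alpha=0.01, fit_intercept=False).fit(Phi, Y)
X_hat = est.coef_.T                 # sklearn stores coef_ as (n_channels, n)
rows = np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 0.1)
print("recovered support rows:", rows)
```

With only 40 measurements of a 100-dimensional signal, the shared support across the 5 channels is what makes recovery possible here.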
Sparse Bayesian Learning and the Relevance Vector Machine, 2001
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classification tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vec ..."
Cited by 958 (5 self)
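scikit-learn does not ship the relevance vector machine itself, but its `ARDRegression` implements the same automatic-relevance-determination prior for a linear model, pruning irrelevant weights toward zero. A minimal sketch on synthetic data (the feature indices and magnitudes are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(1)
n, d = 100, 20
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[[2, 7]] = [3.0, -2.0]        # only two features are actually relevant
y = X @ w_true + 0.1 * rng.standard_normal(n)

# ARD places an individual precision hyperparameter on each weight; weights
# whose evidence does not support them are driven to (near) zero
model = ARDRegression().fit(X, y)
relevant = np.flatnonzero(np.abs(model.coef_) > 0.5)
print("features kept:", relevant)
```

The per-weight hyperpriors are what produce sparsity here, in contrast to a single shared ridge penalty.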
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors, 2008
"... The rapidly developing area of compressed sensing suggests that a sparse vector lying in a high dimensional space can be accurately and efficiently recovered from only a small set of nonadaptive linear measurements, under appropriate conditions on the measurement matrix. The vector model has been ext ..."
Cited by 100 (42 self)
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares, ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ≈ b and min ‖Ax − b‖₂, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Cited by 649 (21 self)
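The LSQR algorithm described here is available in SciPy as `scipy.sparse.linalg.lsqr`; like the paper's method, it touches A only through matrix-vector products, so A can stay sparse. A small consistent system (sizes and density are illustrative) shows the interface:

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
# A large-ish sparse overdetermined system with a known solution
A = sparse_random(500, 100, density=0.05, format="csr", random_state=2)
x_true = rng.standard_normal(100)
b = A @ x_true                      # b lies in the range of A, so min ||Ax-b|| = 0

# lsqr solves min ||Ax - b||_2 via Golub-Kahan bidiagonalization
x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print("iterations:", itn, "residual norm:", r1norm)
```

Because the system is consistent by construction, the returned residual norm `r1norm` is essentially zero; for genuinely overdetermined data it would converge to the least-squares residual instead.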
The ReMBo Algorithm: Accelerated Recovery of Jointly Sparse Vectors
"... We address the problem of recovering a sparse solution of a linear underdetermined system. Two variants of this problem are studied in the literature. One is the case of a sparse vector with only a few nonzero entries, and the other is of a sparse matrix with only a few rows that are not identically zero. In eit ..."
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ¹ minimization, Proc. Natl. Acad. Sci. USA 100:2197–2202, 2002
"... Given a ‘dictionary’ D = {dk} of vectors dk, we seek to represent a signal S as a linear combination S = ∑ k γ(k)dk, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered ..."
Cited by 626 (37 self)
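The ℓ¹ minimization referred to here (basis pursuit) can be posed as a linear program by splitting γ into nonnegative parts, γ = u − v. A sketch using `scipy.optimize.linprog` on a random Gaussian dictionary, which is an illustration of the technique rather than the paper's exact setup:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
n, k = 30, 80                      # 30-dim signal, 80-atom overcomplete dictionary
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)     # unit-norm atoms
g_true = np.zeros(k)
g_true[[5, 40, 61]] = [1.0, -0.8, 0.5]   # a 3-sparse coefficient vector
S = D @ g_true

# Basis pursuit: min ||gamma||_1  s.t.  D gamma = S,
# rewritten as an LP over gamma = u - v with u, v >= 0
c = np.ones(2 * k)
A_eq = np.hstack([D, -D])
res = linprog(c, A_eq=A_eq, b_eq=S, bounds=(0, None))
gamma = res.x[:k] - res.x[k:]
print("recovered support:", np.flatnonzero(np.abs(gamma) > 1e-4))
```

Although finding the sparsest representation is combinatorial in general, for a sufficiently sparse γ and an incoherent dictionary the ℓ¹ relaxation recovers it exactly, which is the paper's central point.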