Results 1 - 10 of 466
For Most Large Underdetermined Systems of Linear Equations the Minimal ℓ1-norm Solution is also the Sparsest Solution
 Comm. Pure Appl. Math
, 2004
"... We consider linear equations y = Φα where y is a given vector in R^n, Φ is a given n by m matrix with n < m ≤ An, and we wish to solve for α ∈ R^m. We suppose that the columns of Φ are normalized to unit ℓ2-norm and we place uniform measure on such Φ. We prove the existence of ρ = ρ(A) so that ..."
Cited by 568 (10 self)
. In contrast, heuristic attempts to sparsely solve such systems – greedy algorithms and thresholding – perform poorly in this challenging setting. The techniques include the use of random proportional embeddings and almost-spherical sections in Banach space theory, and deviation bounds for the eigenvalues ...
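The ℓ1-minimization approach described in this abstract can be illustrated with off-the-shelf tools. The following is a minimal sketch (not the paper's own code), assuming a Gaussian Φ, that casts min ‖α‖1 subject to Φα = y as a linear program via scipy.optimize.linprog:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(Phi, y):
    """min ||a||_1 subject to Phi @ a = y, via the LP split a = ap - am, ap, am >= 0."""
    n, m = Phi.shape
    c = np.ones(2 * m)                    # objective: sum(ap) + sum(am) = ||a||_1 at optimum
    A_eq = np.hstack([Phi, -Phi])         # Phi @ ap - Phi @ am = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    return res.x[:m] - res.x[m:]

# Demo: a 2-sparse vector measured by a 10 x 30 Gaussian matrix (illustrative sizes).
rng = np.random.default_rng(0)
Phi = rng.standard_normal((10, 30))
a0 = np.zeros(30)
a0[[3, 17]] = [1.5, -2.0]
a_hat = basis_pursuit(Phi, Phi @ a0)
print(np.abs(a_hat).sum(), np.abs(a0).sum())
```

Since a0 is feasible for the LP, the returned solution's ℓ1 norm can never exceed ‖a0‖1; in the regime the paper studies, the minimizer is typically a0 itself.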
Underdetermined systems of equations
, 2013
yk = 〈x0, φk〉, k = 1, ..., M, or y = Φx0: fewer measurements than degrees of freedom, M < N. Treat acquisition as a linear inverse problem. Compressive sampling: for sparse x0, we can "invert" an incoherent Φ.

Sparse recovery: given M linear measurements of an S-sparse signal, y = Φx0 + noise, when can we recover x0? Key condition: the matrix Φ is a restricted isometry, (1 − δ)‖x‖₂² ≤ ‖Φx‖₂² ≤ (1 + δ)‖x‖₂² for all 2S-sparse x [Candès and Tao '06].

Random matrices. Example: Φ_i,j ∼ ±1 w/ prob 1/2, iid. We can recover an S-sparse x0 from M ≳ S · log(N/S) measurements using convex programming, greedy algorithms, and in particular ℓ1 minimization: min ‖x‖₁ subject to Φx = y. If x0 = Ψα0 is S-sparse in a basis Ψ: min ‖α‖₁ subject to ΦΨα = y. From M ≳ S · log(N/S) noisy measurements we can stably recover an approximately S-sparse x0 via min λ‖α‖₁ + ½‖ΦΨα − y‖₂² for appropriate λ.

Sparsity: decompose a signal/image x(t) in an orthobasis {ψi(t)}_i as x(t) = Σ_i αi ψi(t), e.g. the wavelet transform. Wavelet approximation: keeping the largest 1 % of coefficients and setting the rest to zero (adaptive) gives rel. error = 0.031 on the example image.

Integrating compression and sensing. Goal: a dynamical framework for sparse recovery: given y and Φ, solve min_x λ‖x‖₁ + ½‖Φx − y‖₂².
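The noisy formulation min_x λ‖x‖₁ + ½‖Φx − y‖₂² listed above can be sketched with iterative soft-thresholding (ISTA). This is an assumed minimal implementation, not code from the slides; the sizes and λ are illustrative:

```python
import numpy as np

def ista(A, y, lam, iters=500):
    """Iterative soft-thresholding for min_x lam*||x||_1 + 0.5*||A @ x - y||_2^2."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, with L the Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))     # gradient step on the smooth quadratic term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # prox of lam*||.||_1
    return x

# Demo in the slides' setting: entries of Phi are +/-1 with probability 1/2, iid.
rng = np.random.default_rng(0)
Phi = rng.choice([-1.0, 1.0], size=(40, 100))
x0 = np.zeros(100)
x0[[5, 30, 77]] = [2.0, -1.0, 3.0]
y = Phi @ x0 + 0.01 * rng.standard_normal(40)
x_hat = ista(Phi, y, lam=0.1)
```

With step 1/L the iteration decreases the objective monotonically from the zero initialization, which is the "dynamical" flavor the slides allude to.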
Improved error bounds for underdetermined system solvers
 SIAM J. Matrix Anal. Appl
, 1993
"... The minimal 2-norm solution to an underdetermined system Ax = b of full rank can be computed using a QR factorization of A^T in two different ways. One requires storage and reuse of the orthogonal matrix Q while the method of seminormal equations does not. Existing error analyses show that both methods ..."
Cited by 15 (1 self)
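The two QR-based methods this abstract compares can be sketched as follows. This is an illustrative NumPy rendering under the stated assumptions (n < m, full row rank), not the paper's implementation or its error analysis:

```python
import numpy as np

def min_norm_qr(A, b):
    """Minimal 2-norm solution of A @ x = b (n < m, full row rank), storing Q."""
    Q, R = np.linalg.qr(A.T)            # A^T = Q R, so A = R^T Q^T
    z = np.linalg.solve(R.T, b)         # forward-substitute R^T z = b
    return Q @ z                        # x = Q z lies in range(A^T): the minimal-norm solution

def min_norm_seminormal(A, b):
    """Same solution via the seminormal equations R^T R w = b; Q need not be stored."""
    R = np.linalg.qr(A.T, mode="r")     # keep only the triangular factor
    w = np.linalg.solve(R.T @ R, b)     # A A^T = (R^T Q^T)(Q R) = R^T R
    return A.T @ w

# Demo on a random well-conditioned 4 x 9 system.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 9))
b = rng.standard_normal(4)
x1 = min_norm_qr(A, b)
x2 = min_norm_seminormal(A, b)
```

Both agree with the pseudoinverse solution A⁺b here; the paper's point is that their rounding-error behavior differs, which this sketch does not capture.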
Sparse Signal Recovery and Dynamic Update of the Underdetermined System
"... Abstract—Sparse signal priors help in a variety of modern signal processing tasks. In many cases, a sparse signal needs to be recovered from an underdetermined system of equations. For instance, sparse approximation of a signal with an overcomplete dictionary or reconstruction of a sparse signal from ..."
Cited by 1 (1 self)
LEAST-CHANGE SECANT UPDATE METHODS FOR UNDERDETERMINED SYSTEMS*
"... Abstract. Least-change secant updates for nonsquare matrices have been addressed recently in [6]. Here the use of these updates in iterative procedures for the numerical solution of underdetermined systems is considered. The model method is the normal flow algorithm used in homotopy or continuation ..."
methods in the usual case of an equal number of equations and unknowns. This in turn gives a local convergence analysis for augmented Jacobian algorithms which use least-change secant updates. In conclusion, the results of some numerical experiments are given. Key words: underdetermined systems, least-change ...
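A least-change secant (Broyden-type) update for nonsquare matrices, of the kind this abstract builds on, can be sketched as follows; the function name and demo are illustrative, not taken from the paper:

```python
import numpy as np

def least_change_secant_update(B, s, dF):
    """Broyden-style least-change update: the closest matrix to B in the
    Frobenius norm satisfying the secant equation B_new @ s = dF.
    Works for nonsquare B (n x m), step s (m,), residual change dF (n,)."""
    return B + np.outer(dF - B @ s, s) / (s @ s)

# Demo: after the update, the new Jacobian approximation matches the
# observed change dF = F(x + s) - F(x) along the step s.
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 5))     # 3 equations, 5 unknowns: underdetermined
s = rng.standard_normal(5)
dF = rng.standard_normal(3)
B_new = least_change_secant_update(B, s, dF)
```

By construction B_new @ s = dF exactly, while B_new differs from B only by a rank-one correction, the "least change" property.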
On the Uniqueness of Nonnegative Sparse Solutions to Underdetermined Systems of Equations
, 2008
"... An underdetermined linear system of equations Ax = b with the nonnegativity constraint x ≥ 0 is considered. It is shown that for matrices A with a row-span intersecting the positive orthant, if this problem admits a sufficiently sparse solution, it is necessarily unique. The bound on the required sparsity ..."
Cited by 44 (0 self)
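A small numerical illustration of the setting in this abstract, using nonnegative least squares (scipy.optimize.nnls) as the solver. The matrix construction is an assumption chosen so the row-span meets the positive orthant; NNLS itself is not the paper's method:

```python
import numpy as np
from scipy.optimize import nnls

# A @ x = b with x >= 0. The rows of A are entrywise positive vectors, so the
# row-span clearly intersects the positive orthant, the setting of the result.
rng = np.random.default_rng(1)
A = np.abs(rng.standard_normal((8, 20)))
x0 = np.zeros(20)
x0[[2, 11]] = [1.0, 3.0]          # a sparse nonnegative solution
b = A @ x0
x_hat, resid = nnls(A, b)          # min ||A x - b||_2 subject to x >= 0
```

Because an exact nonnegative solution exists, the NNLS residual is essentially zero; when the sparsity bound of the paper holds, that zero-residual nonnegative solution is unique, so any solver that reaches it recovers x0.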
Metrics and Norms Used for Obtaining Sparse Solutions to Underdetermined Systems of Linear Equations
, 2014
"... This paper focuses on defining a measure appropriate for obtaining optimally sparse solutions to underdetermined systems of linear equations. The general idea is the extension of metrics in n-dimensional spaces via the Cartesian product of metric spaces. ..."
Which is Implicitly Defined By an Underdetermined System of Equations.
"... This article gives an introduction to the main ideas of numerical path following and presents some advances in the subject regarding adaptations, applications, analysis of efficiency, and complexity. Both theoretical and implementation aspects of predictor-corrector path-following methods are thoroughly discussed. Piecewise-linear methods are also studied. At the end, a list of some available software related to path following is provided. ..."