Results 1–10 of 30
Highly undersampled magnetic resonance image reconstruction via homotopic ℓ0-minimization
IEEE Trans. Med. Imaging, 2009
Cited by 77 (1 self)
Abstract:
Any reduction in scan time offers a number of potential benefits, ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in Compressive Sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex ℓ1-minimization problem. Although ℓ1-based techniques are extremely powerful, they inherently require a degree of oversampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the Compressive Sensing paradigm based on homotopic approximation of the ℓ0 quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard Compressive Sensing methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented.
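The homotopic idea can be sketched with a generic iteratively reweighted least-squares loop in which a smoothing parameter σ is gradually driven toward zero, so a smooth surrogate of the ℓ0 quasi-norm tightens as the iteration proceeds. This is an illustrative stand-in for the approach described in the abstract, not the authors' algorithm; all function and parameter names below are ours.

```python
import numpy as np

def irls_homotopic_l0(A, b, sigma0=1.0, decay=0.5, outer=10, inner=4):
    """Sketch: minimize a smooth surrogate of ||x||_0 subject to Ax = b by
    solving reweighted minimum-norm problems while shrinking sigma (the homotopy)."""
    x = A.T @ np.linalg.solve(A @ A.T, b)        # minimum-l2 feasible start
    sigma = sigma0
    for _ in range(outer):
        for _ in range(inner):
            d = x**2 + sigma**2                   # inverse weights of the surrogate
            # weighted minimum-norm solution, still satisfying Ax = b exactly
            x = d * (A.T @ np.linalg.solve(A @ (d[:, None] * A.T), b))
        sigma *= decay                            # sigma -> 0 drives the surrogate to l0
    return x

rng = np.random.default_rng(0)
m, n, k = 30, 40, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.5, k)
b = A @ x_true
x_hat = irls_homotopic_l0(A, b)
```

Each inner step is a closed-form weighted minimum-norm solve, so every iterate remains exactly feasible; sparsity emerges as the weights on small coordinates blow up.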
Analysis and generalizations of the linearized Bregman method
SIAM J. Imaging Sci., 2010
Cited by 39 (10 self)
Abstract:
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm enabling the use of gradient-based optimization techniques such as line search, Barzilai–Borwein, limited-memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov's methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with a so-called kicking technique. Key words: Bregman, linearized Bregman, compressed sensing, ℓ1-minimization, basis pursuit
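The dual-formulation view in the abstract can be sketched concretely: for the regularized problem min α‖x‖₁ + ½‖x‖₂² subject to Ax = b, plain gradient ascent on the dual reproduces the linearized Bregman iteration, and for α large enough the limit coincides with a basis pursuit solution (exact regularization). This is a minimal sketch with illustrative parameters, not the paper's tuned implementations.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding, the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, n_iter=20000):
    """Gradient ascent on the dual of  min alpha*||x||_1 + 0.5*||x||^2  s.t. Ax = b.
    The primal iterate x = shrink(A^T y, alpha) is the linearized Bregman sequence."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2         # safe step size (1 / Lipschitz const.)
    y = np.zeros(A.shape[0])
    for _ in range(n_iter):
        x = shrink(A.T @ y, alpha)                # primal iterate
        y += tau * (b - A @ x)                    # dual gradient step
    return shrink(A.T @ y, alpha)

rng = np.random.default_rng(1)
m, n, k = 25, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.uniform(0.5, 1.5, k)
b = A @ x_true
# exact regularization: for alpha large enough the limit is the basis pursuit solution
x_hat = linearized_bregman(A, b, alpha=10 * np.abs(x_true).max())
```

Because the dual objective is smooth, any of the accelerated schemes named in the abstract (Barzilai–Borwein, L-BFGS, Nesterov) can replace the plain gradient step above.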
Compressed Sensing with Cross Validation
IEEE Transactions on Information Theory, 2009
Cited by 26 (3 self)
Abstract:
Compressed Sensing decoding algorithms can efficiently recover an N-dimensional real-valued vector x to within a factor of its best k-term approximation by taking m = 2k log(N/k) measurements y = Φx. If the sparsity or approximate sparsity level of x were known, then this theoretical guarantee would imply quality assurance of the resulting compressed sensing estimate. However, because the underlying sparsity of the signal x is unknown, the quality of a compressed sensing estimate x̂ using m measurements is not assured. Nevertheless, we demonstrate that sharp bounds on the error ‖x − x̂‖_{ℓ₂^N} can be achieved with almost no effort. More precisely, we assume that a maximum number of measurements m is pre-imposed; we reserve 4 log p of the original m measurements and compute a sequence of possible estimates (x̂_j)_{j=1}^p to x from the m − 4 log p remaining measurements; the errors ‖x − x̂_j‖_{ℓ₂^N} for j = 1, ..., p can then be bounded with high probability. As a consequence, numerical upper and lower bounds on the error between x and the best k-term approximation to x can be estimated for p values of k with almost no cost. Our observation has applications outside of compressed sensing as well.
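The reserve-and-validate idea can be sketched in a few lines: hold out a small block of measurements, decode from the rest, and use the held-out residual as a computable proxy for the unknown error. The decoder below (thresholded least squares with a trial sparsity level) is a hypothetical stand-in; the split sizes are illustrative rather than the abstract's 4 log p budget.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k = 100, 4
m_fit, m_cv = 40, 12                      # m_cv plays the role of the reserved rows

x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.uniform(1.0, 2.0, k)
Phi_fit = rng.standard_normal((m_fit, N)) / np.sqrt(m_fit)   # decoding measurements
Phi_cv = rng.standard_normal((m_cv, N)) / np.sqrt(m_cv)      # held-out measurements
y_fit, y_cv = Phi_fit @ x, Phi_cv @ x

def cv_error(x_hat):
    """||Phi_cv (x - x_hat)||_2 concentrates around ||x - x_hat||_2, so the
    held-out residual estimates the unknown decoding error."""
    return np.linalg.norm(y_cv - Phi_cv @ x_hat)

def decode(trial_k):
    """Hypothetical decoder family indexed by a trial sparsity level."""
    ls = Phi_fit.T @ np.linalg.solve(Phi_fit @ Phi_fit.T, y_fit)   # min-norm fit
    keep = np.argsort(np.abs(ls))[-trial_k:]
    out = np.zeros(N)
    out[keep] = np.linalg.lstsq(Phi_fit[:, keep], y_fit, rcond=None)[0]
    return out

# the cross-validated error proxy, evaluated for several trial sparsity levels
proxies = {tk: cv_error(decode(tk)) for tk in (1, 2, 4, 8)}
```

Because Phi_cv is independent of each estimate, the proxy tracks the true error up to the concentration of a chi-square with m_cv degrees of freedom, which is the mechanism behind the high-probability bounds.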
Robust sampling and reconstruction methods for sparse signals in the presence of impulsive noise
2010
Cited by 21 (2 self)
Abstract:
Recent results in compressed sensing show that a sparse or compressible signal can be reconstructed from a few incoherent measurements. Since noise is always present in practical data acquisition systems, sensing and reconstruction methods are developed assuming a Gaussian (light-tailed) model for the corrupting noise. However, when the underlying signal and/or the measurements are corrupted by impulsive noise, commonly employed linear sampling operators, coupled with current reconstruction algorithms, fail to recover a close approximation of the signal. In this paper, we propose robust methods for sampling and reconstructing sparse signals in the presence of impulsive noise. To solve the problem of impulsive noise embedded in the underlying signal prior to the measurement process, we propose a robust nonlinear measurement operator based on the weighted myriad estimator. In addition, we introduce a geometric optimization problem based on ℓ1 minimization employing a Lorentzian norm constraint on the residual error to recover sparse signals from noisy measurements. Analysis of the proposed methods shows that in impulsive environments, when the noise possesses infinite variance, we obtain a finite reconstruction error and, furthermore, these methods yield successful reconstruction of the desired signal. Simulations demonstrate that the proposed methods significantly outperform commonly employed compressed sensing sampling and reconstruction techniques in impulsive environments, while providing comparable performance in less demanding, light-tailed environments.
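The two robust ingredients can be sketched in a few lines: the Lorentzian norm, which grows only logarithmically on large residuals, and a weighted myriad location estimate, found here by a simple grid search. This is an illustrative implementation with our own names and parameters, not the paper's solver.

```python
import numpy as np

def lorentzian_norm(u, gamma=1.0):
    """||u||_{LL2,gamma} = sum_i log(1 + u_i^2 / gamma^2).
    Logarithmic growth on large entries makes it insensitive to impulses."""
    return float(np.sum(np.log1p((np.asarray(u) / gamma) ** 2)))

def weighted_myriad(x, w, gamma=1.0, n_grid=4001):
    """argmin_beta sum_i log(gamma^2 + w_i * (x_i - beta)^2), via grid search
    over [min(x), max(x)] (simple and robust for a 1-D location estimate)."""
    grid = np.linspace(x.min(), x.max(), n_grid)
    cost = [np.sum(np.log(gamma**2 + w * (x - b) ** 2)) for b in grid]
    return grid[int(np.argmin(cost))]

samples = np.array([0.9, 1.0, 1.1, 1.05, 100.0])   # one gross outlier
beta = weighted_myriad(samples, np.ones_like(samples))
```

The sample mean of these data is pulled near 21 by the single impulse, while the myriad stays near the cluster at 1, which is exactly the robustness property the measurement operator exploits.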
A compressive sensing data acquisition and imaging method for stepped frequency GPRs
IEEE Trans. Geosci. Remote Sens., 2009
Cited by 18 (1 self)
Abstract:
A novel data acquisition and imaging method is presented for stepped-frequency continuous-wave ground-penetrating radars (SFCW GPRs). It is shown that if the target space is sparse, i.e., contains a small number of point-like targets, it is enough to make measurements at only a small number of random frequencies to construct an image of the target space by solving a convex optimization problem which enforces sparsity through ℓ1 minimization. This measurement strategy greatly reduces the data acquisition time at the expense of higher computational costs. Imaging results for both simulated and experimental GPR data exhibit less clutter than the standard migration methods and are robust to noise and random spatial sampling. The images also have increased resolution: closely spaced targets that cannot be resolved by the standard migration methods can be resolved by the proposed method. Index Terms—Compressive sensing, ℓ1 minimization, ground-penetrating radar (GPR), sparsity, stepped-frequency systems, subsurface imaging.
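The measurement model can be sketched in one dimension: an SFCW radar that samples a random subset of frequencies amounts to a random row-subset of a Fourier-type dictionary over a delay grid, and a generic ℓ1 solver recovers point targets from those few frequencies. The grid sizes and the ISTA solver below are our illustrative choices, standing in for whichever convex solver the paper uses.

```python
import numpy as np

rng = np.random.default_rng(3)
n_delay = n_freq = 64
n_meas = 20                                   # only 20 of 64 stepped frequencies measured

# column d: frequency response exp(-2j*pi*f*tau) of a point target at delay tau
F = np.exp(-2j * np.pi * np.outer(np.arange(n_freq), np.arange(n_delay)) / n_freq)
F /= np.sqrt(n_freq)                          # unitary scaling

x = np.zeros(n_delay)
x[[7, 30]] = [1.0, 0.6]                       # two point-like targets on the delay grid
sel = np.sort(rng.choice(n_freq, n_meas, replace=False))
A, y = F[sel], F[sel] @ x                     # measurements at random frequencies only

# generic ISTA for min 0.5*||Az - y||^2 + lam*||z||_1
lam = 0.01
z = np.zeros(n_delay, dtype=complex)
for _ in range(2000):
    g = z - A.conj().T @ (A @ z - y)          # gradient step (||A|| <= 1 here)
    z = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - lam, 0.0)  # complex soft-threshold
```

The recovered image |z| concentrates on the two target delays despite using less than a third of the frequency sweep, which is the data-acquisition saving described in the abstract.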
Instance optimal decoding by thresholding in compressed sensing
Cited by 14 (1 self)
Abstract:
Compressed Sensing seeks to capture a discrete signal x ∈ ℝ^N with a small number n of linear measurements. The information captured about x from such measurements is given by the vector y = Φx ∈ ℝ^n, where Φ is an n × N matrix. The best matrices, from the viewpoint of capturing sparse or compressible signals, are generated by random processes, e.g., their entries are given by i.i.d. Bernoulli or Gaussian random variables. The information y holds about x is extracted by a decoder ∆ mapping ℝ^n into ℝ^N. Typical decoders are based on ℓ1-minimization and greedy pursuit. The present paper studies the performance of decoders based on thresholding. For quite general random families of matrices Φ, decoders ∆ are constructed which are instance-optimal in probability, by which we mean the following. If x is any vector in ℝ^N, then with high probability applying ∆ to y = Φx gives a vector x̄ := ∆(y) such that ‖x − x̄‖ ≤ C₀ σ_k(x)_{ℓ₂} for all k ≤ an/log N, provided a is sufficiently small (depending on the probability of failure). Here σ_k(x)_{ℓ₂} is the error that results when x is approximated by the k-sparse vector which equals x in its k largest coordinates and is otherwise zero. It is also shown that results of this type continue to hold even if the measurement vector y is corrupted by additive noise: y = Φx + e, where e is some noise vector. In this case σ_k(x)_{ℓ₂} is replaced by σ_k(x)_{ℓ₂} + ‖e‖_{ℓ₂}.
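A minimal decoder of the thresholding kind studied here: correlate Φ^T y, keep the k largest coordinates, then solve least squares on that support. This one-step sketch (our names, with a generously oversampled toy problem so the single step is reliable) illustrates the mechanism; the paper's decoders and constants are more refined.

```python
import numpy as np

def threshold_decode(Phi, y, k):
    """Keep the k coordinates where |Phi^T y| is largest, then solve
    least squares restricted to that support (one-step thresholding)."""
    support = np.argsort(np.abs(Phi.T @ y))[-k:]
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
    return x_hat

rng = np.random.default_rng(4)
n_meas, N, k = 300, 100, 3            # oversampled so the toy run succeeds comfortably
Phi = rng.standard_normal((n_meas, N)) / np.sqrt(n_meas)   # i.i.d. Gaussian, as in the abstract
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = np.array([1.0, -1.0, 1.0])
y = Phi @ x
x_hat = threshold_decode(Phi, y, k)
```

Once the correlation step identifies the true support, the restricted least-squares solve recovers the coefficients exactly in the noiseless case; the paper's analysis quantifies when this happens at far smaller n.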
CMOS COMPRESSED IMAGING BY RANDOM CONVOLUTION
Cited by 10 (1 self)
Abstract:
We present a CMOS imager with built-in capability to perform Compressed Sensing coding by Random Convolution. It is achieved by a shift register set in a pseudorandom configuration. It acts as a convolutive filter on the imager focal plane, the current issued from each CMOS pixel undergoing a pseudorandom redirection controlled by each component of the filter sequence. A pseudorandom triggering of the ADC reading is finally applied to complete the acquisition model. The feasibility of the imager and its robustness to noise and nonlinearities have been confirmed by computer simulations, as have the reconstruction tools supporting the Compressed Sensing theory.
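The acquisition chain can be modeled in software as a linear operator: circular convolution with a pseudorandom ±1 sequence (the shift-register filter), followed by a pseudorandom subset of ADC reads. The sketch below (all names and sizes ours) builds this operator with an FFT and checks it against the explicit random-convolution matrix.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 32
h = rng.choice([-1.0, 1.0], size=n)              # pseudorandom shift-register sequence
keep = np.sort(rng.choice(n, m, replace=False))  # pseudorandom ADC triggering pattern

def sense(img):
    """y = R C x: circular convolution with h (computed via FFT), then keep m samples."""
    conv = np.real(np.fft.ifft(np.fft.fft(img) * np.fft.fft(h)))
    return conv[keep]

# explicit m-by-n view of the same operator: C[i, j] = h[(i - j) mod n], rows subsampled
C = np.array([[h[(i - j) % n] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)                       # a test "image" (flattened)
```

Because the operator is a subsampled circulant matrix, reconstruction can use any standard Compressed Sensing solver with FFT-fast forward and adjoint applications.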
On the Relation between Sparse Reconstruction and Parameter Estimation with Model Order Selection
Cited by 9 (4 self)
Abstract:
We examine the relationship between the classic problem of continuous parametric modeling and sparse reconstruction. Sparse reconstruction techniques have been successfully applied to a number of problems in signal and image modeling and reconstruction. These techniques apply to applications in which a measurement can be described as a linear combination of a small number of discrete additive components. In the sparse reconstruction context, order selection and parameter estimation are accomplished through a regularized least-squares algorithm that prefers sparse solutions. Sparse reconstruction is closely related to compressed sensing, and recent results in the compressed sensing literature have provided fast reconstruction algorithms with guaranteed performance bounds for problems with certain structure. Parameter estimation problems, by contrast, typically involve a model in which the signal is composed of a small but unknown number of parameterized functions; model estimation entails both model order selection and parameter estimation, the latter usually involving a nonlinear optimization problem. In this paper we show an explicit connection between the two problem formulations and demonstrate how sparse reconstruction may be used to solve traditional continuous parameter estimation problems and unknown model order estimation problems. We further demonstrate that the structural assumptions used in compressive sensing (namely the Restricted Isometry Property) to guarantee reconstruction performance are not satisfied in the parameter estimation ...
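The connection can be made concrete with a toy frequency-estimation problem: sample the continuous frequency parameter on a grid, stack the resulting components as dictionary columns, and let a sparse solver perform order selection and parameter estimation jointly. The setup below is our illustration (an orthogonal cosine dictionary, chosen so the sparse solve is clean), not the paper's examples.

```python
import numpy as np

t = np.arange(64) / 64.0
grid = np.arange(1, 21)                       # sampled values of the frequency parameter
D = np.cos(2 * np.pi * np.outer(t, grid)) / np.sqrt(32.0)   # dictionary (orthonormal here)

# signal with unknown model order (2) and unknown parameters (frequencies 4 and 9)
y = np.cos(2 * np.pi * 4 * t) + 0.6 * np.cos(2 * np.pi * 9 * t)

# regularized least squares via ISTA: the sparsity penalty does the order selection
lam = 0.05
c = np.zeros(len(grid))
for _ in range(500):
    g = c - D.T @ (D @ c - y)                 # gradient step (||D|| = 1 for this dictionary)
    c = np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)   # soft-threshold

support = np.flatnonzero(np.abs(c) > 0.1)
est_order, est_freqs = len(support), grid[support]      # order and parameters, jointly
```

The support size is the estimated model order and the grid values at the support are the parameter estimates; the paper's point is what happens to guarantees like the RIP when the underlying parameter is genuinely continuous rather than on-grid.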
On the relation between sparse sampling and parametric estimation
In IEEE 13th DSP Workshop and 5th Sig. Proc. Workshop (DSP/SPE 2009), 2009
Cited by 4 (4 self)
Abstract:
We consider the relationship between parameter estimation of an additive model and sparse inversion of an underdetermined matrix (dictionary) in a linear system. The dictionary is constructed by sampling the parameters of the additive model. Parameters and model order are estimated using regularized least-squares inversion. We investigate equispaced and Fisher-information-inspired parameter sampling methods for dictionary construction, and present an example quantifying parameter estimation error performance for the different sampling methods. These results indicate that estimation performance is degraded by sampling the parameter space either too finely or too coarsely. Index Terms—Parameter estimation, model order estimation, sparse reconstruction
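The too-fine/too-coarse tradeoff has a simple diagnostic: the mutual coherence of the sampled dictionary. A sketch for a decaying-exponential component family (our toy family and grids, not the paper's experiment):

```python
import numpy as np

def dictionary_coherence(param_grid, t):
    """Maximum normalized inner product between distinct atoms of an
    exponential dictionary built by sampling the decay-rate parameter."""
    D = np.exp(-np.outer(t, param_grid))
    D = D / np.linalg.norm(D, axis=0)         # unit-norm atoms
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)                  # ignore self-correlations
    return float(G.max())

t = np.linspace(0.0, 1.0, 50)
mu_coarse = dictionary_coherence(np.linspace(0.5, 20.0, 8), t)
mu_fine = dictionary_coherence(np.linspace(0.5, 20.0, 200), t)
# finer parameter sampling -> near-duplicate atoms -> coherence approaches 1 and the
# inversion becomes ill-posed; too coarse and no atom matches the true parameter
```

This is one way to see the degradation reported in the abstract: the fine grid drives coherence toward 1, while the coarse grid trades conditioning for on-grid parameter error.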
Cross validation in compressed sensing via the Johnson–Lindenstrauss lemma
Preprint, 2008
Cited by 1 (0 self)
Abstract:
Compressed Sensing decoding algorithms aim to reconstruct an unknown N-dimensional vector x from m < N given measurements y = Φx, with an assumed sparsity constraint on x. All algorithms presently are iterative in nature, producing a sequence of approximations (s₁, s₂, ...) until a certain algorithm-specific stopping criterion is reached at iteration j*, at which point the estimate x̂ = s_{j*} is returned as an approximation to x. In many algorithms, the error ‖x − x̂‖_{ℓ₂^N} of the approximation is bounded above by a function of the error between x and the best k-term approximation to x. However, as x is unknown, such estimates provide no numerical bounds on the error. In this paper, we demonstrate that tight numerical upper and lower bounds on the error ‖x − s_j‖_{ℓ₂^N} for j ≤ p iterations of a compressed sensing decoding algorithm are attainable with little effort. More precisely, we assume a maximum iteration length of p is pre-imposed; we reserve 4 log p of the original m measurements and compute the s_j from the m − 4 log p remaining measurements; the errors ‖x − s_j‖_{ℓ₂^N}, for j = 1, ..., p, can then be bounded with high probability. As a consequence, a numerical upper bound on the error between x and the best k-term approximation to x can be estimated with almost no cost. Our observation has applications outside of Compressed Sensing as well.