Results 1–10 of 107
Generalized sampling and infinite-dimensional compressed sensing
Abstract

Cited by 33 (20 self)
We introduce and analyze an abstract framework, and corresponding method, for compressed sensing in infinite dimensions. This extends the existing theory from signals in finite-dimensional vector spaces to the case of separable Hilbert spaces. We explain why such a new theory is necessary, and demonstrate that existing finite-dimensional techniques are ill-suited for solving a number of important problems. This work stems from recent developments in generalized sampling theorems for classical (Nyquist rate) sampling that allow for reconstructions in arbitrary bases. The main conclusion of this paper is that one can extend these ideas to allow for significant subsampling of sparse or compressible signals. The key to these developments is the introduction of two new concepts in sampling theory, the stable sampling rate and the balancing property, which specify how to appropriately discretize the fundamentally infinite-dimensional reconstruction problem.
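Once the infinite-dimensional problem has been discretized at an appropriate rate, recovery reduces to a standard finite sparse problem. The sketch below is only an illustration of that finite stage, not the authors' method: it recovers a sparse coefficient vector from subsampled random measurements using orthogonal matching pursuit. The matrix `A`, the dimensions, and the sparsity level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 64, 32, 3           # truncation size, measurements, sparsity

A = rng.standard_normal((M, N)) / np.sqrt(M)   # toy measurement matrix
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = np.array([2.0, -1.5, 1.0])
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k atoms, least-squares refit."""
    r, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ r))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With M well above the information-theoretic minimum, this toy instance is recovered exactly; the paper's contribution is precisely how to choose such a discretization (via the stable sampling rate) when the underlying problem lives in a Hilbert space.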
Exploiting Statistical Dependencies in Sparse Representations for Signal Recovery
, 2012
Abstract

Cited by 24 (6 self)
Signal modeling lies at the core of numerous signal and image processing applications. A recent approach that has drawn considerable attention is sparse representation modeling, in which the signal is assumed to be generated as a combination of a few atoms from a given dictionary. In this work we consider a Bayesian setting and go beyond the classic assumption of independence between the atoms. The main goal of this paper is to introduce a statistical model that takes such dependencies into account and show how this model can be used for sparse signal recovery. We follow the suggestion of two recent works and assume that the sparsity pattern is modeled by a Boltzmann machine, a commonly used graphical model. For general dependency models, exact MAP and MMSE estimation of the sparse representation becomes computationally complex. To simplify the computations, we propose greedy approximations of the MAP and MMSE estimators. We then consider a special case in which exact MAP is feasible, by assuming that the dictionary is unitary and the dependency model corresponds to a certain sparse graph. Exploiting this structure, we develop an efficient message passing algorithm that recovers the underlying signal. When the model parameters defining the underlying graph are unknown, we suggest an algorithm that learns these parameters directly from the data, leading to an iterative scheme for adaptive sparse signal recovery. The effectiveness of our approach is demonstrated on real-life signals (patches of natural images), where we compare the denoising performance to that of previous recovery methods that do not exploit the statistical dependencies.
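In the unitary-dictionary special case, correlations with the residual equal the coefficients exactly, which makes a greedy MAP-style pursuit easy to illustrate. The sketch below is a loose, hypothetical rendering of the idea, not the paper's algorithm: each atom's score combines a Gaussian likelihood gain with the energy change of flipping one spin of a Boltzmann-machine support prior. The bias `b`, coupling `W`, and noise level are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16
A, _ = np.linalg.qr(rng.standard_normal((N, N)))   # unitary dictionary

# Boltzmann machine over support patterns s in {-1, +1}^N:
#   p(s) proportional to exp(b.s + s.W.s / 2), W symmetric, zero diagonal
b = -1.5 * np.ones(N)                  # bias toward an empty support
W = np.zeros((N, N))
W[0, 1] = W[1, 0] = 2.0                # atoms 0 and 1 tend to co-occur

def greedy_map_step(A, r, s, b, W, sigma2=0.01):
    """Score each inactive atom: likelihood gain + prior gain of activating it."""
    scores = np.full(A.shape[1], -np.inf)
    for i in np.flatnonzero(s < 0):
        gain_lik = (A[:, i] @ r) ** 2 / (2 * sigma2)
        gain_pri = 2 * b[i] + 2 * (W[i] @ s)   # energy change of s_i: -1 -> +1
        scores[i] = gain_lik + gain_pri
    return int(np.argmax(scores))

# Synthetic signal supported on the coupled pair {0, 1}
x = np.zeros(N); x[0], x[1] = 1.0, -1.2
y = A @ x

s, support, r = -np.ones(N), [], y.copy()
for _ in range(2):
    i = greedy_map_step(A, r, s, b, W)
    support.append(i); s[i] = 1.0
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    r = y - A[:, support] @ coef
```

Note how the prior term changes sign once atom 1 is active: activating its coupled partner 0 then gains rather than loses prior energy, which is exactly the dependency the Boltzmann machine encodes.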
MULTISCALE MINING OF FMRI DATA WITH HIERARCHICAL STRUCTURED SPARSITY
, 2011
Abstract

Cited by 20 (5 self)
Reverse inference, or “brain reading”, is a recent paradigm for analyzing functional magnetic resonance imaging (fMRI) data, based on pattern recognition and statistical learning. By predicting some cognitive variables related to brain activation maps, this approach aims at decoding brain activity. Reverse inference takes into account the multivariate information between voxels and is currently the only way to assess how precisely some cognitive information is encoded by the activity of neural populations within the whole brain. However, it relies on a prediction function that is plagued by the curse of dimensionality, since there are far more features than samples, i.e., more voxels than fMRI volumes. To address this problem, different methods have been proposed, such as, among others, univariate feature selection, feature agglomeration and regularization techniques. In this paper, we consider a sparse hierarchical structured regularization. Specifically, the penalization we use is constructed from a tree that is obtained by spatially-constrained agglomerative clustering. This approach encodes the spatial structure of the data at different scales into the regularization, which makes the overall prediction procedure more robust to inter-subject variability. The regularization used induces the selection of spatially coherent predictive brain regions simultaneously at different scales. We test our algorithm on real data acquired to study the mental representation of objects, and we show that the proposed algorithm not only delineates meaningful brain regions but also yields better prediction accuracy than reference methods.
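The tree-structured penalty can be made concrete with a small sketch. This is a generic hierarchical group-lasso construction, not the paper's estimator: a linkage tree is built over 1-D "voxel" positions (standing in for spatially-constrained clustering), every tree node defines a group of descendant voxels, and the penalty sums the l2 norms of the weights in each group. All sizes and weight vectors are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

n = 8
coords = np.arange(n, dtype=float).reshape(-1, 1)   # 1-D "voxel" positions
Z = linkage(coords, method="ward")                   # agglomerative tree

def node_members(Z, n):
    """Leaves under every node of the linkage tree (leaves and internal nodes)."""
    members = {i: [i] for i in range(n)}
    for k, (a, b, *_rest) in enumerate(Z):
        members[n + k] = members[int(a)] + members[int(b)]
    return members

groups = list(node_members(Z, n).values())           # nested groups, root included

def tree_penalty(w, groups):
    """Hierarchical group-lasso penalty: sum of l2 norms over the nested groups."""
    return sum(np.linalg.norm(w[g]) for g in groups)

w = np.zeros(n); w[:2] = 1.0                          # spatially contiguous support
w_scattered = np.zeros(n); w_scattered[[0, 7]] = 1.0  # same l1 norm, scattered
```

Because every group is a spatially contiguous cluster, a scattered support activates more groups than a contiguous one of the same size, so the penalty favors spatially coherent solutions at all scales of the tree.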
Phase Retrieval with Application to Optical Imaging
, 2015
Abstract

Cited by 18 (6 self)
The problem of phase retrieval, i.e., the recovery of a function given the magnitude of its ...
Sparsity-based single-shot subwavelength coherent diffractive imaging. Nature Materials
Abstract

Cited by 17 (4 self)
Coherent Diffractive Imaging (CDI) is an algorithmic imaging technique where intricate features are reconstructed from measurements of the freely diffracting intensity pattern. An important goal of such lensless imaging methods is to study the structure of molecules that cannot be crystallized. Ideally, one would want to perform CDI at the highest achievable spatial resolution and in a single-shot measurement such that it could be applied to imaging of ultrafast events. However, the resolution of current CDI techniques is limited by the diffraction limit, hence they cannot resolve features smaller than one half the wavelength of the illuminating light. Here, we present sparsity-based single-shot subwavelength resolution CDI: algorithmic reconstruction of subwavelength features from far-field intensity patterns, at a resolution several times better than the diffraction limit. This work paves the way for subwavelength CDI at ultrafast rates, and it can considerably improve the CDI resolution with X-ray free-electron lasers and high harmonics. Improving the resolution in imaging and microscopy has been a driving force in the natural sciences for centuries. Fundamentally, the propagation of an electromagnetic field in a linear medium can be fully described through the propagation of its eigenmodes (a complete and orthogonal set of functions that do not exchange power during propagation). In homogeneous,
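For orientation, the classic error-reduction iteration for phase retrieval (Gerchberg-Saxton with object-domain constraints, not the sparsity-based method of this paper) alternates between imposing the measured Fourier magnitudes and imposing support and positivity on the object. The 1-D signal, support size, and iteration count below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
support = np.zeros(n, bool); support[:6] = True       # known compact support
x_true = np.zeros(n); x_true[:6] = rng.random(6) + 0.5
mag = np.abs(np.fft.fft(x_true))                      # measured far-field magnitudes

x = rng.random(n) * support                           # random start on the support
err0 = np.linalg.norm(np.abs(np.fft.fft(x)) - mag)
for _ in range(500):
    X = np.fft.fft(x)
    X = mag * np.exp(1j * np.angle(X))                # impose measured magnitudes
    x = np.fft.ifft(X).real
    x[~support] = 0.0                                 # impose support
    x = np.maximum(x, 0.0)                            # impose non-negativity
err = np.linalg.norm(np.abs(np.fft.fft(x)) - mag)
```

Error reduction is monotone but often stagnates; the sparsity prior discussed in the abstract is what lets reconstruction succeed even when the measured pattern is band-limited to below the diffraction limit, which this plain iteration cannot do.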
Sub-Nyquist Sampling: Bridging theory and practice
, 2011
Abstract

Cited by 14 (5 self)
Signal processing methods have changed substantially over the last several decades. In modern applications, an increasing number of functions is being pushed forward to sophisticated software algorithms, leaving only delicate, finely tuned tasks for the circuit level. Sampling theory, the gate to the digital world, is the key enabling this revolution, encompassing all aspects related to the conversion of continuous-time signals to discrete streams of numbers. The famous Shannon-Nyquist theorem has become a landmark: a mathematical statement that has had one of the most profound impacts on industrial development of digital signal processing (DSP) systems. Over the years, theory and practice in the field of sampling have developed in parallel routes. Contributions by many research groups suggest a multitude of methods, other than uniform sampling, to acquire analog signals [1]–[6]. The math has deepened, leading to abstract signal spaces and innovative sampling techniques. Within generalized sampling theory, bandlimited signals have no special preference, other than historic. At the same time, the market adhered to the Nyquist paradigm;
Breaking the coherence barrier: asymptotic incoherence and asymptotic sparsity in compressed sensing
, 2013
Abstract

Cited by 13 (4 self)
In this paper we bridge the substantial gap between existing compressed sensing theory and its current use in real-world applications. We do so by introducing a new mathematical framework for overcoming the so-called coherence barrier ...
Sub-Nyquist radar via Doppler focusing
 IEEE Transactions on Signal Processing
Abstract

Cited by 10 (5 self)
We investigate the problem of a monostatic pulse-Doppler radar transceiver trying to detect targets sparsely populated in the radar’s unambiguous time-frequency region. Several past works apply compressed sensing (CS) algorithms to this type of problem but either do not address sample rate reduction, impose constraints on the radar transmitter, propose CS recovery methods with prohibitive dictionary size, or perform poorly in noisy conditions. Here, we describe a sub-Nyquist sampling and recovery approach called Doppler focusing, which addresses all of these problems: it performs low-rate sampling and digital processing, imposes no restrictions on the transmitter, and uses a CS dictionary whose size does not increase with the number of pulses. Furthermore, in the presence of noise, Doppler focusing enjoys a signal-to-noise ratio (SNR) improvement that scales linearly with the number of pulses, obtaining good detection performance even at SNR as low as −25 dB. The recovery is based on the Xampling framework, which allows reduction of the number of samples needed to accurately represent the signal, directly in the analog-to-digital conversion process. After sampling, the entire digital recovery process is performed on the low-rate samples without having to return to the Nyquist rate. Finally, our approach can be implemented in hardware using a previously suggested Xampling radar prototype. Index Terms—Compressed sensing, rate of innovation, radar, sparse recovery, sub-Nyquist sampling, delay-Doppler estimation.
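The core of Doppler focusing is a coherent sum of the pulse returns against a trial Doppler frequency, so that a target at that Doppler adds up in phase while noise does not. A minimal sketch of that sum alone, with toy sizes, a single target, and unit-spaced pulses assumed (the sub-Nyquist Xampling front end is omitted):

```python
import numpy as np

rng = np.random.default_rng(4)
P, N = 64, 200                 # pulses, fast-time samples per pulse
tau_idx, nu = 57, 0.3          # target delay bin, Doppler (radians per pulse)

# Received pulses: a delayed spike whose phase rotates by nu each pulse, plus noise
y = np.zeros((P, N), complex)
y[:, tau_idx] = np.exp(1j * nu * np.arange(P))
y += 0.5 * (rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N)))

def doppler_focus(y, nu):
    """Coherent sum at trial Doppler nu: Phi_nu[t] = sum_p y[p, t] * exp(-j nu p)."""
    P = y.shape[0]
    return (y * np.exp(-1j * nu * np.arange(P))[:, None]).sum(axis=0)

focused = doppler_focus(y, nu)
detected = int(np.argmax(np.abs(focused)))
```

At the matched Doppler the target amplitude grows like P while the noise amplitude grows only like sqrt(P), which is the linear SNR gain the abstract refers to; sweeping `nu` over a grid produces the full delay-Doppler map.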
A sub-Nyquist radar prototype: Hardware and algorithms
 IEEE Transactions on Aerospace and Electronic Systems, special issue on Compressed Sensing for Radar, Aug. 2012
Abstract

Cited by 9 (7 self)
Traditional radar sensing typically employs matched filtering between the received signal and the shape of the transmitted pulse. Matched filtering (MF) is conventionally carried out digitally, after sampling the received analog signals. Here, principles from classic sampling theory are generally employed, requiring that the received signals be sampled at twice their baseband bandwidth. The resulting sampling rates necessary for correlation-based radar systems become quite high, as growing demands for target distinction capability and spatial resolution stretch the bandwidth of the transmitted pulse. The large amounts of sampled data also necessitate vast memory capacity. In addition, real-time data processing typically results in high power consumption. Recently, new approaches for radar sensing and estimation were introduced, based on the finite rate of innovation (FRI) and Xampling frameworks. Exploiting the parametric nature of radar signals, these techniques allow significant reduction in sampling rate, implying potential power savings, while maintaining the system’s estimation capabilities at sufficiently high signal-to-noise ratios (SNRs). Here we present for the first time a design and implementation of an Xampling-based hardware prototype that allows sampling of radar signals at rates much lower than Nyquist. We demonstrate by real-time analog experiments that our system is able to maintain reasonable recovery capabilities, while sampling radar signals that require sampling at a rate of about 30 MHz at a total rate of 1 MHz.