## Robust Recovery of Signals From a Structured Union of Subspaces (2008)

Citations: 214 (40 self)

### Citations

3611 | Compressed sensing
- Donoho
- 2006
Citation Context ...problem underlying the field of compressed sensing (CS), in which the goal is to recover a length N vector x from n < N linear measurements, where x has no more than k non-zero elements in some basis [8], [9]. Many algorithms have been proposed in the literature in order to recover x in a stable and efficient manner [9]–[12]. A variety of conditions have been developed to ensure that these methods re... |
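The recovery problem described in this context, finding a k-sparse x from n < N linear measurements, is commonly attacked by ℓ1 minimization (basis pursuit). The sketch below is a toy illustration, not the authors' code: it rewrites min ‖x‖₁ s.t. Ax = y as a linear program via the split x = u − v and hands it to SciPy's generic LP solver; all sizes are arbitrary.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, n, k = 40, 20, 2                        # toy sizes: signal length, measurements, sparsity
A = rng.standard_normal((n, N)) / np.sqrt(n)
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = y, as an LP via x = u - v with u, v >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x_true)))      # near zero when A satisfies RIP-type conditions
```

In practice exact recovery of the toy signal here is very likely but probabilistic: it hinges on the random A being well conditioned on sparse vectors, which is exactly what the RIP discussion in the surrounding contexts formalizes.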

3534 | A theory of multiresolution signal decomposition: The wavelet representation
- Mallat
- 1989
Citation Context ...a function of time x = x(t), or can represent a finite-length vector x. The most common type of sampling is linear sampling, in which yi = 〈si, x〉, 1 ≤ i ≤ n, (1) for a set of functions si ∈ H [4], [31]–[37]. Here 〈x, y〉 denotes the standard inner product on H. For example, if H = L2 is the space of real finite-energy signals, then 〈x, y〉 = ∫_{−∞}^{∞} x(t)y(t) dt. (2) When H = R^N for some N, 〈x, y〉 = ∑_{i=1}^{N} x... |
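For the finite-dimensional case H = R^N, the linear sampling model (1) with the inner product (3) reduces to a matrix-vector product. A minimal numpy sketch (names and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 16, 4
x = rng.standard_normal(N)           # the unknown signal, a vector in H = R^N
S = rng.standard_normal((n, N))      # rows are the sampling vectors s_i

# Linear sampling (1): y_i = <s_i, x>, using the inner product (3)
y = np.array([np.dot(S[i], x) for i in range(n)])
print(y)
```

Stacking the sampling vectors as rows of S makes the whole measurement process the single product y = Sx, which is the form the compressed sensing contexts above operate on.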

2718 | Atomic decomposition by basis pursuit - Chen, Donoho, et al. - 1999 |

2621 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candes, Romberg, et al.
- 2006
Citation Context ...em underlying the field of compressed sensing (CS), in which the goal is to recover a length N vector x from n < N linear measurements, where x has no more than k non-zero elements in some basis [8], [9]. Many algorithms have been proposed in the literature in order to recover x in a stable and efficient manner [9]–[12]. A variety of conditions have been developed to ensure that these methods recover... |

1668 | Matching pursuits with time-frequency dictionaries - Mallat, Zhang - 1993 |

1509 | Near-Optimal signal recovery from random projections: Universal encoding strategies
- Candes, Tao
- 2006
Citation Context ... convex algorithm will recover the underlying block sparse signal. Furthermore, under block RIP, our algorithm is stable in the presence of noise and mismodelling errors. Using ideas similar to [12], [26] we then prove that random matrices satisfy the block RIP with overwhelming probability. Moreover, the probability to satisfy the block RIP is substantially larger than that of satisfying the standard... |

1398 | Decoding by linear programming
- Candes, Tao
- 2005
Citation Context ...near measurements, where x has no more than k non-zero elements in some basis [8], [9]. Many algorithms have been proposed in the literature in order to recover x in a stable and efficient manner [9]–[12]. A variety of conditions have been developed to ensure that these methods recover x exactly. One of the main tools in this context is the restricted isometry property (RIP) [9], [13], [14]. In partic... |

1389 | Stable signal recovery from incomplete and inaccurate measurements - Candès, Romberg, et al. - 2006 |

962 | Communication in the presence of noise
- Shannon
- 1949
Citation Context ...has a rich history dating back to Cauchy. Undoubtedly, the sampling theorem that had the most impact on signal processing and communications is that associated with Whittaker, Kotelnikov, and Shannon [1], [2]. Their famous result is that a bandlimited function x(t) can be recovered from its uniform samples as long as the sampling rate exceeds the Nyquist rate, corresponding to twice the highest frequ... |
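The Whittaker-Kotelnikov-Shannon theorem quoted in this context can be checked numerically. The sketch below (a toy illustration, not from the paper) samples a bandlimited signal above its Nyquist rate and reconstructs it by truncated sinc interpolation; truncating the infinite interpolation sum is the only source of error.

```python
import numpy as np

# Toy bandlimited signal with highest frequency 3 Hz, so the Nyquist rate is 6 Hz.
x = lambda t: np.cos(2 * np.pi * 2.0 * t) + 0.5 * np.sin(2 * np.pi * 3.0 * t)
T = 1.0 / 7.5                            # sampling rate 7.5 Hz > Nyquist rate

n = np.arange(-200, 201)                 # finitely many samples (truncates the ideal sum)
samples = x(n * T)

def reconstruct(t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc((t - nT) / T)
    return np.sum(samples * np.sinc((t - n * T) / T))

print(abs(reconstruct(0.37) - x(0.37)))  # small; shrinks as more samples are kept
```

At the sample instants t = nT the interpolation is exact by construction, since np.sinc vanishes at the nonzero integers and equals one at zero.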

769 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2008
Citation Context ...erns. Therefore, the authors consider two types of sparse vectors: block sparsity as treated here, and a wavelet tree model. For these settings, they generalize two known greedy algorithms: CoSaMP [47] and iterative hard thresholding (IHT) [44]. These results emphasize our claim that theoretical questions of uniqueness and stable representation can be studied for arbitrary unions as in [23]. Howeve... |
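Of the two greedy methods named in this context, IHT is the simpler to sketch. Below is a minimal numpy version of the plain (not block-sparse) iteration, under the standard assumption ‖A‖₂ ≤ 1 used in its convergence analysis, enforced here by giving A orthonormal rows. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n, k = 50, 25, 3
Q, _ = np.linalg.qr(rng.standard_normal((N, n)))
A = Q.T                                   # n x N with orthonormal rows, so ||A||_2 = 1
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

def hard_threshold(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

x = np.zeros(N)
for _ in range(300):
    x = hard_threshold(x + A.T @ (y - A @ x), k)   # the IHT iteration

print(np.max(np.abs(x - x_true)))
```

Each step is a gradient step on ‖y − Ax‖² followed by projection onto the set of k-sparse vectors; once the correct support is identified, the error contracts geometrically.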

696 | Model-based compressive sensing
- Baraniuk, Cevher, et al.
- 2008
(Show Context)
Citation Context ...g functions. In our future work, we intend to combine these results with those in the current paper in order to develop a more general theory for recovery from a union of subspaces. A recent preprint [46] that was posted online after the submission of this paper proposes a new framework called model-based compressive sensing (MCS). The MCS approach assumes a vector signal model in which only certain p... |

694 | The restricted isometry property and its implications for compressed sensing. Comptes Rendus de l’Academie des Sciences
- Candès
Citation Context ...manner [9]–[12]. A variety of conditions have been developed to ensure that these methods recover x exactly. One of the main tools in this context is the restricted isometry property (RIP) [9], [13], [14]. In particular, it can be shown that if the measurement matrix satisfies the RIP then x can be recovered by solving an ℓ1 minimization algorithm. Another special case of a union of subspaces is the s... |

629 | A simple proof of the restricted isometry property for random matrices
- Baraniuk, Davenport, et al.
Citation Context ... question is how many samples are needed roughly in order to guarantee stable recovery. This question is addressed in the following proposition, which quotes a result from [44] based on the proofs of [45]; we rephrase the result to match our notation. Proposition 4 ([44, Theorem 3.3]): Consider the setting of Proposition 3, namely a random Gaussian matrix D of size n × N and block sparse signals over... |

360 | Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit
- Tropp
- 2006
Citation Context ...ts of Theorems 1 and 2 can be specified to this problem. Recovery algorithms for MMV using convex optimization programs were studied in [28], [30] and several greedy algorithms were proposed in [27], [29]. Specifically, in [27]–[30] the authors study a class of optimization programs, which we refer to as M-BP: M-BP(ℓq): min ∑_{i=1}^{L} ‖X^i‖_q^p s.t. Y = MX, (63) where X^i is the ith row of X. The choice p... |
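The mixed-norm objective of the M-BP program (63) is easy to state in code. The sketch below only evaluates the objective ∑_i ‖X^i‖_q^p over the rows of X; it does not solve the constrained program, which would require a convex solver. The helper name `mbp_objective` is hypothetical.

```python
import numpy as np

def mbp_objective(X, p=1, q=2):
    # sum_i ||X^i||_q^p over the rows X^i of X, as in the M-BP program (63)
    row_norms = np.linalg.norm(X, ord=q, axis=1)
    return float(np.sum(row_norms ** p))

# A row-sparse matrix: the nonzero rows share one support across all columns
X = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [0.0, 0.0]])
print(mbp_objective(X))        # p = 1, q = 2: the convex mixed l2/l1 norm, here 5.0
```

With p = 1 and q = 2 this is the convex mixed ℓ2/ℓ1 norm whose global minimizer M-FOCUSS is proven to find, per the context below.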

341 | Sampling signals with finite rate of innovation - Vetterli, Marziliano, et al. - 2002 |

339 | Certain Topics in Telegraph Transmission Theory
- Nyquist
- 1928
Citation Context ... result is that a bandlimited function x(t) can be recovered from its uniform samples as long as the sampling rate exceeds the Nyquist rate, corresponding to twice the highest frequency of the signal [3]. More recently, this basic theorem has been extended to include more general classes of signal spaces. In particular, it... |

334 | Sampling—50 years after shannon
- Unser
- 2000
Citation Context ...216715). It can be shown that under mild technical conditions, a signal x lying in a given subspace can be recovered exactly from its linear generalized samples using a series of filtering operations [4]–[7]. Recently, there has been growing interest in nonlinear signal models in which the unknown x does not necessarily lie in a subspace. In order to ensure recovery from the samples, some underlying ... |

325 | Iterative hard thresholding for compressed sensing
- Blumensath, Davies
- 2009
Citation Context ...latively small. An important question is how many samples are needed roughly in order to guarantee stable recovery. This question is addressed in the following proposition, which quotes a result from [46] based on the proofs of [47]; we rephrase the result to match our notation. Proposition 4 ([46, Theorem 3.3]): Consider the setting of Proposition 3, namely, a random Gaussian matrix of size and bloc... |

264 | Sparse solutions to linear inverse problems with multiple measurement vectors
- Cotter, Rao, et al.
- 2005
Citation Context ... the block-sparse model is the multiple measurement vector (MMV) problem, in which there is a set of unknown vectors that share a joint sparsity pattern. MMV recovery algorithms were studied in [20], [29]–[32]. Equivalence results based on mutual coherence for a mixed program were derived in [30]. These results turn out to be the same as that obtained from a single measurement problem. This is in cont... |

229 | A sparse signal reconstruction perspective for source localization with sensor arrays
- Malioutov, Cetin, et al.
- 2005
Citation Context ...‖X^i‖_q^p s.t. Y = MX, (63) where X^i is the ith row of X. The choice p = 1, q = ∞ was considered in [30], while [28] treated the case of p = 1 and arbitrary q. Using p ≤ 1 and q = 2 was suggested in [27], [41], leading to the iterative algorithm M-FOCUSS. For p = 1, q = 2, the program (63) has a global minimum which M-FOCUSS is proven to find. A nice comparison between these methods can be found in [30]. Eq... |

150 | Sampling moments and reconstructing signals of finite rate of innovation: Shannon meets Strang-Fix
- Dragotti, Vetterli, et al.
- 2007
Citation Context ...ith CS results [19], explicit low-rate sampling and recovery methods were developed for such signal sets. Another example of a union of subspaces is the set of finite rate of innovation signals [20], [21], that are modelled as a weighted sum of shifts of a given generating function, where the shifts are unknown. In this paper, our goal is to develop a unified framework for efficient recovery of signal... |

143 | From theory to practice: Sub-Nyquist sampling of sparse wideband analog signals
- Mishali, Eldar
- 2010
Citation Context ...ecial case of a union of subspaces is the setting in which the unknown signal has a multiband structure, so that its Fourier transform consists of a limited number of bands at unknown locations [16], [17]. By formulating this problem within the framework of CS, explicit sub-Nyquist sampling and reconstruction schemes were developed in [16], [17] that ensure perfect recovery at the minimal possible rat... |

142 | Theoretical results on sparse representations of multiple-measurement vectors
- Chen, Huo
- 2006
Citation Context ...f unknown vectors that share a joint sparsity pattern. MMV recovery algorithms were studied in [19], [27]–[30]. Equivalence results based on mutual coherence for a mixed ℓp/ℓ1 program were derived in [28]. These results turn out to be the same as that obtained from a single measurement problem. This is in contrast to the fact that in practice, MMV methods tend to outperform algorithms that treat each ... |

129 | On the reconstruction of block-sparse signals with an optimal number of measurements
- Stojnic, Parvaresh, et al.
- 2009
Citation Context ...rily spread throughout the vector. One example is the structured union of subspaces model we treat in this paper. Other examples are considered in [25]. Prior work on recovery of block-sparse vectors [24] assumed consecutive blocks of the same size. It was shown that in this case, when n, N go to infinity, the algorithm (26) will recover the true block-sparse vector with overwhelming probability. Their ... |

110 | A theory for sampling signals from a union of subspaces
- Lu, Do
- 2007
Citation Context ...of k subspaces, chosen from a given set of m subspaces Aj, 1 ≤ j ≤ m. However, which subspaces comprise the sum is unknown. This setting is a special case of the more general union model considered in [22], [23]. Conditions under which unique and stable sampling are possible were developed in [22], [23]. However, no concrete algorithm was provided to recover such a signal from a given set of samples in... |

110 | Sampling theorems for signals from the union of finite-dimensional linear subspaces
- Blumensath, Davies
- 2009
Citation Context ...ubspaces, chosen from a given set of m subspaces Aj, 1 ≤ j ≤ m. However, which subspaces comprise the sum is unknown. This setting is a special case of the more general union model considered in [22], [23]. Conditions under which unique and stable sampling are possible were developed in [22], [23]. However, no concrete algorithm was provided to recover such a signal from a given set of samples in a sta... |

100 | The Shannon sampling theorem—Its various extensions and applications: A tutorial review
- Jerri
- 1977
Citation Context ... rich history dating back to Cauchy. Undoubtedly, the sampling theorem that had the most impact on signal processing and communications is that associated with Whittaker, Kotelnikov, and Shannon [1], [2]. Their famous result is that a bandlimited function x(t) can be recovered from its uniform samples as long as the sampling rate exceeds the Nyquist rate, corresponding to twice the highest frequency ... |

99 | Blind multiband signal reconstruction: Compressed sensing for analog signals
- Mishali, Eldar
- 2009
Citation Context ...ht this seems like a difficult problem as our algorithms are inherently finite-dimensional, recovery methods for sparse signals in infinite dimensions have been addressed in some of our previous work [15]–[19]. In particular, we have shown that a signal lying in a union of shift-invariant subspaces can be recovered efficiently from certain sets of sampling functions. In our future work, we intend to c... |

98 | Average case analysis for multichannel sparse recovery using convex relaxation
- Eldar, Rauhut
- 2009
Citation Context ... improved worst-case behavior, as measured by RIP, over the single channel case. One way to improve the analytical results is to consider an average case analysis instead of a worst-case approach. In [42] we show that if the unknown vectors xi are generated randomly, then the performance improves with increasing number of measurement vectors. The advantage stems from the fact that the situation of equ... |

93 | Reduce and boost: Recovering arbitrary sets of jointly sparse vectors
- Mishali, Eldar
- 2008
Citation Context ... generalized in [17], [18] to deal with sampling and reconstruction of signals that lie in a finite union of shift-invariant subspaces. By combining ideas from standard sampling theory with CS results [19], explicit low-rate sampling and recovery methods were developed for such signal sets. Another example of a union of subspaces is the set of finite rate of innovation signals [20], [21], that are mode... |

66 | Compressed sensing of analog signals in shift-invariant spaces
- Eldar
- 2009
Citation Context ...ramework of CS, explicit sub-Nyquist sampling and reconstruction schemes were developed in [16], [17] that ensure perfect recovery at the minimal possible rate. This setup was recently generalized in [18], [19] to deal with sampling and reconstruction of signals that lie in a finite union of shift-invariant subspaces. By combining ideas from standard sampling theory with CS results [20], explicit low-... |

58 | Condition numbers of random matrices
- Szarek
- 1991
Citation Context ...implies √(2 − σ²) ≤ 1 + λ, we conclude that Prob(√(1 + δ) > 1 + λ) ≤ Prob(σ̄ > 1 + λ) + Prob(σ < 1 − λ). (73) We now bound each term in the right-hand side of (73) using a result of Davidson and Szarek [43] regarding the concentration of the extreme singular values of a Gaussian matrix. It was proved in [43] that an m × n matrix X with n ≥ m satisfies Prob(σmax(X) > 1 + √(m/n) + t) ≤ e^(−nt²/2) (74) Prob(... |
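The Davidson-Szarek bound (74) quoted in this context is easy to probe by Monte Carlo. The sketch below draws Gaussian m × n matrices normalized to have entries of variance 1/n (an assumption matching the bound's scaling) and counts how often σmax exceeds 1 + √(m/n) + t, which the bound caps at e^(−nt²/2); sizes and t are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, t, trials = 20, 100, 0.3, 500
exceed = 0
for _ in range(trials):
    X = rng.standard_normal((m, n)) / np.sqrt(n)       # entries N(0, 1/n)
    sigma_max = np.linalg.svd(X, compute_uv=False)[0]  # largest singular value
    if sigma_max > 1 + np.sqrt(m / n) + t:
        exceed += 1

bound = np.exp(-n * t ** 2 / 2)                        # e^{-n t^2 / 2}
print(exceed / trials, "<=", bound)
```

In practice the empirical exceedance rate sits far below the bound, since σmax concentrates tightly around 1 + √(m/n); the exponential tail is what makes the "overwhelming probability" claims for the (block) RIP possible.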

48 | General framework for consistent sampling in hilbert spaces - Eldar, Werther - 2005 |

48 | Compressed sensing of block-sparse signals: Uncertainty relations and efficient recovery
- Eldar, Kuppinger, et al.
- 2009
Citation Context ...our framework by choosing Ai as the space spanned by the ith column of W. In this setting m = N, and there are (N choose k) subspaces comprising the union. Another example is the block sparsity model [24], [40] in which x is divided into equal-length blocks of size d, and at most k blocks can be non-zero. Such a vector can be described in our setting with H = R^N by choosing Ai to be the space spanned by th... |

46 | Oblique dual frames and shift-invariant spaces - Christensen, Eldar |

43 | Generalizations of the sampling theorem: Seven decades after Nyquist - Vaidyanathan - 2001 |

43 | Recovering sparse signals using sparse measurement matrices in compressed DNA microarrays
- Parvaresh, Vikalo, et al.
- 2008
Citation Context ...problem can be cast as a convex second-order cone program (SOCP), and solved efficiently using standard software packages. A mixed-norm approach for block-sparse recovery was also considered in [26], [27]. By analyzing the measurement operator’s null space, it was shown that asymptotically, as the signal length grows to infinity, and under ideal conditions (no noise or modeling errors), perfect recove... |

38 | Sampling without input constraints: Consistent reconstruction in arbitrary spaces - Eldar |

34 | Generalized sampling theorems in multiresolution subspaces - Djokovic, Vaidyanathan - 1997 |

32 | A minimum squared-error framework for generalized sampling
- Eldar, Dvorkind
- 2006
Citation Context ...715). It can be shown that under mild technical conditions, a signal x lying in a given subspace can be recovered exactly from its linear generalized samples using a series of filtering operations [4]–[7]. Recently, there has been growing interest in nonlinear signal models in which the unknown x does not necessarily lie in a subspace. In order to ensure recovery from the samples, some underlying stru... |

32 | Sampling and reconstruction in arbitrary spaces and oblique dual frame vectors - Eldar - 2003 |

14 | Uncertainty relations for shift-invariant analog signals
- Eldar
- 2009
Citation Context ...rk of CS, explicit sub-Nyquist sampling and reconstruction schemes were developed in [16], [17] that ensure perfect recovery at the minimal possible rate. This setup was recently generalized in [18], [19] to deal with sampling and reconstruction of signals that lie in a finite union of shift-invariant subspaces. By combining ideas from standard sampling theory with CS results [20], explicit low-rate s... |

13 | Characterization of oblique dual frame pairs
- Eldar, Christensen
Citation Context ...ction of time x = x(t), or can represent a finite-length vector x. The most common type of sampling is linear sampling, in which yi = 〈si, x〉, 1 ≤ i ≤ n, (1) for a set of functions si ∈ H [4], [31]–[37]. Here 〈x, y〉 denotes the standard inner product on H. For example, if H = L2 is the space of real finite-energy signals, then 〈x, y〉 = ∫_{−∞}^{∞} x(t)y(t) dt. (2) When H = R^N for some N, 〈x, y〉 = ∑_{i=1}^{N} x(i)y(... |

12 | Nonlinear and nonideal sampling: theory and methods
- Dvorkind, Eldar, et al.
- 2008
Citation Context ...ct on H. For example, if H = L2 is the space of real finite-energy signals, then 〈x, y〉 = ∫_{−∞}^{∞} x(t)y(t) dt. (2) When H = R^N for some N, 〈x, y〉 = ∑_{i=1}^{N} x(i)y(i). (3) Nonlinear sampling is treated in [38]. However, here our focus will be on the linear case. When H = R^N the unknown x as well as the sampling functions si are vectors in R^N. Therefore, the samples can be written conveniently in ... |

12 | Convex Optimization in Signal Processing and Communications. Cambridge University Press
- Palomar, Eldar
- 2010
Citation Context ... that x lies in a given subspace A of H [4]–[7]. If A and S have the same finite dimension, and S⊥ and A intersect only at the 0 vector, then x can be perfectly recovered from the samples y [6], [7], [39]. B. Union of Subspaces When subspace information is available, perfect reconstruction can often be guaranteed. Furthermore, recovery can be implemented by a simple linear transformation of the given ... |

10 | Beyond bandlimited sampling: Nonlinearities, smoothness and sparsity
- Eldar, Michaeli
- 2009
Citation Context ...assumed is that x lies in a given subspace A of H [4]–[7]. If A and S have the same finite dimension, and S⊥ and A intersect only at the 0 vector, then x can be perfectly recovered from the samples y [6], [7], [39]. B. Union of Subspaces When subspace information is available, perfect reconstruction can often be guaranteed. Furthermore, recovery can be implemented by a simple linear transformation of... |

10 | Optimization techniques in modern sampling theory
- Michaeli, Eldar
Citation Context ...very often assumed is that x lies in a given subspace A of H [4]–[7]. If A and S have the same finite dimension, and S⊥ and A intersect only at the 0 vector, then x can be perfectly recovered from the samples y [6], [7], [41]. B. Union of Subspaces When subspace information is available, perfect reconstruction can often be guaranteed. Furthermore, recovery can be implemented by a simple linear transformation of the given ... |

7 | Low rate sampling schemes for time delay estimation
- Gedalyahu, Eldar
- 2009
Citation Context ...ift-invariant subspaces can be recovered efficiently from certain sets of sampling functions. A first step in the direction of treating infinite unions of infinite subspaces is the example studied in [21] in which we treat an infinite union resulting from unknown time delays. In our future work, we intend to combine these results with those in the current paper in order to develop a more general theor... |

2 | Coherence and MUSIC in biomagnetic source localization - Mosher, Lewis, et al. - 1995 |
