
## A Simple Message-Passing Algorithm for Compressed Sensing

### Citations

3598 | Compressed sensing
- Donoho
- 2006
Citation context: ...r flavor are well-known in the context of coding, but have only begun to be explored in the context of compressed sensing. As background, there is now a large body of work in compressed sensing. Both [1] and [2], [3] proposed using linear programming (LP) to find the sparsest solution to y = Ax. Since then, many algorithms have been proposed [4]–[13]; see, e.g., [13] for a summary of various combinat...
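The LP approach the excerpt attributes to [1]–[3] is basis pursuit: minimize ||x||_1 subject to Ax = y, which becomes a linear program after splitting x into nonnegative parts u and v with x = u - v. A minimal sketch of this idea (the function name, dimensions, and test signal are illustrative, not from the paper; assumes SciPy's `linprog` is available):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A @ x = y as an LP.
    Split x = u - v with u, v >= 0, so ||x||_1 = sum(u) + sum(v)."""
    m, n = A.shape
    c = np.ones(2 * n)                  # objective: sum(u) + sum(v)
    A_eq = np.hstack([A, -A])           # equality constraint: A(u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
    u, v = res.x[:n], res.x[n:]
    return u - v

# Usage: a 2-sparse signal recovered from random Gaussian measurements.
rng = np.random.default_rng(0)
n, m = 40, 20
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[[3, 17]] = [1.5, -2.0]           # illustrative sparse signal
x_hat = basis_pursuit(A, A @ x_true)
```

With m well above the sparsity level, the LP recovers the sparse signal exactly with high probability over the random matrix; this is the guarantee established in [1]–[3].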

2608 | Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Candès, Romberg, et al.
Citation context: ... are well-known in the context of coding, but have only begun to be explored in the context of compressed sensing. As background, there is now a large body of work in compressed sensing. Both [1] and [2], [3] proposed using linear programming (LP) to find the sparsest solution to y = Ax. Since then, many algorithms have been proposed [4]–[13]; see, e.g., [13] for a summary of various combinations of ...

1384 | Stable signal recovery from incomplete and inaccurate measurements
- Candès, Romberg, et al.
Citation context: ...well-known in the context of coding, but have only begun to be explored in the context of compressed sensing. As background, there is now a large body of work in compressed sensing. Both [1] and [2], [3] proposed using linear programming (LP) to find the sparsest solution to y = Ax. Since then, many algorithms have been proposed [4]–[13]; see, e.g., [13] for a summary of various combinations of measu...

1362 | Low-density parity-check codes
- Gallager
- 1963
Citation context: ...ng theory side, Gallager introduced a class of binary linear codes known as low-density parity-check (LDPC) codes, and proposed a computationally efficient message-passing algorithm for their decoding [18]. Since then, an enormous body of work has analyzed the performance of message-passing algorithms for decoding such codes. In particular, [14] showed that when the parity-check matrix of an LDPC code c...

910 | Greed is good: Algorithmic results for sparse approximation
- Tropp
- 2004

763 | CoSaMP: Iterative signal recovery from incomplete and inaccurate samples
- Needell, Tropp
- 2009
Citation context: ...is O(n (log(n/k))^2 log(k)), and the number of measurements used is m = O(k log(n/k)). In the regime where k scales linearly with n, our algorithm is faster than almost all existing algorithms, e.g., [5], [9], [10]; the only exception is [13], which is faster, and stronger, in that the multiplier O(n/k) in the ℓ1/ℓ1 guarantee is only (1 + ε). However, relative to the algorithm of [13], ours has the ...

413 | An improved data stream summary: The count-min sketch and its applications
- Cormode, Muthukrishnan

339 | Expander codes
- Sipser, Spielman
- 1996
Citation context: ... typically has only O(n) nonzero entries). Examples include the algorithms of [11]–[13]. In particular, Algorithm 1 from [11] can be viewed as essentially the Sipser-Spielman message-passing algorithm [14]. The algorithm we consider in this paper also falls into the second class, and is a minor v... [Footnote: This work was supported in part by NSF under Grant No. CCF-0635191, and by a grant from Microsoft Research.]

163 | Message passing algorithms for compressed sensing
- Donoho, Maleki, et al.
- 2009
Citation context: ...minor variant of the algorithm proposed in [15]. Very recent work on the use of a message-passing algorithm to identify compressed sensing thresholds appears in [16], [17]. Relative to the present paper, [16] and [17] are more general in that arbitrary (i.e., even dense) matrices A are considered. However, [16], [17] restrict attention to a probabilistic analysis...

115 | Decoding Error-Correcting Codes via Linear Programming
- Feldman
- 2003
Citation context: ...of message-passing algorithms, including common algorithms such as the so-called “Gallager A” and “B”, also correct a constant fraction of (adversarial) errors when there is sufficient expansion. Finally, [20] suggested decoding LDPC codes via LP, and [21] proved that this LP decoder can correct a constant fraction of (adversarial) errors when there is sufficient expansion. We show that similar techniques ...

113 | Combinatorial algorithms for compressed sensing
- Cormode, Muthukrishnan
- 2006
Citation context: ...is now a large body of work in compressed sensing. Both [1] and [2], [3] proposed using linear programming (LP) to find the sparsest solution to y = Ax. Since then, many algorithms have been proposed [4]–[13]; see, e.g., [13] for a summary of various combinations of measurement matrices and algorithms, and their associated performance characteristics. Most existing combinations fall into two broad cl...

108 | One Sketch for All: Fast Algorithms for Compressed Sensing
- Gilbert, Strauss, et al.
- 2007
Citation context: ...(log(n/k))^2 log(k)), and the number of measurements used is m = O(k log(n/k)). In the regime where k scales linearly with n, our algorithm is faster than almost all existing algorithms, e.g., [5], [9], [10]; the only exception is [13], which is faster, and stronger, in that the multiplier O(n/k) in the ℓ1/ℓ1 guarantee is only (1 + ε). However, relative to the algorithm of [13], ours has the advantage o...

80 | Efficient compressive sensing with deterministic guarantees using expander graphs
- Xu, Hassibi
- 2007
Citation context: ... or convex optimization. The second class consists of combinatorial algorithms operating on sparse measurement matrices (A typically has only O(n) nonzero entries). Examples include the algorithms of [11]–[13]. In particular, Algorithm 1 from [11] can be viewed as essentially the Sipser-Spielman message-passing algorithm [14]. The algorithm we consider in this paper also falls into the second class, an...

68 | LP decoding corrects a constant fraction of errors
- Feldman, Malkin, et al.
- 2007
Citation context: ... algorithms such as the so-called “Gallager A” and “B” also correct a constant fraction of (adversarial) errors when there is sufficient expansion. Finally, [20] suggested decoding LDPC codes via LP, and [21] proved that this LP decoder can correct a constant fraction of (adversarial) errors when there is sufficient expansion. We show that similar techniques can be used to analyze the performance of the m...

57 | Counter braids: A novel counter architecture for per-flow measurement
- Lu, Montanari, et al.
- 2008
Citation context: ...paper also falls into the second class, and is a minor variant of the algorithm proposed in [15]. Very recent work on the use of a message-passing algorithm to identify compressed sensing thresholds appears in [16], [17]. Relative to the present paper, [16] and [17] are more general in that arbi...

54 | Explicit constructions for compressed sensing of sparse signals
- Indyk
- 2008

41 | Practical near-optimal sparse recovery
- Berinde, Indyk, et al.
Citation context: ...w a large body of work in compressed sensing. Both [1] and [2], [3] proposed using linear programming (LP) to find the sparsest solution to y = Ax. Since then, many algorithms have been proposed [4]–[13]; see, e.g., [13] for a summary of various combinations of measurement matrices and algorithms, and their associated performance characteristics. Most existing combinations fall into two broad classes...

32 | Algorithmic linear dimension reduction in the l1 norm for sparse vectors
- Gilbert, Strauss, et al.
- 2006
Citation context: ...n (log(n/k))^2 log(k)), and the number of measurements used is m = O(k log(n/k)). In the regime where k scales linearly with n, our algorithm is faster than almost all existing algorithms, e.g., [5], [9], [10]; the only exception is [13], which is faster, and stronger, in that the multiplier O(n/k) in the ℓ1/ℓ1 guarantee is only (1 + ε). However, relative to the algorithm of [13], ours has the advan...

27 | Sequential sparse matching pursuit
- Berinde, Indyk
- 2009

22 | Sparse recovery of positive signals with minimal expansion
- Khajehnejad, Dimakis, et al.
- 2009 [Online]. Available: http://arxiv.org/abs/0902.4045
Citation context: ...ed to the Sipser-Spielman algorithm [14], this algorithm requires less expansion (0.5 vs. 0.75), but the Sipser-Spielman algorithm works for arbitrary (i.e., not just nonnegative) vectors x. Finally, [22] shows that recovery of nonnegative x is possible with far less expansion, but their algorithm is significantly slower, with a running time of O(nk^2). As our second result, on approximate recovery, we...

14 | Expander graph arguments for message passing algorithms
- Burshtein, Miller
- 2001
Citation context: ...rresponds to the adjacency matrix of a bipartite graph with sufficient expansion, a bit-flipping algorithm can correct a constant fraction of errors, even if the errors are chosen by an adversary. In [19], this result is extended by showing that a broad class of message-passing algorithms, including common algorithms such as the so-called “Gallager A” and “B”, also correct a constant fraction of (adversari...
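The bit-flipping decoding referenced in [14] and [19] can be sketched as follows. This is a greedy single-flip variant (flip the variable touching the most unsatisfied parity checks) rather than the parallel majority rule those papers analyze, and the (7,4) Hamming matrix below is only to illustrate the mechanics; it is not an expander graph, so the adversarial-error guarantees do not apply to it:

```python
import numpy as np

def bit_flip_decode(H, r, max_iters=20):
    """Greedy bit-flipping decoder: while some parity check fails,
    flip the variable involved in the most unsatisfied checks."""
    x = r.copy()
    for _ in range(max_iters):
        syndrome = (H @ x) % 2            # 1 marks an unsatisfied check
        if not syndrome.any():
            break                         # all checks satisfied: valid codeword
        unsat_per_var = H.T @ syndrome    # unsatisfied checks touching each variable
        x[np.argmax(unsat_per_var)] ^= 1  # flip the worst offender
    return x

# Usage: (7,4) Hamming parity-check matrix, all-zero codeword, one bit error.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
r = np.zeros(7, dtype=int)
r[6] = 1                                  # single flipped bit
decoded = bit_flip_decode(H, r)           # recovers the all-zero codeword
```

On an expander graph with sufficient expansion, [14] shows this style of decoder corrects a constant fraction of errors in linear time; the same local flip rule, applied to real-valued updates, underlies the compressed sensing algorithm this paper studies.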