Results 1-10 of 11
Efficiently decodable error-correcting list disjunct matrices and applications (Extended Abstract)
In ICALP, 2011
"... A (d, `)-list disjunct matrix is a non-adaptive group testing primitive which, given a set of items with at most d “defectives,” outputs a superset of the defectives containing less than ` non-defective items. The primitive has found many applications as stand alone objects and as building blocks in ..."
Cited by 3 (0 self)
Abstract:
A (d, ℓ)-list disjunct matrix is a non-adaptive group testing primitive which, given a set of items with at most d “defectives,” outputs a superset of the defectives containing less than ℓ non-defective items. The primitive has found many applications as stand-alone objects and as building blocks in the construction of other combinatorial objects. This paper studies error-tolerant list disjunct matrices which can correct up to e0 false positive and e1 false negative tests in sub-linear time. We then use list-disjunct matrices to prove new results in three different applications. Our major contributions are as follows. (1) We prove several (almost)-matching lower and upper bounds for the optimal number of tests, including the fact that Θ(d log(n/d) + e0 + d·e1) tests is necessary and sufficient when ℓ = Θ(d). Similar results are also derived for the disjunct matrix case (i.e. ℓ = 1). (2) We present two methods that convert error-tolerant list disjunct matrices in a black-box manner into error-tolerant list disjunct matrices that are also efficiently decodable. The methods help us derive a family of (strongly) explicit constructions of list-disjunct matrices which are either optimal or near optimal, and which are also efficiently decodable. (3) We show how to use error-correcting efficiently decodable list-disjunct matrices in three different applications: (i) explicit constructions of d-disjunct matrices with t = O(d^2 log n + rd) tests which are decodable in poly(t) time, where r is the maximum number of test errors. This result is optimal for r = Ω(d log n), and even for r = 0 this result improves upon known results; (ii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable monotone encodings; and (iii) (explicit) constructions of (near)-optimal, error-correcting, and efficiently decodable multiple user tracing families.
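For readers unfamiliar with the primitive, the sketch below illustrates the standard "cover" decoder for noiseless non-adaptive group testing: an item is kept as a candidate defective only if every pool containing it tested positive. With a d-disjunct test matrix this returns exactly the defectives; with a (d, ℓ)-list disjunct matrix it returns a superset containing fewer than ℓ non-defectives. This is an illustrative toy, not the paper's construction; the matrix, function names, and example are assumptions.

```python
# Minimal sketch of the naive "cover" decoder for noiseless non-adaptive
# group testing (illustrative names, not the paper's construction).

def run_tests(test_matrix, defectives):
    """Simulate noiseless pooled tests: a test is positive iff its pool
    contains at least one defective item."""
    return [any(row[j] for j in defectives) for row in test_matrix]

def cover_decode(test_matrix, outcomes):
    """Keep every item that never appears in a negative test."""
    n = len(test_matrix[0])
    return [j for j in range(n)
            if all(outcomes[i] for i, row in enumerate(test_matrix) if row[j])]

# Toy example: 3 items, 3 pairwise pools, item 2 defective.
M = [[1, 1, 0],
     [1, 0, 1],
     [0, 1, 1]]
y = run_tests(M, defectives={2})
print(cover_decode(M, y))  # [2]
```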
The Power of an Example: Hidden Set Size Approximation Using Group Queries and Conditional Sampling
2014
"... ar ..."
(Show Context)
GROTESQUE: Noisy Group Testing (Quick and Efficient)
2013
"... Group-testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a “small ” number of “pooled ” tests (i.e., tests that have a positive outcome if at least one of the items being tested in the pool is defective, else have a ..."
Cited by 1 (0 self)
Abstract:
Group testing refers to the problem of identifying (with high probability) a (small) subset of D defectives from a (large) set of N items via a “small” number of “pooled” tests (i.e., tests that have a positive outcome if at least one of the items being tested in the pool is defective, and a negative outcome otherwise). For ease of presentation, in this work we focus on the regime where D = O(N^(1-δ)) for some δ > 0. The tests may be noiseless or noisy, and the testing procedure may be adaptive (the pool defining a test may depend on the outcomes of previous tests) or non-adaptive (each test is performed independently of the outcomes of other tests). A rich body of literature demonstrates that Θ(D log N) tests are information-theoretically necessary and sufficient for the group-testing problem, and provides algorithms that achieve this performance. However, it is only recently that reconstruction algorithms with computational complexity that is sub-linear in N have started being investigated (recent work by [1], [2], [3] gave some of the first such algorithms). In the scenario with adaptive tests and noisy outcomes, we present the first scheme that is simultaneously order-optimal (up to small constant factors) in both the number of tests and the decoding complexity (O(D log N) in both performance metrics). The total number of stages of our adaptive algorithm is “small” (O(log D)). Similarly, in the scenario with non-adaptive tests and noisy outcomes, we present the first scheme that is simultaneously near-optimal in both the number of tests and the decoding complexity (via an algorithm that requires O(D log D log N) tests and has a decoding complexity of O(D(log N + log^2 D))). Finally, we present an adaptive algorithm that requires only 2 stages, and for which both the number of tests and the decoding complexity scale as O(D(log N + log^2 D)). For all three settings, the probability of error of our algorithms scales as O(1/poly(D)).
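As a rough illustration of the noisy non-adaptive testing model the abstract refers to (not the GROTESQUE algorithm itself, which achieves sub-linear decoding), the sketch below forms random pools, flips each test outcome independently with a small probability, and declares an item defective when a large enough fraction of its pools tested positive. All names, parameter values, and the 0.75 threshold are illustrative assumptions.

```python
# Sketch of noisy non-adaptive group testing with random pools and a simple
# per-item threshold decoder (illustrative only; not the paper's scheme).
import random

def noisy_group_test(n, defectives, num_tests, pool_prob, flip_prob, seed=0):
    """Build random pools and return them with their noisy outcomes."""
    rng = random.Random(seed)
    pools = [[rng.random() < pool_prob for _ in range(n)] for _ in range(num_tests)]
    outcomes = []
    for pool in pools:
        y = any(pool[j] for j in defectives)   # noiseless pooled outcome
        if rng.random() < flip_prob:           # independent test noise
            y = not y
        outcomes.append(y)
    return pools, outcomes

def threshold_decode(pools, outcomes, thresh=0.75):
    """Declare item j defective if most of its pools tested positive."""
    n = len(pools[0])
    decoded = []
    for j in range(n):
        mine = [outcomes[i] for i, pool in enumerate(pools) if pool[j]]
        if mine and sum(mine) / len(mine) >= thresh:
            decoded.append(j)
    return decoded

pools, y = noisy_group_test(n=200, defectives={3, 17}, num_tests=400,
                            pool_prob=0.05, flip_prob=0.02)
print(threshold_decode(pools, y))  # with high probability: [3, 17]
```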
Sublinear Time Algorithms for the Sparse Recovery Problem
2013
"... Foremost, I would like to express my deepest appreciation to my adviser, Prof. Martin Strauss, for his continuous support and guidance of my Ph.D. study and research. I am also indebted to Prof. Anna Gilbert deeply for introducing me to various workshop opportunities in addition to her guidance of m ..."
Abstract:
Foremost, I would like to express my deepest appreciation to my adviser, Prof. Martin Strauss, for his continuous support and guidance of my Ph.D. study and research. I am also deeply indebted to Prof. Anna Gilbert for introducing me to various workshop opportunities in addition to her guidance of my research. My sincere gratitude also goes to my collaborators, Prof. Ely Porat at Bar Ilan University and Dr. David Woodruff at IBM Almaden Research Center, both of whom possess numerous sparkling ideas. I learnt a lot from them, and I must also thank them for their patience. In particular, I am very grateful to David Woodruff for the many long phone discussions. I would also like to thank my thesis committee members, Assoc. Prof. Kevin Compton, Prof. Alfred Hero III and Assoc. Prof. Yaoyun Shi, for their reviews and helpful comments. As a non-driver in Ann Arbor, I am obliged to Caoxie Zhang and Hsin-hao Su, who offered me several important rides in my hour of need. I also want to thank Qian Li for offering me rides while we were on our internship at IBM Almaden Research Center. Special thanks to Hsin-hao Su for acquainting me with a number of restaurants in the Ann Arbor area. Many thanks to other friends and colleagues who have made my life in Ann Arbor more enjoyable, including but not limited to ...
Distributed sensor failure detection in sensor networks
In Signal Processing (journal homepage: www.elsevier.com/locate/sigpro)
Sub-linear Time Compressed Sensing for Support Recovery using Sparse-Graph Codes
2015
"... We address the problem of robustly recovering the support of high-dimensional sparse signals1 from linear measurements in a low-dimensional subspace. We introduce a new compressed sensing framework through carefully designed sparse measurement matrices associated with low measurement costs and low-c ..."
Abstract:
We address the problem of robustly recovering the support of high-dimensional sparse signals from linear measurements in a low-dimensional subspace. We introduce a new compressed sensing framework through carefully designed sparse measurement matrices associated with low measurement costs and low-complexity recovery algorithms. The measurement system in our framework captures observations of the signal through well-designed measurement matrices sparsified by capacity-approaching sparse-graph codes, and then recovers the signal by using a simple peeling decoder. As a result, we can simultaneously reduce both the measurement cost and the computational complexity. In this paper, we formally connect general sparse recovery problems in compressed sensing with sparse-graph decoding in packet-communication systems, and analyze our design in terms of measurement cost, computational complexity and recovery performance. Specifically, by structuring the measurements through sparse-graph codes, we propose two families of measurement matrices, the Fourier family and the binary family respectively, which lead to different measurement and computational costs. In the noiseless setting, our framework recovers the sparse support of any K-sparse signal in time O(K) with 2K measurements obtained by the Fourier family, or in time O(K log N) using K log_2 N + K measurements obtained by the binary family. In the presence of noise, both measurement and ...
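To make the peeling idea concrete, the sketch below shows a toy noiseless version of sparse-graph-code based support recovery: each nonzero entry is hashed to a few bins, every bin keeps the plain sum and the index-weighted sum of its entries, a bin holding a single unresolved nonzero (a "singleton") reveals that entry's index and value via a ratio test, and the recovered entry is then subtracted ("peeled") from its other bins. This illustrates the general technique under simplifying assumptions, not the paper's Fourier- or binary-family construction; all names and the toy hash are hypothetical.

```python
# Toy noiseless peeling decoder for sparse support recovery (illustrative).

def bins_for(k, num_bins):
    """Toy hash sending each entry to two distinct bins; a real design would
    use a capacity-approaching sparse-graph code here (num_bins even)."""
    return [k % num_bins, (3 * k + 1) % num_bins]

def measure(x, num_bins):
    """x: {index: nonzero value}. Each bin stores sum(x[k]) and sum(k*x[k])."""
    b0, b1 = [0] * num_bins, [0] * num_bins
    for k, v in x.items():
        for i in bins_for(k, num_bins):
            b0[i] += v
            b1[i] += k * v
    return b0, b1

def peel_decode(b0, b1, n, num_bins):
    recovered = {}
    progress = True
    while progress:
        progress = False
        for i in range(num_bins):
            if b0[i] == 0 or b1[i] % b0[i]:
                continue
            k = b1[i] // b0[i]
            # Ratio test for a singleton: candidate index is in range, hashes
            # to this bin, and has not been recovered yet. (Pathological
            # collisions could fool this test; real constructions guard it.)
            if 0 <= k < n and i in bins_for(k, num_bins) and k not in recovered:
                v = b0[i]
                recovered[k] = v
                for j in bins_for(k, num_bins):   # peel it from all its bins
                    b0[j] -= v
                    b1[j] -= k * v
                progress = True
    return recovered

x = {7: 3, 42: 5, 101: 2}                 # a 3-sparse signal of length 128
b0, b1 = measure(x, num_bins=16)
print(peel_decode(b0, b1, n=128, num_bins=16))  # recovers all three entries
```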