Results 1–10 of 120
Sparse Recovery Using Sparse Matrices
"... We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices has several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to several areas ..."
Abstract

Cited by 74 (12 self)
We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing and group testing.
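The incremental-update property mentioned above follows from linearity: changing one coordinate of x touches only the few sketch entries in that coordinate's column. A minimal Python sketch (a made-up toy construction with d ones per column, not any specific matrix from the survey):

```python
import random

def sparse_matrix(n, m, d, seed=0):
    """Column j of A has d ones in random rows (hypothetical toy construction)."""
    rng = random.Random(seed)
    return [rng.sample(range(m), d) for _ in range(n)]

def sketch(cols, x, m):
    # y = A x, exploiting the sparsity of both A and x
    y = [0.0] * m
    for j, xj in enumerate(x):
        if xj:
            for i in cols[j]:
                y[i] += xj
    return y

def update(cols, y, j, delta):
    """Incremental update: x[j] += delta touches only d sketch entries."""
    for i in cols[j]:
        y[i] += delta

n, m, d = 100, 20, 3
cols = sparse_matrix(n, m, d)
x = [0.0] * n
x[7] = 5.0
y = sketch(cols, x, m)
update(cols, y, 7, -2.0)     # signal coordinate 7 drops from 5.0 to 3.0
x[7] = 3.0
assert y == sketch(cols, x, m)
```

The update costs O(d) regardless of n, which is the point of using sparse measurement matrices.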
Efficient and Robust Compressed Sensing using Optimized Expander Graphs
"... Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any ndimensional vector that is ksparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper we imp ..."
Abstract

Cited by 47 (6 self)
Expander graphs have been recently proposed to construct efficient compressed sensing algorithms. In particular, it has been shown that any n-dimensional vector that is k-sparse can be fully recovered using O(k log n) measurements and only O(k log n) simple recovery iterations. In this paper we improve upon this result by considering expander graphs with expansion coefficient beyond 3/4 and show that, with the same number of measurements, only O(k) recovery iterations are required, which is a significant improvement when n is large. In fact, full recovery can be accomplished by at most 2k very simple iterations. The number of iterations can be reduced arbitrarily close to k, and the recovery algorithm can be implemented very efficiently using a simple priority queue, with total recovery time O(n log(n/k)). We also show that by tolerating a small penalty on the number of measurements, and not on the number of recovery iterations, one can use the efficient construction of a family of expander graphs to come up with explicit measurement matrices for this method. We compare our result with other recently developed expander-graph-based methods and argue that it compares favorably both in terms of the number of required measurements and in terms of the time complexity and the simplicity of recovery. Finally we show how our analysis extends to give a robust algorithm that finds the position and sign of the k significant elements of an almost k-sparse signal and then, using very simple optimization techniques, finds a k-sparse signal which is close to the best k-term approximation of the original signal.
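The flavour of queue-driven iterative recovery can be sketched with an IBLT-style peeling decoder. This is not the paper's algorithm (the buckets here store count/index/value sums, which the paper's measurements do not, and a plain FIFO queue replaces the priority queue), just an illustration of how a queue of "resolvable" measurements drives recovery in roughly O(k) peeling steps:

```python
import random
from collections import deque

def build(n, m, d, seed=1):
    """Sparse measurement graph: each signal coordinate touches d buckets
    (a toy stand-in for the left-regular expander in the paper)."""
    rng = random.Random(seed)
    return [rng.sample(range(m), d) for _ in range(n)]

def encode(cols, x, m):
    # each bucket keeps (count, index-sum, value-sum) over its contents
    B = [[0, 0, 0.0] for _ in range(m)]
    for j, v in x.items():
        for i in cols[j]:
            B[i][0] += 1; B[i][1] += j; B[i][2] += v
    return B

def decode(cols, B):
    out, q = {}, deque(i for i, b in enumerate(B) if b[0] == 1)
    while q:
        i = q.popleft()
        if B[i][0] != 1:
            continue                      # no longer a singleton bucket
        j, v = B[i][1], B[i][2]           # a singleton reveals one coordinate
        out[j] = v
        for i2 in cols[j]:                # peel it out of all its buckets
            B[i2][0] -= 1; B[i2][1] -= j; B[i2][2] -= v
            if B[i2][0] == 1:
                q.append(i2)
    return out

n, m, d = 1000, 100, 3
cols = build(n, m, d)
x = {17: 2.0, 256: -1.5, 400: 3.0, 512: 1.0, 903: 7.5}   # 5-sparse signal
out = decode(cols, encode(cols, x, m))
assert out == x
```

Each coordinate is peeled exactly once, so the number of queue iterations is proportional to the sparsity k rather than to n.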
Simulating Independence: New Constructions of Condensers, Ramsey Graphs, Dispersers, and Extractors
 In Proceedings of the 37th Annual ACM Symposium on Theory of Computing
, 2005
"... We present new explicit constructions of deterministic randomness extractors, dispersers and related objects. More precisely, a distribution X over binary strings of length n is called a δsource if it assigns probability at most 2 −δn to any string of length n, and for any δ> 0 we construct the ..."
Abstract

Cited by 41 (10 self)
We present new explicit constructions of deterministic randomness extractors, dispersers and related objects. More precisely, a distribution X over binary strings of length n is called a δ-source if it assigns probability at most 2^(−δn) to any string of length n, and for any δ > 0 we construct the following poly(n)-time computable functions: 2-source disperser: D: ({0,1}^n)^2 → {0,1} such that for any two independent δ-sources X1, X2 we have that the support of D(X1, X2) is {0,1}. Bipartite Ramsey graph: Let N = 2^n. A corollary is that the function D is a 2-coloring of the edges of K_{N,N} (the complete bipartite graph over two sets of N vertices) such that any induced subgraph of size N^δ by N^δ is not monochromatic. 3-source extractor: E: ({0,1}^n)^3 → {0,1} such that for any three independent δ-sources X1, X2, X3 we have that E(X1, X2, X3) is (o(1)-close to being) an unbiased random bit. No previous explicit construction was known for either of these for any δ < 1/2, and these results constitute major progress on long-standing open problems. A component in these results is a new construction of condensers that may be of independent interest.
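For intuition about two-source objects, the classic inner-product-mod-2 construction of Chor and Goldreich already extracts a near-unbiased bit from two independent sources whose min-entropies sum to more than n; the paper's contribution is going below δ = 1/2 per source, which inner product cannot do. A small empirical check on random flat sources (all parameters below are illustrative choices):

```python
import random

def inner_product_bit(x, y):
    # inner product over GF(2) of the bit representations of x and y
    return bin(x & y).count("1") % 2

def bias(S1, S2):
    # deviation of the output bit from uniform, over all source pairs
    ones = sum(inner_product_bit(a, b) for a in S1 for b in S2)
    return abs(ones / (len(S1) * len(S2)) - 0.5)

rng = random.Random(0)
n, k = 8, 6                              # string length n, min-entropy k
S1 = rng.sample(range(2 ** n), 2 ** k)   # flat delta-source, delta = k/n = 0.75
S2 = rng.sample(range(2 ** n), 2 ** k)
# Chor-Goldreich: the bias decays like 2^((n - k1 - k2)/2) once k1 + k2 > n
assert bias(S1, S2) < 0.25
```

With δ = 0.75 per source this works; pushing δ below 1/2 is exactly where such simple constructions fail and the paper's machinery is needed.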
Explicit Codes Achieving List Decoding Capacity: Error-correction with Optimal Redundancy
, 2008
"... We present errorcorrecting codes that achieve the informationtheoretically best possible tradeoff between the rate and errorcorrection radius. Specifically, for every 0 < R < 1 and ε> 0, we present an explicit construction of errorcorrecting codes of rate R that can be list decoded in ..."
Abstract

Cited by 40 (15 self)
We present error-correcting codes that achieve the information-theoretically best possible tradeoff between the rate and error-correction radius. Specifically, for every 0 < R < 1 and ε > 0, we present an explicit construction of error-correcting codes of rate R that can be list decoded in polynomial time up to a fraction (1 − R − ε) of worst-case errors. At least theoretically, this meets one of the central challenges in algorithmic coding theory. Our codes are simple to describe: they are folded Reed-Solomon codes, which are in fact exactly Reed-Solomon (RS) codes, but viewed as a code over a larger alphabet by careful bundling of codeword symbols. Given the ubiquity of RS codes, this is an appealing feature of our result, and in fact our methods directly yield better decoding algorithms for RS codes when errors occur in phased bursts. The alphabet size of these folded RS codes is polynomial in the block length. We are able to reduce this to a constant (depending on ε) using ideas concerning “list recovery” and expander-based codes from [11, 12]. Concatenating the folded RS codes with suitable inner codes also gives us polynomial-time constructible binary codes that can be efficiently list decoded up to the Zyablov bound, i.e., up to twice the radius achieved by the standard GMD decoding of concatenated codes.
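The "careful bundling" step is easy to show concretely: take an RS codeword, i.e., evaluations of the message polynomial at successive powers of a generator γ, and group m consecutive evaluations into one symbol over the larger alphabet. The sketch below does encoding only, over a small prime field (the list-decoding algorithm is far beyond a few lines):

```python
p, gamma = 17, 3     # small prime field GF(17); 3 generates its multiplicative group
m = 4                # folding parameter: bundle m consecutive evaluations

def rs_encode(msg):
    """Evaluate the message polynomial at gamma^0, ..., gamma^(p-2)."""
    pts = [pow(gamma, i, p) for i in range(p - 1)]
    return [sum(c * pow(a, j, p) for j, c in enumerate(msg)) % p for a in pts]

def fold(codeword, m):
    # view the length-(p-1) RS codeword as (p-1)/m symbols over GF(p)^m
    return [tuple(codeword[i:i + m]) for i in range(0, len(codeword), m)]

msg = [2, 0, 5]              # coefficients of a degree-2 message polynomial
folded = fold(rs_encode(msg), m)
assert len(folded) == (p - 1) // m and all(len(s) == m for s in folded)
```

Folding changes nothing about the codeword's content, only how symbols are grouped, which is why folded RS codes inherit all the structure of ordinary RS codes.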
Extensions to the Method of Multiplicities, with applications to Kakeya Sets and Mergers
, 2009
"... We extend the “method of multiplicities ” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F n q, the ndimensional vector space over the finite field on q elements, must be of size at least q n /2 n. This bound is tight to wit ..."
Abstract

Cited by 38 (6 self)
We extend the “method of multiplicities” to get the following results, of interest in combinatorics and randomness extraction. 1. We show that every Kakeya set in F_q^n, the n-dimensional vector space over the finite field on q elements, must be of size at least q^n/2^n. This bound is tight to within a 2 + o(1) factor for every n as q → ∞. 2. We give improved “randomness mergers”, i.e., seeded functions that take as input k (possibly correlated) random variables in {0,1}^N and a short random seed, and output a single random variable in {0,1}^N that is statistically close to having entropy (1 − δ)·N when one of the k input variables is distributed uniformly. The seed we require is only (1/δ)·log k bits long, which significantly improves upon previous constructions of mergers. The “method of multiplicities”, as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then used simple …
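The Kakeya bound can be sanity-checked by brute force in the smallest interesting case, F_3^2: enumerate subsets of the plane, find the smallest one containing a full line in every direction, and compare with q^n/2^n = 9/4. This verifies the statement numerically; it does not reproduce the multiplicity argument:

```python
from itertools import combinations, product

q, n = 3, 2
points = list(product(range(q), repeat=n))
directions = [(0, 1), (1, 0), (1, 1), (1, 2)]   # one representative per direction class

def line(a, b):
    # the line through point a with direction b, as a set of q points
    return {tuple((a[i] + t * b[i]) % q for i in range(n)) for t in range(q)}

lines_by_dir = [[line(a, b) for a in points] for b in directions]

def is_kakeya(S):
    Sset = set(S)
    return all(any(L <= Sset for L in Ls) for Ls in lines_by_dir)

# smallest subset of F_3^2 containing a line in every direction
best = min(k for k in range(1, q ** n + 1)
           if any(is_kakeya(S) for S in combinations(points, k)))
assert best >= q ** n / 2 ** n    # the paper's lower bound: 9/4 here
```

The search is tiny (2^9 subsets) but already shows the bound holding with room to spare; the paper's contribution is that it holds for every q and n.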
Non-malleable extractors and symmetric key cryptography from weak secrets
 In Proceedings of the 41st ACM Symposium on the Theory of Computing
, 2009
"... We study the question of basing symmetric key cryptography on weak secrets. In this setting, Alice and Bob share an nbit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional minentropy). Since standard symmetrick ..."
Abstract

Cited by 36 (11 self)
We study the question of basing symmetric-key cryptography on weak secrets. In this setting, Alice and Bob share an n-bit secret W, which might not be uniformly random, but the adversary has at least k bits of uncertainty about it (formalized using conditional min-entropy). Since standard symmetric-key primitives require uniformly random secret keys, we would like to construct an authenticated key agreement protocol in which Alice and Bob use W to agree on a nearly uniform key R, by communicating over a public channel controlled by an active adversary Eve. We study this question in the information-theoretic setting where the attacker is computationally unbounded. We show that single-round (i.e. one message) protocols do not work when k ≤ n/2, and require poor parameters even when n/2 < k ≪ n. On the other hand, for arbitrary values of k, we design a communication-efficient two-round (challenge-response) protocol extracting nearly k random bits. This dramatically improves the previous construction of Renner and Wolf [RW03], which requires Θ(λ + log(n)) rounds where λ is the security parameter. Our solution takes a new approach by studying and constructing “non-malleable” seeded randomness extractors: if an attacker sees a random seed X and comes up with an arbitrarily related seed X′, then we bound the relationship between R = Ext(W; X) and R′ = Ext(W; X′). We also extend our two-round key agreement protocol to the “fuzzy” setting, where Alice and Bob share “close” (but not equal) secrets W_A and W_B, and to the Bounded Retrieval Model (BRM), where the size of the secret W is huge.
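The Ext(W; X) interface is easy to illustrate with a standard pairwise-independent hash (leftover-hash-lemma style). Note this toy extractor is emphatically not non-malleable: shifting the seed shifts the output in a predictable way, which is precisely the attack the paper's construction must rule out. All parameters below are arbitrary illustrative choices:

```python
import random

P = (1 << 61) - 1    # a Mersenne prime, used as the hash field

def ext(w, seed, out_bits=16):
    """Seeded extractor sketch via a pairwise-independent hash.
    NOT non-malleable: the related seed (a, b + 1) shifts the
    pre-truncation value (a*w + b) mod P by exactly 1."""
    a, b = seed
    return ((a * w + b) % P) % (1 << out_bits)

rng = random.Random(7)
w = rng.getrandbits(60)                       # the weak secret W
seed = (rng.randrange(1, P), rng.randrange(P))
r = ext(w, seed)
assert 0 <= r < 1 << 16
```

A non-malleable extractor must instead guarantee that Ext(W; X′) looks uniform even given Ext(W; X) for any adversarially related X′ ≠ X, a much stronger property than the leftover hash lemma provides.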
Learning submodular functions
 IN PROCEEDINGS OF THE 43RD ANNUAL ACM SYMPOSIUM ON THEORY OF COMPUTING
, 2011
"... Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications that have been used in many areas, including combinatorial optimization, machine learning, and economics. In this work we use a learning theoretic angle for studying submodu ..."
Abstract

Cited by 27 (4 self)
Submodular functions are discrete functions that model laws of diminishing returns and enjoy numerous algorithmic applications in many areas, including combinatorial optimization, machine learning, and economics. In this work we study submodular functions from a learning-theoretic angle. We provide algorithms for learning submodular functions, as well as lower bounds on their learnability. In doing so, we uncover several novel structural results revealing both extremal properties and regularities of submodular functions, of interest to many areas.
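The "diminishing returns" law can be checked exhaustively on a small coverage function, a textbook example of a submodular function (the ground set and covered regions below are made up for illustration):

```python
from itertools import combinations

# Toy coverage function: f(S) = number of regions covered by the sensors in S.
cover = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
items = set(cover)

def f(S):
    return len(set().union(*(cover[i] for i in S)))

def subsets(it):
    it = list(it)
    for r in range(len(it) + 1):
        yield from (set(c) for c in combinations(it, r))

# Diminishing returns: for every S <= T and x outside T,
#   f(S + x) - f(S) >= f(T + x) - f(T)
for T in subsets(items):
    for S in subsets(T):
        for x in items - T:
            assert f(S | {x}) - f(S) >= f(T | {x}) - f(T)
```

Adding a sensor to a small deployment can only help at least as much as adding it to a larger one, which is the structure the paper's learning algorithms exploit.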
Near-Optimal Sparse Recovery in the L1 norm
"... We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x ∈ Rn from its lowerdimensional sketch Ax ∈ Rm. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an appro ..."
Abstract

Cited by 25 (2 self)
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional sketch Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x̂ of x such that the L1 approximation error ‖x − x̂‖1 is close to min_{x′} ‖x − x′‖1, where x′ ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different tradeoffs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes (see Figure 1). In particular, this is the first scheme that guarantees O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time simultaneously. It also features low encoding and update times, and is noise-resilient.
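The benchmark min_{x′} ‖x − x′‖1 has a closed form: the best k-sparse approximation keeps the k largest-magnitude entries of x, so the error is the L1 mass of everything else. A few lines of Python:

```python
def best_k_term_error(x, k):
    """min over k-sparse x' of ||x - x'||_1: keep the k largest-magnitude
    entries of x and pay the L1 mass of the rest."""
    return sum(sorted(abs(v) for v in x)[:max(len(x) - k, 0)])

x = [10, -7, 1, 2, -4, 3]
# drop the three smallest magnitudes (1, 2, 3); keep 10, -7, -4
assert best_k_term_error(x, 3) == 1 + 2 + 3
```

A recovery scheme's guarantee is then stated relative to this quantity: the returned x̂ must satisfy ‖x − x̂‖1 ≤ C · best_k_term_error(x, k) for some constant C.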
Privacy Amplification with Asymptotically Optimal Entropy Loss
, 2010
"... We study the problem of “privacy amplification”: key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place ..."
Abstract

Cited by 20 (4 self)
We study the problem of “privacy amplification”: key agreement between two parties who both know a weak secret w, such as a password. (Such a setting is ubiquitous on the internet, where passwords are the most commonly used security device.) We assume that the key agreement protocol is taking place in the presence of an active computationally unbounded adversary Eve. The adversary may have partial knowledge about w, so we assume only that w has some entropy from Eve’s point of view. Thus, the goal of the protocol is to convert this non-uniform secret w into a uniformly distributed string R that is fully secret from Eve. R may then be used as a key for running symmetric cryptographic protocols (such as encryption, authentication, etc.). Because we make no computational assumptions, the entropy in R can come only from w. Thus such a protocol must minimize the entropy loss during its execution, so that R is as long as possible. The best previous results have entropy loss of Θ(κ^2), where κ is the security parameter, thus requiring the password to be very long even for small values of κ. In this work, we present the first protocol for information-theoretic key agreement that has entropy loss linear in the security parameter. The result is optimal up …
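The practical effect of entropy loss is simple arithmetic: the agreed key R can be at most (entropy of w) minus (protocol loss) bits long. The constant 8 below for the linear-loss protocol is an arbitrary stand-in, not a figure from the paper:

```python
def extractable_bits(k, loss):
    """Length of the agreed key R given min-entropy k of w and entropy loss."""
    return max(k - loss, 0)

kappa = 64          # security parameter
k = 10_000          # min-entropy of the shared password w
# Older protocols lose Theta(kappa^2) bits; this paper loses O(kappa).
quadratic = extractable_bits(k, kappa ** 2)   # 10000 - 4096 = 5904 bits
linear = extractable_bits(k, 8 * kappa)       # 10000 -  512 = 9488 bits
assert linear > quadratic
```

With quadratic loss, a modest κ = 128 would already burn κ^2 = 16384 bits of entropy and leave nothing of this password, which is why linear loss matters.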
Graph-Constrained Group Testing
, 2010
"... Nonadaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in net ..."
Abstract

Cited by 18 (2 self)
Non-adaptive group testing involves grouping arbitrary subsets of n items into different pools. Each pool is then tested and defective items are identified. A fundamental question involves minimizing the number of pools required to identify at most d defective items. Motivated by applications in network tomography, sensor networks and infection propagation, we formulate group testing problems on graphs. Unlike conventional group testing problems, each group here must conform to the constraints imposed by a graph. For instance, items can be associated with vertices and each pool is any set of nodes that must be path connected. In this paper we associate a test with a random walk. In this context conventional group testing corresponds to the special case of a complete graph on n vertices. For interesting classes of graphs we arrive at a rather surprising result, namely, that the number of tests required to identify d defective items is substantially similar to that required in conventional group testing problems, where no such constraints on pooling are imposed. Specifically, if T(n) corresponds to the mixing time of the graph G, we show that with m = O(d^2 T^2(n) log(n/d)) non-adaptive tests, one can identify the defective items. Consequently, for the Erdős–Rényi random graph G(n, p), as well as expander graphs with constant spectral gap, it follows that m = O(d^2 log^3 n) non-adaptive tests suffice.
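For contrast with the graph-constrained setting, here is conventional non-adaptive group testing with random unconstrained pools and the classic COMP decoder; in the paper's setting, each pool would instead be the vertex set traced by a random walk. Pool density 1/d is a standard illustrative choice:

```python
import random

def comp_decode(pools, outcomes, n):
    """COMP decoding: every item in a negative pool is definitely clean;
    declare everything else defective (no false negatives, possibly a
    few false positives)."""
    clear = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            clear |= pool
    return set(range(n)) - clear

rng = random.Random(3)
n, d, m = 200, 3, 120
defective = set(rng.sample(range(n), d))
pools = [{i for i in range(n) if rng.random() < 1 / d} for _ in range(m)]
outcomes = [bool(pool & defective) for pool in pools]   # pool is positive iff it hits a defective
found = comp_decode(pools, outcomes, n)
assert defective <= found          # COMP never misses a defective item
```

The graph-constrained result says that replacing these arbitrary pools with random-walk pools costs only a factor depending on the mixing time T(n).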