Results 1–10 of 133
Exponential lower bound for 2-query locally decodable codes via a quantum argument
 JOURNAL OF COMPUTER AND SYSTEM SCIENCES
, 2003
Abstract

Cited by 134 (15 self)
A locally decodable code encodes n-bit strings x in m-bit codewords C(x) in such a way that one can recover any bit x_i from a corrupted codeword by querying only a few bits of that word. We use a quantum argument to prove that LDCs with 2 classical queries require exponential length: m = 2^{Ω(n)}. Previously this was known only for linear codes (Goldreich et al. 02). ...
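The classical 2-query LDC matching this lower bound is the Hadamard code, of length exactly m = 2^n: codeword position a stores the inner product <a, x> mod 2, and the decoder XORs two random positions. A minimal Python sketch (function names are illustrative, not from the paper; the majority vote is the standard amplification of the basic 2-query decoder):

```python
import random

random.seed(0)  # reproducible queries for the demo

def hadamard_encode(x):
    """Hadamard encoding of an n-bit message: codeword position a
    stores the inner product <a, x> mod 2, so the length is m = 2^n."""
    n = len(x)
    return [sum(((a >> j) & 1) * x[j] for j in range(n)) % 2
            for a in range(2 ** n)]

def decode_bit(codeword, n, i, trials=101):
    """2-query local decoder for x_i, amplified by majority vote:
    for a random a, codeword[a] XOR codeword[a ^ e_i] equals x_i
    whenever neither queried position is corrupted."""
    votes = sum(codeword[a] ^ codeword[a ^ (1 << i)]
                for a in (random.randrange(2 ** n) for _ in range(trials)))
    return int(2 * votes > trials)

x = [1, 0, 1, 1]
cw = hadamard_encode(x)
cw[5] ^= 1  # corrupt one of the 16 codeword bits
decoded = [decode_bit(cw, 4, i) for i in range(4)]
```

With a δ-fraction of corruptions, each pair of queries is correct with probability at least 1 − 2δ, so the majority vote recovers x_i with high probability. The exponential length of this code is exactly what the paper proves to be unavoidable for 2 classical queries.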
Towards 3-Query Locally Decodable Codes of Subexponential Length
, 2008
Abstract

Cited by 72 (6 self)
A q-query Locally Decodable Code (LDC) encodes an n-bit message x as an N-bit codeword C(x), such that one can probabilistically recover any bit x_i of the message by querying only q bits of the codeword C(x), even after some constant fraction of codeword bits has been corrupted. We give new constructions of three-query LDCs of vastly shorter length than that of previous constructions. Specifically, given any Mersenne prime p = 2^t − 1, we design three-query LDCs of length N = exp(O(n^{1/t})), for every n. Based on the largest known Mersenne prime, this translates to a length of less than exp(O(n^{10^{-7}})), compared to exp(O(n^{1/2})) in the previous constructions. It has often been conjectured that there are infinitely many Mersenne primes. Under this conjecture, our constructions yield three-query locally decodable codes of length N = exp(n^{O(1/log log n)}) for infinitely many n. We also obtain analogous improvements for Private Information Retrieval (PIR) schemes. We give 3-server PIR schemes with communication complexity of O(n^{10^{-7}}) to access an n-bit database, compared to the previous best scheme with complexity O(n^{1/5.25}). Assuming again that there are infinitely many Mersenne primes, we get 3-server PIR schemes of communication complexity n^{O(1/log log n)} for infinitely many n. Previous families of LDCs and PIR schemes were based on the properties of low-degree multivariate polynomials over finite fields. Our constructions are completely different and are obtained by constructing a large number of vectors in a small-dimensional vector space whose inner products are restricted to lie in an algebraically nice set.
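To get a feel for how large the improvement is, compare the exponents of the two bounds: both lengths have the form N = exp(n^c), with c = 1/2 before and c = 10^{-7} here. A quick numeric sanity check, purely illustrative (hidden constants suppressed):

```python
# Both lengths have the form N = exp(n^c); since N itself is astronomically
# large, we tabulate log N = n^c instead of N.
rows = []
for n in (10 ** 6, 10 ** 9, 10 ** 12):
    log_old = n ** 0.5    # previous constructions: log N = O(n^(1/2))
    log_new = n ** 1e-7   # Mersenne-prime construction: log N = O(n^(1e-7))
    rows.append((n, log_old, log_new))
    print(f"n = {n:.0e}: old log N ~ {log_old:.1e}, new log N ~ {log_new:.7f}")
```

Even at n = 10^{12}, the new exponent n^{10^{-7}} is barely above 1, while n^{1/2} is a million: the logarithm of the codeword length collapses from polynomial in n to essentially constant.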
Locally Testable Codes and PCPs of Almost-Linear Length
, 2002
Abstract

Cited by 69 (18 self)
Locally testable codes are error-correcting codes that admit very efficient codeword tests. Specifically, using ...
Some Applications of Coding Theory in Computational Complexity
, 2004
Abstract

Cited by 65 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography.
3-Query Locally Decodable Codes of Subexponential Length
, 2008
Abstract

Cited by 56 (2 self)
Locally Decodable Codes (LDCs) allow one to decode any particular symbol of the input message by making a constant number of queries to a codeword, even if a constant fraction of the codeword is damaged. In a recent work [Yek08] Yekhanin constructs a 3-query LDC with subexponential length of size exp(exp(O(log n / log log n))). However, this construction requires the conjecture that there are infinitely many Mersenne primes. In this paper we give the first unconditional constant-query LDC construction with subexponential codeword length. In addition, our construction reduces the codeword length: we give a construction of a 3-query LDC with codeword length exp(exp(O(√(log n · log log n)))). Our construction can also be extended to a higher number of queries: we give a 2^r-query LDC with length exp(exp(O((log n · (log log n)^{r−1})^{1/r}))).
Lower Bounds for Linear Locally Decodable Codes and Private Information Retrieval
, 2002
Abstract

Cited by 54 (4 self)
We prove that if a linear error-correcting code C : {0,1}^n → {0,1}^m is such that a bit of the message can be probabilistically reconstructed by looking at two entries of a corrupted codeword, then m = 2^{Ω(n)}. We also present several extensions of this result.
The complexity of online memory checking
 In Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science
, 2005
Abstract

Cited by 53 (3 self)
We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. in 1991, and hence the same (tight) lower bound applies also to that problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
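The classical fingerprinting baseline sits at one extreme of the s × q = Ω(n) trade-off: the verifier reads the entire retrieved file (q = n) and keeps only a short private fingerprint (small s). A minimal Python sketch of this baseline, with a keyed HMAC standing in for the randomized fingerprint (an illustrative assumption, not the authors' construction):

```python
import hmac
import hashlib
import os

def fingerprint(key, data):
    """Short private fingerprint of the stored file (here s = 256 bits,
    independent of the file size n)."""
    return hmac.new(key, data, hashlib.sha256).digest()

def verify(key, retrieved, tag):
    """Verification reads the entire retrieved file, i.e. q = n bits:
    the extreme point of the s * q = Omega(n) trade-off."""
    return hmac.compare_digest(fingerprint(key, retrieved), tag)

key = os.urandom(32)          # kept on the user's own machine
stored = b"the large remote file contents"
tag = fingerprint(key, stored)

ok_honest = verify(key, stored, tag)
ok_tampered = verify(key, b"the large remote file c0ntents", tag)
```

Note that HMAC is only computationally secure, which fits the paper's theme: beating the information-theoretic s × q = Ω(n) bound, even slightly, requires one-way functions.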
Two-Query PCP with Sub-Constant Error
, 2008
Abstract

Cited by 51 (5 self)
We show that the NP-complete language 3SAT has a PCP verifier that makes two queries to a proof of almost-linear size and achieves sub-constant probability of error o(1). The verifier performs only projection tests, meaning that the answer to the first query determines at most one accepting answer to the second query. Previously, by the parallel repetition theorem, there were PCP Theorems with two-query projection tests, but only (arbitrarily small) constant error and polynomial size [29]. There were also PCP Theorems with sub-constant error and almost-linear size, but a constant number of queries that is larger than 2 [26]. As a corollary, we obtain a host of new results. In particular, our theorem improves many of the hardness of approximation results that are proved using the parallel repetition theorem. A partial list includes the following: 1. 3SAT cannot be efficiently approximated to within a factor of 7/8 + o(1), unless P = NP. This holds even under almost-linear reductions. Previously, the best known NP-hardness ...
Extractors: Optimal up to Constant Factors
 STOC'03
, 2003
Abstract

Cited by 50 (12 self)
This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length. More precisely, for every n, k, our extractor uses a random seed of length O(log n) to transform any random source on n bits with min-entropy k into a distribution on (1 − α)k bits that is ε-close to uniform. Here α and ε can be taken to be any positive constants. (In fact, ε can be almost polynomially small.) Our improvements are obtained via three new techniques, each of which may be of independent interest. The first is a general construction of mergers [22] from locally decodable error-correcting codes. The second introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source). The third is a way to augment the “win-win repeated condensing” paradigm of [17] with error reduction techniques like [15], so that our constant seed-length condensers can be used without error accumulation.