Results 1–10 of 97
Improved Decoding of Reed-Solomon and Algebraic-Geometry Codes
IEEE Transactions on Information Theory, 1999
Abstract

Cited by 345 (44 self)
Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: Given n points {(x_i, y_i)}_{i=1}^n, x_i ...
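The "curve-fitting" formulation above has a direct, if exponential-time, brute-force analogue. The sketch below (the field size p and parameters d, t are illustrative toy values, and the exhaustive enumeration is nothing like the paper's polynomial-time algorithm) simply checks every low-degree polynomial over a small prime field:

```python
# Brute-force illustration of the list decoding problem for Reed-Solomon codes:
# enumerate all polynomials of degree <= d over GF(p) and keep those agreeing
# with the received word in at least t positions. Exponential in d; the paper's
# contribution is doing this in polynomial time.
from itertools import product

def list_decode_brute(points, d, t, p):
    """Return all degree-<=d polynomials over GF(p) agreeing with >= t points.

    points : list of (x_i, y_i) pairs with distinct x_i
    Polynomials are returned as coefficient tuples (c_0, ..., c_d)."""
    def evaluate(coeffs, x):
        return sum(c * pow(x, j, p) for j, c in enumerate(coeffs)) % p

    result = []
    for coeffs in product(range(p), repeat=d + 1):
        agree = sum(1 for x, y in points if evaluate(coeffs, x) == y)
        if agree >= t:
            result.append(coeffs)
    return result

# Received word: evaluations of f(x) = 1 + 2x over GF(5), with one error
# injected at x = 2 (the correct value there would be 0).
pts = [(0, 1), (1, 3), (2, 1), (3, 2), (4, 4)]
print(list_decode_brute(pts, d=1, t=4, p=5))   # → [(1, 2)]
```

With t = 4 of n = 5 agreements the list contains exactly the original polynomial, since two distinct lines cannot share four evaluation points.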
Decoding Reed-Solomon Codes beyond the Error-Correction Bound
1997
Abstract

Cited by 274 (18 self)
We present a randomized algorithm which takes as input n distinct points {(x_i, y_i)}_{i=1}^n from F × F (where F is a field) and integer parameters t and d, and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(...
Improved low-degree testing and its applications
In 29th STOC, 1997
Abstract

Cited by 142 (17 self)
NP = PCP(log n, 1) and related results crucially depend upon the close connection between the probability with which a function passes a low-degree test and the distance of this function to the nearest degree-d polynomial. In this paper we study a test proposed by Rubinfeld and Sudan [29]. The strongest previously known connection for this test states that a function passes the test with probability δ for some δ > 7/8 iff the function has agreement ≈ δ with a polynomial of degree d. We present a new, and surprisingly strong, analysis which shows that the preceding statement is true for δ ≪ 0.5. The analysis uses a version of Hilbert irreducibility, a tool used in the factoring of multivariate polynomials. As a consequence we obtain an alternate construction for the following proof system: a constant-prover 1-round proof system for NP languages in which the verifier uses O(log n) random bits, receives answers of size O(log n) bits, and has an error probability of at most 2^(−log^(1−ε) n). Such a proof system, which implies the NP-hardness of approximating Set Cover to within Ω(log n) factors, has already been obtained by Raz and Safra [28]. Our result was completed after we heard of their claim. A second consequence of our analysis is a self-tester/corrector for any buggy program that (supposedly) computes a polynomial over a finite field. If the program is correct on only a δ fraction of inputs, where δ ≪ 0.5, then the tester/corrector determines δ and generates O(1/δ) randomized programs, such that one of the programs is correct on every input, with high probability.
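The Rubinfeld-Sudan test restricts the function to a random line and checks that the restriction is a low-degree univariate polynomial. The sketch below illustrates that mechanic only (field size, trial count, and function names are illustrative; nothing here reflects the paper's analysis):

```python
# Sketch of a line-restriction low-degree test: to test whether
# f: F_p^m -> F_p has degree <= d, pick a random line x + t*h, read f at
# d + 2 points on it, and check that the restriction matches a degree-d
# univariate polynomial (via Lagrange interpolation through d + 1 points).
import random

def restriction_is_low_degree(f, x, h, d, p):
    """Check that t -> f(x + t*h) behaves like a degree-<=d polynomial."""
    ts = list(range(d + 2))
    vals = [f(tuple((xi + t * hi) % p for xi, hi in zip(x, h))) for t in ts]

    def lagrange_eval(t_star):
        # Interpolate through the first d+1 sample points, evaluate at t_star.
        total = 0
        for i in range(d + 1):
            num, den = 1, 1
            for j in range(d + 1):
                if i != j:
                    num = num * (t_star - ts[j]) % p
                    den = den * (ts[i] - ts[j]) % p
            total = (total + vals[i] * num * pow(den, p - 2, p)) % p
        return total

    # The interpolated polynomial must also predict the (d+2)-th point.
    return lagrange_eval(ts[d + 1]) == vals[d + 1]

def low_degree_test(f, m, d, p, trials=100):
    """Accept iff every sampled line-restriction looks degree-<= d."""
    for _ in range(trials):
        x = tuple(random.randrange(p) for _ in range(m))
        h = tuple(random.randrange(p) for _ in range(m))
        if not restriction_is_low_degree(f, x, h, d, p):
            return False
    return True

p = 101
print(low_degree_test(lambda v: (v[0] * v[1] + 3 * v[0]) % p, m=2, d=2, p=p))  # → True
```

A genuinely low-degree function passes every trial; the paper's question is how strongly a high pass probability constrains functions that are merely close to low-degree.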
Pseudorandom generators without the XOR Lemma (Extended Abstract)
1998
Abstract

Cited by 138 (23 self)
Impagliazzo and Wigderson [IW97] have recently shown that if there exists a decision problem solvable in time 2^O(n) and having circuit complexity 2^Ω(n) (for all but finitely many n) then P = BPP. This result is a culmination of a series of works showing connections between the existence of hard predicates and the existence of good pseudorandom generators. The construction of Impagliazzo and Wigderson goes through three phases of "hardness amplification" (a multivariate polynomial encoding, a first derandomized XOR Lemma, and a second derandomized XOR Lemma) that are composed with the Nisan-Wigderson [NW94] generator. In this paper we present two different approaches to proving the main result of Impagliazzo and Wigderson. In developing each approach, we introduce new techniques and prove new results that could be useful in future improvements and/or applications of hardness-randomness tradeoffs. Our first result is that when (a modified version of) the Nisan-Wigderson generator construction is applied with a "mildly" hard predicate, the result is a generator that produces a distribution indistinguishable from having large min-entropy. An extractor can then be used to produce a distribution computationally indistinguishable from uniform. This is the first construction of a pseudorandom generator that works with a mildly hard predicate without doing hardness amplification. We then show that in the Impagliazzo-Wigderson construction only the first hardness-amplification phase (encoding with a multivariate polynomial) is necessary, since it already gives the required average-case hardness. We prove this result by (i) establishing a connection between the hardness-amplification problem and a list-decoding...
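The Nisan-Wigderson generator referenced above feeds overlapping pieces of the seed, selected by a combinatorial design, into a hard predicate. A toy sketch with illustrative sizes (a real instantiation uses a provably hard predicate and much larger designs; parity here is only a stand-in and is of course not hard):

```python
# Toy Nisan-Wigderson generator: each output bit is the predicate applied to
# the seed restricted to one set of a combinatorial design, i.e. sets with
# small pairwise intersections so the output bits share few seed positions.
def nw_generator(predicate, design, seed_bits):
    """One output bit per design set: predicate(seed restricted to that set)."""
    return [predicate(tuple(seed_bits[i] for i in s)) for s in design]

# A tiny design: 3-element subsets of {0..5} with pairwise intersection <= 1.
design = [(0, 1, 2), (0, 3, 4), (1, 3, 5), (2, 4, 5)]
parity = lambda bits: sum(bits) % 2    # stand-in for a hard predicate

seed = [1, 0, 1, 1, 0, 1]              # 6 seed bits stretched to 4 output bits
print(nw_generator(parity, design, seed))   # → [0, 0, 0, 0]
```

The stretch here is trivial; the point is the structure the paper modifies, in which security rests on the hardness of the predicate and the small overlaps between design sets.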
Extracting all the Randomness and Reducing the Error in Trevisan's Extractors
In Proceedings of the 31st Annual ACM Symposium on Theory of Computing, 1999
Abstract

Cited by 81 (14 self)
We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log² n) additional random bits, and can extract all the min-entropy using O(log³ n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1/ε) + O(1) bits, while still using O(log³ n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) − (# output bits)], and ε is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. Our...
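As a baseline for what a seeded extractor computes, here is the classic Leftover-Hash-Lemma extractor via a pairwise-independent hash. This is emphatically not the paper's construction: its seed is as long as the source string, which is exactly the cost the O(log² n)- and O(log³ n)-seed constructions above avoid.

```python
# Leftover-Hash-Lemma extractor sketch: the seed picks a pairwise-independent
# hash h(x) = (a*x + b) mod p, and the output is h(x) truncated to m bits.
# If the source has enough min-entropy, the output is close to uniform.
import random

P = (1 << 61) - 1                    # a Mersenne prime modulus

def lhl_extract(x, seed, m, p=P):
    """Extract m bits from source sample x using seed = (a, b) with a != 0."""
    a, b = seed
    return ((a * x + b) % p) & ((1 << m) - 1)   # keep the low m bits

seed = (random.randrange(1, P), random.randrange(P))   # truly random seed
sample = 0xDEADBEEF                  # one draw from a hypothetical weak source
print(lhl_extract(sample, seed, m=16))
```

The design trade-off the paper addresses is seed length versus output quality: hashing gives near-optimal entropy loss but a wastefully long seed.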
Some Applications of Coding Theory in Computational Complexity
2004
Abstract

Cited by 65 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally-testable and locally-decodable error-correcting codes, and their applications to complexity theory and to cryptography.
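A standard example of the locally-decodable codes such surveys cover is the Hadamard code, where any single codeword bit can be recovered with two queries to a corrupted codeword. The sketch below uses toy sizes and an arbitrarily chosen number of majority votes:

```python
# The Hadamard code encodes x as all parities <x, a>. Position a of a
# corrupted codeword can be recovered with two queries, at r and r XOR a for
# random r, since <x, r> XOR <x, r XOR a> = <x, a>.
import random

def hadamard_encode(x, n):
    """Codeword entry a -> <x, a> mod 2, for all a in {0,1}^n (as ints)."""
    return [bin(x & a).count("1") % 2 for a in range(1 << n)]

def local_decode(word, a, n):
    """Two-query local decoding of position a of a possibly corrupted word."""
    r = random.randrange(1 << n)
    return word[r] ^ word[r ^ a]

n, x = 4, 0b1011
word = hadamard_encode(x, n)
word[3] ^= 1                         # corrupt one of the 16 positions

a = 0b0101
votes = [local_decode(word, a, n) for _ in range(101)]
decoded = max(set(votes), key=votes.count)      # majority vote
print(decoded, bin(x & a).count("1") % 2)       # decoded bit vs. true parity
```

Each two-query vote touches the corrupted position with probability 2/16, so the majority over 101 votes is correct except with negligible probability; this query-efficiency-versus-rate trade-off is a central theme of the survey.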
List decoding algorithms for certain concatenated codes
Proc. of the 32nd Annual ACM Symposium on Theory of Computing (STOC), 2000
Abstract

Cited by 58 (24 self)
We give efficient (polynomial-time) list-decoding algorithms for certain families of error-correcting codes obtained by "concatenation". Specifically, we give list-decoding algorithms for codes where the "outer code" is a Reed-Solomon or algebraic-geometry code and the "inner code" is a Hadamard code. Codes obtained by such concatenation are the best known constructions of error-correcting codes with very large minimum distance. Our decoding algorithms enhance their nice combinatorial properties with algorithmic ones, by decoding these codes up to the currently known bound on their list-decoding "capacity". In particular, the number of errors that we can correct matches (exactly) the number of errors for which it is known that the list size is bounded by a polynomial in the length of the codewords.
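The concatenation itself is easy to sketch at toy scale (parameters are illustrative, and this shows only encoding, not the paper's list-decoding algorithms): the outer Reed-Solomon code produces symbols over GF(p), each of which is then spread into bits by a Hadamard-style inner code.

```python
# Concatenated encoding sketch: outer Reed-Solomon over GF(p), inner
# Hadamard-style code applied to the bit representation of each symbol.
def rs_encode(msg_coeffs, p):
    """Outer RS: evaluate the message polynomial at every point of GF(p)."""
    return [sum(c * pow(x, j, p) for j, c in enumerate(msg_coeffs)) % p
            for x in range(p)]

def hadamard_encode_symbol(s, n):
    """Inner code: all parities <s, a> over the n-bit representation of s."""
    return [bin(s & a).count("1") % 2 for a in range(1 << n)]

p, n = 7, 3                      # GF(7) symbols fit in 3 bits
msg = [2, 5]                     # message polynomial 2 + 5x
outer = rs_encode(msg, p)
codeword = [bit for s in outer for bit in hadamard_encode_symbol(s, n)]
print(outer, len(codeword))      # → [2, 0, 5, 3, 1, 6, 4] 56
```

The rate is poor (2 message symbols become 56 bits) but the minimum distance is large, which is the combinatorial property the paper's decoders exploit algorithmically.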
Reconstructing algebraic functions from mixed data
FOCS, 1992
Abstract

Cited by 49 (11 self)
We consider the task of reconstructing algebraic functions given by black boxes. Unlike traditional settings, we are interested in black boxes which represent several algebraic functions f_1, ..., f_k, where at each input x, the box arbitrarily chooses a subset of f_1(x), ..., f_k(x) to output. We show how to reconstruct the functions f_1, ..., f_k from the black box. This allows us to group the sample points into sets, such that for each set, all outputs to points in the set are from the same algebraic function. Our methods are robust in the presence of errors in the black box. Our model and techniques can be applied in the areas of computer vision, machine learning, curve fitting and polynomial approximation, self-correcting programs, and bivariate polynomial factorization.
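For the special case of k = 2 lines over a small field, the grouping task can be illustrated by an exhaustive pair-fitting sketch (the support threshold and names are illustrative; the paper handles general algebraic functions, and with errors, far more efficiently):

```python
# Toy version of the reconstruction task: each sample lies on one of two
# unknown lines over GF(p). Fit a line through every pair of points and keep
# any line that explains enough samples; samples then group by which kept
# line they lie on.
def lines_through_pairs(pts, p, support):
    """Return all lines (a, b), y = a + b*x over GF(p), covering >= support points."""
    found = set()
    for (x1, y1) in pts:
        for (x2, y2) in pts:
            if x1 == x2:
                continue
            b = (y2 - y1) * pow(x2 - x1, p - 2, p) % p   # slope
            a = (y1 - b * x1) % p                        # intercept
            if sum(1 for x, y in pts if (a + b * x) % p == y) >= support:
                found.add((a, b))
    return sorted(found)

p = 11
# Mixed samples: three from f1(x) = 1 + 2x and three from f2(x) = 4 + 7x.
pts = [(0, 1), (1, 3), (2, 5), (0, 4), (1, 0), (3, 3)]
print(lines_through_pairs(pts, p, support=3))   # → [(1, 2), (4, 7)]
```

Both hidden lines are recovered, after which each sample can be assigned to the line it satisfies, which is the grouping described in the abstract.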