Results 1–10 of 130
Improved Decoding of Reed-Solomon and Algebraic-Geometry Codes
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1999
Abstract

Cited by 343 (42 self)
Given an error-correcting code over strings of length n and an arbitrary input string also of length n, the list decoding problem is that of finding all codewords within a specified Hamming distance from the input string. We present an improved list decoding algorithm for decoding Reed-Solomon codes. The list decoding problem for Reed-Solomon codes reduces to the following "curve-fitting" problem over a field F: Given n points {(x_i, y_i)}, i = 1, …, n, with x_i …
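The list-decoding problem in this abstract can be made concrete with a brute-force decoder for a toy Reed-Solomon code (a sketch for intuition only, not the paper's algorithm; the field GF(5), message length 2, and decoding radius 2 below are made-up toy parameters):

```python
from itertools import product

p, k = 5, 2            # toy field GF(5); messages = coefficient vectors of degree-<2 polynomials
xs = list(range(p))    # n = 5 evaluation points, one per field element

def encode(msg):
    # Reed-Solomon encoding: evaluate msg[0] + msg[1]*x at every point.
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in xs]

def list_decode(received, e):
    # Brute force: return every codeword within Hamming distance e of `received`.
    out = []
    for msg in product(range(p), repeat=k):
        cw = encode(msg)
        if sum(a != b for a, b in zip(cw, received)) <= e:
            out.append((msg, cw))
    return out

word = encode((1, 2))          # codeword of the polynomial 1 + 2x: [1, 3, 0, 2, 4]
word[0] = (word[0] + 1) % p    # corrupt two of the five positions
word[3] = (word[3] + 2) % p
print(list_decode(word, 2))
```

Note that the decoder returns a list: here both (1, 2) and (2, 4) encode codewords within distance 2 of the corrupted word, which is exactly what distinguishes list decoding from unique decoding.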
Reliable Communication Under Channel Uncertainty
 IEEE TRANS. INFORM. THEORY
, 1998
Abstract

Cited by 175 (5 self)
In many communication situations, the transmitter and the receiver must be designed without a complete knowledge of the probability law governing the channel over which transmission takes place. Various models for such channels and their corresponding capacities are surveyed. Special emphasis is placed on the encoders and decoders which enable reliable communication over these channels.
Learning polynomials with queries: The highly noisy case
, 1995
Abstract

Cited by 97 (18 self)
Given a function f mapping n-variate inputs from a finite field F into F, we consider the task of reconstructing a list of all n-variate degree-d polynomials which agree with f on a tiny but non-negligible fraction, δ, of the input space. We give a randomized algorithm for solving this task which accesses f as a black box and runs in time polynomial in 1/δ, n and exponential in d, provided δ is Ω(√(d/|F|)). For the special case when d = 1, we solve this problem for every δ > 1/|F|. In this case the running time of our algorithm is bounded by a polynomial in 1/δ and n, and exponential in d. Our algorithm generalizes a previously known algorithm, due to Goldreich and Levin, that solves this task for the case when F = GF(2) (and d = 1).

This task is related to the agnostic learning framework of Kearns et al. [21] (see also [27, 28, 22]). In the setting of agnostic learning, the learner is to make no assumptions regarding the natural phenomena underlying the input/output relationship of the function, and the goal of the learner is to come up with a simple explanation which best fits the examples. Therefore the best explanation may account for only part of the phenomena. In some situations, when the phenomena appear very irregular, providing an explanation which fits only part of them is better than nothing. Interestingly, Kearns et al. did not consider the use of queries (but rather examples drawn from an arbitrary distribution), as they were skeptical that queries could be of any help. We show that queries do seem to help (see below).
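For the special case F = GF(2) and d = 1 mentioned above (the Goldreich-Levin setting), the task can be illustrated by an exhaustive search over all linear functions (a toy sketch, not the efficient query-based algorithm; the noisy oracle f and the dimension n = 3 are invented for illustration):

```python
from itertools import product

n = 3  # toy dimension; real instances are far too large for brute force

def f(x):
    # Hypothetical noisy oracle: the linear function <x, a> for a = (1, 0, 1),
    # with the output flipped on one input to inject noise.
    a = (1, 0, 1)
    val = sum(xi * ai for xi, ai in zip(x, a)) % 2
    return val ^ 1 if x == (1, 1, 1) else val

def agreeing_linear_functions(oracle, tau):
    # List every a in GF(2)^n whose linear function <x, a> agrees with the
    # oracle on at least a tau fraction of all 2^n inputs (query access).
    inputs = list(product((0, 1), repeat=n))
    matches = []
    for a in product((0, 1), repeat=n):
        agree = sum(oracle(x) == sum(xi * ai for xi, ai in zip(x, a)) % 2
                    for x in inputs)
        if agree >= tau * len(inputs):
            matches.append(a)
    return matches

print(agreeing_linear_functions(f, 7 / 8))  # only (1, 0, 1) agrees on 7 of 8 inputs
```

Because two distinct linear functions over GF(2)^n agree on exactly half the inputs, an agreement threshold well above 1/2 pins down a short list, as here.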
Elimination of correlation in random codes for arbitrarily varying channels
 Z. Wahrscheinlichkeitstheorie verw. Gebiete
, 1978
Abstract

Cited by 80 (12 self)
a) the average error capacity and b) the maximal error capacity in case of randomized encoding. A formula for the average error capacity in case of randomized encoding was announced several years ago by Dobrushin ([3]). Under a mild regularity condition this formula turns out to be valid and follows as a consequence from either a) or b).

1. The Channel Model and the Coding Problems

Since several articles have been written on this subject we begin right away with the mathematical notions needed. The reader not sufficiently familiar with the concepts used will find some heuristic explanations in the last section. Let X and Y be finite sets, which serve as input and output alphabets of the channels described below. Let S be an arbitrary set, and let W = {w(·|·|s) : s ∈ S} be a set of stochastic |X| × |Y| matrices. For every s^n = (s_1, …, s_n) ∈ S^n we define transmission probabilities P(·|·|s^n) by

P(y^n | x^n | s^n) = ∏_{t=1}^{n} w(y_t | x_t | s_t)

for all x^n = (x_1, …, x_n) ∈ X^n, y^n = (y_1, …, y_n) ∈ Y^n, and all n = 1, 2, …
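The per-letter product formula for the transmission probabilities translates directly into code (the two-state binary channel W below is a made-up toy example):

```python
from math import prod  # Python 3.8+

# Toy arbitrarily varying channel: binary input/output alphabets, and one
# stochastic matrix w(.|.|s) per state s (rows indexed by input, rows sum to 1).
W = {
    0: [[0.9, 0.1], [0.2, 0.8]],  # state 0: a fairly clean channel
    1: [[0.6, 0.4], [0.5, 0.5]],  # state 1: a noisier channel
}

def transmission_prob(y, x, s):
    # P(y^n | x^n | s^n) = prod over t of w(y_t | x_t | s_t)
    return prod(W[st][xt][yt] for yt, xt, st in zip(y, x, s))

print(transmission_prob((0, 1, 1), (0, 1, 0), (0, 1, 1)))  # 0.9 * 0.5 * 0.4
```

The state sequence s^n is chosen arbitrarily (possibly adversarially), which is what makes coding for this channel model hard.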
Some Applications of Coding Theory in Computational Complexity
, 2004
Abstract

Cited by 69 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography.
List decoding algorithms for certain concatenated codes
 Proc. of the 32nd Annual ACM Symposium on Theory of Computing (STOC)
, 2000
Abstract

Cited by 57 (21 self)
We give efficient (polynomial-time) list-decoding algorithms for certain families of error-correcting codes obtained by "concatenation". Specifically, we give list-decoding algorithms for codes where the "outer code" is a Reed-Solomon or algebraic-geometric code and the "inner code" is a Hadamard code. Codes obtained by such concatenation are the best known constructions of error-correcting codes with very large minimum distance. Our decoding algorithms enhance their nice combinatorial properties with algorithmic ones, by decoding these codes up to the currently known bound on their list-decoding "capacity". In particular, the number of errors that we can correct matches (exactly) the number of errors for which it is known that the list size is bounded by a polynomial in the length of the codewords.
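Concatenation as described here — an outer Reed-Solomon code whose symbols are re-encoded by an inner Hadamard code — can be sketched over a toy field (GF(5) and a q-ary Hadamard inner code with one-symbol messages are illustrative choices only; the paper's inner code is the binary Hadamard code):

```python
p = 5                      # toy field GF(5)
xs = list(range(p))        # evaluation points for the outer Reed-Solomon code

def rs_encode(msg):
    # Outer code: evaluate the polynomial with coefficients `msg` at every point.
    return [sum(c * pow(x, i, p) for i, c in enumerate(msg)) % p for x in xs]

def hadamard_encode(a):
    # Inner code (q-ary Hadamard with message length 1): all multiples of a.
    return [a * x % p for x in xs]

def concat_encode(msg):
    # Concatenation: inner-encode each symbol of the outer codeword.
    return [v for sym in rs_encode(msg) for v in hadamard_encode(sym)]

cw = concat_encode((1, 2))
print(len(cw))   # 5 outer symbols, each expanded to 5 inner symbols -> 25
```

The rate drops (here from 2/5 to 2/25), but the minimum distance of the concatenated code is at least the product of the inner and outer minimum distances, which is what makes such constructions attractive.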
Extractor Codes
, 2001
Abstract

Cited by 50 (7 self)
We define new error-correcting codes based on extractors. We show that for certain choices of parameters these codes have better list-decoding properties than are known for other codes, and are provably better than Reed-Solomon codes. We further show that codes with strong list-decoding properties are equivalent to slice extractors, a variant of extractors. We give an application of extractor codes to extracting many hard-core bits from a one-way function, using few auxiliary random bits. Finally, we show that explicit slice extractors for certain other parameters would yield optimal bipartite Ramsey graphs.
Expander-based constructions of efficiently decodable codes
 In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science
, 2001
Abstract

Cited by 49 (18 self)
We present several novel constructions of codes which share the common thread of using expander (or expander-like) graphs as a component. The expanders enable the design of efficient decoding algorithms that correct a large number of errors through various forms of "voting" procedures. We consider both the notions of unique and list decoding, and in all cases obtain asymptotically good codes which are decodable up to a "maximum" possible radius and either (a) achieve a similar rate as the previously best known codes but come with significantly faster algorithms, or (b) achieve a rate better than any prior construction with similar error-correction properties. Among our main results are:

- Codes of rate Ω(ε²) over a constant-sized alphabet that can be list decoded in quadratic time from a (1 − ε) fraction of errors. This matches the performance of the best algebraic-geometric (AG) codes, but with much faster encoding and decoding algorithms.

- Codes of rate Ω(ε) over a constant-sized alphabet that can be uniquely decoded from a (1/2 − ε) fraction of errors in near-linear time (once again this matches AG codes with much faster algorithms). This construction is similar to that of [1], and our decoding algorithm can be viewed as a positive resolution of their main open question.

- Linear-time encodable and decodable binary codes of positive rate that can correct up to a (1/4 − ε) fraction of errors. Note that this is the best error-correction one can hope for using unique decoding of binary codes. This significantly improves the fraction of errors corrected by the earlier linear-time codes of Spielman [19] and the linear-time decodable codes of [18, 22].
Fifty Years of Shannon Theory
, 1998
Abstract

Cited by 49 (1 self)
A brief chronicle is given of the historical development of the central problems in the theory of fundamental limits of data compression and reliable communication.