Results 1-10 of 44
Coding theorems for turbo code ensembles
 IEEE Trans. Inf. Theory
, 2002
Abstract

Cited by 40 (0 self)
Abstract—This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are “good” in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ0 such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ0, the average maximum-likelihood (ML) decoder block error probability approaches zero, at least as fast as n^(-β), where β is the “interleaver gain” exponent defined by Benedetto et al. in 1996. Index Terms—Bhattacharyya parameter, coding theorems, maximum-likelihood decoding (MLD), turbo codes, union bound.
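The Bhattacharyya noise parameter that the theorem's threshold γ0 is compared against has simple closed forms for common binary-input channels. A minimal sketch, not from the paper itself; the channel choices and function names are illustrative:

```python
from math import sqrt

def bhattacharyya_bsc(p):
    """Bhattacharyya parameter of a binary symmetric channel
    with crossover probability p: gamma = 2*sqrt(p*(1-p))."""
    return 2.0 * sqrt(p * (1.0 - p))

def bhattacharyya_bec(eps):
    """Bhattacharyya parameter of a binary erasure channel
    with erasure probability eps: gamma = eps."""
    return eps

# The theorem states: if the channel's gamma is below the ensemble
# threshold gamma_0, the average ML block error probability decays
# at least as fast as n**(-beta).
print(bhattacharyya_bsc(0.1))   # about 0.6
print(bhattacharyya_bec(0.25))  # 0.25
```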
Upper Bound on the Minimum Distance of Turbo Codes Using a Combinatorial Approach
 IEEE Transactions on Communications
, 2000
Abstract

Cited by 27 (1 self)
By using combinatorial considerations, we derive new upper bounds on the minimum Hamming distance that Turbo codes can maximally attain with arbitrary (including the best) interleavers. The new bounds prove that, in contrast to general linear binary channel codes, the minimum Hamming distance of Turbo codes cannot asymptotically grow faster than the cube root of the codeword length.
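A quick numerical reading of the cube-root scaling, d_min = O(n^(1/3)); the constant c below is a hypothetical placeholder, the paper derives its actual constants from the interleaver combinatorics:

```python
# Cube-root cap on Turbo-code minimum distance: even with the best
# interleaver, d_min grows no faster than roughly c * n**(1/3).
c = 1.0  # placeholder constant, not from the paper
for n in (1_000, 10_000, 100_000, 1_000_000):
    print(n, round(c * n ** (1.0 / 3.0)))
```

So a thousand-fold increase in codeword length buys only a ten-fold increase in the distance cap.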
Decoding Turbo-Like Codes via Linear Programming
Abstract

Cited by 26 (6 self)
We introduce a novel algorithm for decoding turbo-like codes based on linear programming. We prove that for the case of Repeat-Accumulate (RA) codes, under the binary symmetric channel with a certain constant threshold bound on the noise, the error probability of our algorithm is bounded by an inverse polynomial in the code length. Our linear program (LP) minimizes the distance between the received bits and binary variables representing the code bits. Our LP is based on a representation of the code where codewords are paths through a graph. Consequently, the LP bears a strong resemblance to the min-cost flow LP. The error bounds are based on an analysis of the probability, over the random noise of the channel, that the optimum solution to the LP is the path corresponding to the original transmitted codeword.
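The codewords-as-paths view can be sketched concretely. This is a toy example, not the paper's LP: for a simple accumulator code (output y[t] = s XOR u[t], next state = y[t]), codewords are paths through a two-state trellis, and the integral version of the min-cost-flow LP is just a shortest path under Hamming cost to the received word. The encoder and decoder names here are ours:

```python
def accumulate(bits):
    """Toy accumulator encoder: y[t] = s XOR u[t], next state = y[t]."""
    s, out = 0, []
    for u in bits:
        s = s ^ u
        out.append(s)
    return out

def decode_shortest_path(received):
    """Min-cost path through the two-state accumulator trellis.
    Edge cost = Hamming distance between the edge's output bit and
    the received bit; the paper's LP is the flow relaxation of this
    shortest-path problem."""
    INF = float("inf")
    cost = {0: 0, 1: INF}  # encoder starts in state 0
    back = []
    for r in received:
        new_cost, choices = {0: INF, 1: INF}, {}
        for s in (0, 1):
            for u in (0, 1):
                y = s ^ u  # output bit equals the next state
                c = cost[s] + (y != r)
                if c < new_cost[y]:
                    new_cost[y], choices[y] = c, s
        back.append(choices)
        cost = new_cost
    # backtrack from the cheaper final state
    s = 0 if cost[0] <= cost[1] else 1
    path = []
    for choices in reversed(back):
        path.append(s)        # the output bit is the state entered
        s = choices[s]
    return path[::-1]

word = accumulate([1, 0, 1, 1, 0])
assert decode_shortest_path(word) == word  # noiseless: recovers codeword
```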
The Minimum Distance of Turbo-Like Codes
Abstract

Cited by 22 (0 self)
Worst-case upper bounds are derived on the minimum distance of parallel concatenated Turbo codes, serially concatenated convolutional codes, repeat-accumulate codes, repeat-convolute codes, and generalizations of these codes obtained by allowing nonlinear and large-memory constituent codes. It is shown that parallel-concatenated Turbo codes and repeat-convolute codes with sublinear memory are asymptotically bad. It is also shown that depth-two serially concatenated codes with constant-memory outer codes and sublinear-memory inner codes are asymptotically bad. Most of these upper bounds hold even when the convolutional encoders are replaced by general finite-state automata encoders. In contrast, it is proven that depth-three serially concatenated codes obtained by concatenating a repetition code with two accumulator codes through random permutations can be asymptotically good.
An analysis of the block error probability performance of iterative decoding
 IEEE Transactions on Information Theory
, 2005
Abstract

Cited by 21 (4 self)
Abstract—Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes when the block length N of the codes and the number I of iterations go to infinity. Three classes of codes are considered. These are Gallager’s regular low-density parity-check (LDPC) codes, Tanner’s generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes and iterative decoding algorithms for these codes for which not only the bit error probability P_b, but also the block (frame) error probability P_B, goes to zero as N and I go to infinity. Index Terms—Belief propagation, block error probability, convergence analysis, density evolution, iterative decoding, low-density parity-check (LDPC) codes, turbo codes.
The Serial Concatenation of Rate-1 Codes through Uniform Random Interleavers
, 2003
Abstract

Cited by 20 (3 self)
Until the analysis of Repeat-Accumulate codes by Divsalar et al., few people would have guessed that simple rate-1 codes could play a crucial role in the construction of "good" binary codes. In this paper, we construct "good" binary linear block codes at any rate r < 1 by serially concatenating an arbitrary outer code of rate r with a large number of rate-1 inner codes through uniform random interleavers. We derive the average output weight enumerator (WE) for this ensemble in the limit as the number of inner codes goes to infinity. Using a probabilistic upper bound on the minimum distance, we prove that long codes from this ensemble will achieve the Gilbert-Varshamov bound with high probability. Numerical evaluation of the minimum distance shows that the asymptotic bound can be achieved with a small number of inner codes. In essence, this construction produces codes with good distance properties that are also compatible with iterative "turbo"-style decoding. For selected codes, we also present bounds on the probability of maximum-likelihood decoding (MLD) error and simulation results for the probability of iterative decoding error.
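For reference, the Gilbert-Varshamov bound mentioned here gives the relative distance δ solving H(δ) = 1 − R for a rate-R code. A small sketch (the function names are ours, not the paper's):

```python
from math import log2

def binary_entropy(x):
    """H(x) = -x*log2(x) - (1-x)*log2(1-x), with H(0) = H(1) = 0."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1.0 - x) * log2(1.0 - x)

def gv_relative_distance(rate, tol=1e-12):
    """Bisect for the delta in (0, 1/2) with H(delta) = 1 - rate."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if binary_entropy(mid) < 1.0 - rate:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A rate-1/2 code meeting the GV bound has relative minimum
# distance of about 0.11.
print(round(gv_relative_distance(0.5), 3))  # 0.11
```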
Quantum serial turbo-codes
 IEEE Trans. Inf. Theory
Abstract

Cited by 15 (3 self)
Abstract — We present a theory of quantum serial turbo-codes, describe their iterative decoding algorithm, and study their performance numerically on a depolarization channel. Our construction offers several advantages over quantum LDPC codes. First, the Tanner graph used for decoding is free of 4-cycles, which degrade the performance of iterative decoding. Second, the iterative decoder makes explicit use of the code’s degeneracy. Finally, there is complete freedom in the code design in terms of length, rate, memory size, and interleaver choice. We define a quantum analogue of a state diagram that provides an efficient way to verify the properties of a quantum convolutional code, in particular its recursiveness and the presence of catastrophic error propagation. We prove that all recursive quantum convolutional encoders have catastrophic error propagation. In our constructions, the convolutional codes have thus been chosen to be non-catastrophic and non-recursive. While the resulting families of turbo-codes have bounded minimum distance, from a pragmatic point of view the effective minimum distances of the codes that we have simulated are large enough not to degrade the iterative decoding performance up to reasonable word error rates and block sizes. With well-chosen constituent convolutional codes, we observe a significant reduction of the word error rate as the code length increases.
New results on the minimum distance of repeat multiple accumulate codes
 in Proc. 45th Annual Allerton Conf. Commun., Control, Computing
, 2007
Abstract

Cited by 13 (7 self)
Abstract—In this paper we consider the ensemble of codes formed by a serial concatenation of a repetition code with multiple accumulators through uniform random interleavers. Based on finite-length weight enumerators for these codes, asymptotic expressions for the minimum distance are derived for an arbitrary number of accumulators larger than one. In accordance with earlier results in the literature, we first show that the minimum distance of RA (repeat-accumulate) codes can grow, at best, sublinearly with the block length. Then, for RAA (repeat-accumulate-accumulate) codes and rates of 1/3 or smaller, it is proved that these codes exhibit linear distance growth with block length, where the gap to the Gilbert-Varshamov bound can be made arbitrarily small by increasing the number of accumulators beyond two. In order to address rates larger than 1/3, random puncturing of a low-rate mother code is introduced. We show that in this case the resulting ensemble of RAA codes asymptotically achieves linear distance growth close to the Gilbert-Varshamov bound. This holds even for very high-rate codes.
Construction of turbo lattices
 in Proc. 48th Annual Allerton Conf. Commun., Control, and Computing
, 2010
Abstract

Cited by 9 (6 self)
An Upper Bound on the Minimum Distance of Serially Concatenated Convolutional Codes
, 2004
Abstract

Cited by 8 (0 self)
This paper describes the derivation of an upper bound on the minimum distance of serially concatenated convolutional codes. The resulting expression shows that their minimum distance cannot grow more than approximately K^(1-1/d_f), where K is the information word length and d_f is the free distance of the outer code. This result can also be applied to serial concatenations where the outer code is a general block code, and to rate-k/n constituent encoders. The present upper bound is shown to agree with and, in some cases, improve on previously known results.
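A quick numerical reading of the K^(1-1/d_f) cap; the values below are illustrative, not from the paper:

```python
def sccc_distance_cap(K, d_free):
    """Approximate upper bound K**(1 - 1/d_free) on the minimum
    distance of a serially concatenated convolutional code with
    information length K and outer free distance d_free."""
    return K ** (1.0 - 1.0 / d_free)

# Larger outer free distance pushes the exponent toward 1,
# i.e. toward linear distance growth, but never reaches it.
for d_free in (2, 3, 5):
    print(d_free, round(sccc_distance_cap(10_000, d_free)))
```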