Citations to: N. Kahale and R. Urbanke, "On the Minimum Distance of Parallel and Serially Concatenated Codes," 1997. [Online]. Available: http://lthcwww.epfl.ch/publications/index.php

Results 1 - 10 of 44

Coding theorems for turbo code ensembles

by Hui Jin, Robert J. McEliece - IEEE Trans. Inf. Theory, 2002
"... Abstract—This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are “good ” in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a ..."
Abstract - Cited by 40 (0 self) - Add to MetaCart
Abstract—This paper is devoted to a Shannon-theoretic study of turbo codes. We prove that ensembles of parallel and serial turbo codes are “good” in the following sense. For a turbo code ensemble defined by a fixed set of component codes (subject only to mild necessary restrictions), there exists a positive number γ0 such that for any binary-input memoryless channel whose Bhattacharyya noise parameter is less than γ0, the average maximum-likelihood (ML) decoder block error probability approaches zero at least as fast as n^{−β}, where β is the “interleaver gain” exponent defined by Benedetto et al. in 1996. Index Terms—Bhattacharyya parameter, coding theorems, maximum-likelihood decoding (MLD), turbo codes, union bound.
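Stated schematically, with γ the channel's Bhattacharyya noise parameter, γ0 the threshold whose existence the theorem asserts, β the interleaver gain exponent, and P̄_B(n) the ensemble-average ML block error probability at block length n (symbol names introduced here for readability), the claim is

\[
\gamma < \gamma_0 \quad \Longrightarrow \quad \overline{P}_B(n) = O\!\left(n^{-\beta}\right) \quad \text{as } n \to \infty .
\]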

Citation Context

...results (Theorems 8.1 and 8.4) is that we are unable to compute for the and ensembles. Instead, we have had to resort to upper bounds on (see (6.8) and (7.8)), based on the work of Kahale and Urbanke [21], which render our results existence theorems only. IV. MEMORYLESS BINARY-INPUT CHANNELS AND THE UNION BOUND Since turbo codes, as we have defined them, are binary codes, we consider using them on mem...

Upper Bound on the Minimum Distance of Turbo Codes Using a Combinatorial Approach

by Marco Breiling, Johannes Huber - IEEE Transactions on Communications, 2000
"... By using combinatorial considerations, we derive new upper bounds on the minimum Hamming distance, which Turbo codes can maximally attain with arbitrary -- including the best -- interleavers. The new bounds prove that by contrast to general linear binary channel codes, the minimum Hamming distance o ..."
Abstract - Cited by 27 (1 self) - Add to MetaCart
By using combinatorial considerations, we derive new upper bounds on the minimum Hamming distance that Turbo codes can maximally attain with arbitrary -- including the best -- interleavers. The new bounds prove that, in contrast to general linear binary channel codes, the minimum Hamming distance of Turbo codes cannot asymptotically grow faster than the cube root of the codeword length.
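Written as a growth rate, with N the codeword length and d_min(N) the largest minimum Hamming distance attainable over all interleavers (notation introduced here for illustration), the bound reads

\[
d_{\min}(N) = O\!\left(N^{1/3}\right).
\]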

Decoding Turbo-Like Codes via Linear Programming

by Jon Feldman, et al.
"... We introduce a novel algorithm for decoding turbo-like codes based on linear programming. We prove that for the case of Repeat-Accumulate (RA) codes, under the binary symmetric channel with a certain constant threshold bound on the noise, the error probability of our algorithm is bounded by an inver ..."
Abstract - Cited by 26 (6 self) - Add to MetaCart
We introduce a novel algorithm for decoding turbo-like codes based on linear programming. We prove that for the case of Repeat-Accumulate (RA) codes, under the binary symmetric channel with a certain constant threshold bound on the noise, the error probability of our algorithm is bounded by an inverse polynomial in the code length. Our linear program (LP) minimizes the distance between the received bits and binary variables representing the code bits. Our LP is based on a representation of the code where code words are paths through a graph. Consequently, the LP bears a strong resemblance to the min-cost flow LP. The error bounds are based on an analysis of the probability, over the random noise of the channel, that the optimum solution to the LP is the path corresponding to the original transmitted code word.
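As an illustration of the path-based formulation the abstract describes, a min-cost-flow-style relaxation can be written as follows; this is a generic sketch, not necessarily the authors' exact program. Let codewords correspond to source-sink paths in a graph G = (V, E), let each edge e carry a cost λ_e measuring its disagreement with the received bits, and let f_e ∈ [0, 1] be a relaxed indicator that edge e lies on the decoded path. The decoder then solves

\[
\min \; \sum_{e \in E} \lambda_e f_e
\qquad \text{s.t.} \qquad
\sum_{e \in \delta^{+}(v)} f_e \;-\; \sum_{e \in \delta^{-}(v)} f_e \;=\; b_v \quad \forall v \in V,
\qquad 0 \le f_e \le 1,
\]

where b_v is +1 at the source, −1 at the sink, and 0 elsewhere. An integral optimum selects a single source-sink path, i.e., a codeword; the error bounds mentioned above concern the probability, over the channel noise, that the LP optimum is exactly the path of the transmitted codeword.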

Citation Context

...One of the main goals of this research has been to explain the somewhat mysterious good performance of turbo codes and turbo-like codes. Even though the distances of turbo-like codes are generally bad [10, 2, 5], when decoded using an iterative decoder, they seem to achieve very good error rates [6]. The drawback to an iterative decoder is that it ...

The Minimum Distance of Turbo-Like Codes

by Louay Bazzi, Mohammad Mahdian, Daniel A. Spielman
"... Worst-case upper bounds are derived on the minimum distance of parallel concatenated Turbo codes, serially concatenated convolutional codes, repeat-accumulate codes, repeat-convolute codes, and generalizations of these codes obtained by allowing non-linear and large-memory constituent codes. It is s ..."
Abstract - Cited by 22 (0 self) - Add to MetaCart
Worst-case upper bounds are derived on the minimum distance of parallel concatenated Turbo codes, serially concatenated convolutional codes, repeat-accumulate codes, repeat-convolute codes, and generalizations of these codes obtained by allowing non-linear and large-memory constituent codes. It is shown that parallel-concatenated Turbo codes and repeat-convolute codes with sub-linear memory are asymptotically bad. It is also shown that depth-two serially concatenated codes with constant-memory outer codes and sub-linear-memory inner codes are asymptotically bad. Most of these upper bounds hold even when the convolutional encoders are replaced by general finite-state automata encoders. In contrast, it is proven that depth-three serially concatenated codes obtained by concatenating a repetition code with two accumulator codes through random permutations can be asymptotically good.
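"Asymptotically good" and "asymptotically bad" are used here in the standard sense: a family of binary codes C_n of length n is asymptotically good if both its rate and its relative minimum distance stay bounded away from zero,

\[
\liminf_{n \to \infty} \frac{\log_2 |C_n|}{n} > 0
\qquad \text{and} \qquad
\liminf_{n \to \infty} \frac{d_{\min}(C_n)}{n} > 0,
\]

and asymptotically bad when the relative minimum distance d_min(C_n)/n tends to zero, which is what the upper bounds above establish for the parallel and depth-two serial ensembles.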

Citation Context

... lengths. There are many variations of turbo codes, such as parallel concatenated convolutional codes, serial concatenated convolutional codes, and Repeat-and-Accumulate (RA) codes (see, for example, [5, 6, 7]). The major parameter that determines the performance of codes for large block lengths is their minimum distance. The minimum distance of a turbo code with a random interleaver is analyzed in [7], an...

An analysis of the block error probability performance of iterative decoding

by Michael Lentmaier, Dmitri V. Truhachev, Kamil Sh. Zigangirov, Daniel J. Costello - IEEE Transactions on Information Theory, 2005
"... Abstract—Asymptotic iterative decoding performance is ana-lyzed for several classes of iteratively decodable codes when the block length of the codes and the number of iterations go to infinity. Three classes of codes are considered. These are Gal-lager’s regular low-density parity-check (LDPC) code ..."
Abstract - Cited by 21 (4 self) - Add to MetaCart
Abstract—Asymptotic iterative decoding performance is analyzed for several classes of iteratively decodable codes when the block length of the codes and the number of iterations go to infinity. Three classes of codes are considered. These are Gallager’s regular low-density parity-check (LDPC) codes, Tanner’s generalized LDPC (GLDPC) codes, and the turbo codes due to Berrou et al. It is proved that there exist codes in these classes and iterative decoding algorithms for these codes for which not only the bit error probability P_b, but also the block (frame) error probability P_B, goes to zero as the block length and the number of iterations go to infinity. Index Terms—Belief propagation, block error probability, convergence analysis, density evolution, iterative decoding, low-density parity-check (LDPC) codes, turbo codes.

Citation Context

...t with the results for ML decoding given in [6], where the convergence to zero of the block error probability is also guaranteed only for multiple turbo codes with . Furthermore, it can also be shown [44] that the minimum distance of multiple turbo codes can increase faster with the block length than for , where is upper-bounded by [45]. VIII. CONCLUSION In this paper, considering LDPC codes, GLDPC co...

The Serial Concatenation of Rate-1 Codes through Uniform Random Interleavers

by Henry D. Pfister, Paul H. Siegel, 2003
"... Until the analysis of Repeat Accumulate codes by Divsalar et al., few people would have guessed that simple rate-1 codes could play a crucial role in the construction of "good" binary codes. In this paper, we will construct "good" binary linear block codes at any rate 1 by seri ..."
Abstract - Cited by 20 (3 self) - Add to MetaCart
Until the analysis of Repeat Accumulate codes by Divsalar et al., few people would have guessed that simple rate-1 codes could play a crucial role in the construction of "good" binary codes. In this paper, we will construct "good" binary linear block codes at any rate r < 1 by serially concatenating an arbitrary outer code of rate r with a large number of rate-1 inner codes through uniform random interleavers. We derive the average output weight enumerator (WE) for this ensemble in the limit as the number of inner codes goes to infinity. Using a probabilistic upper bound on the minimum distance, we prove that long codes from this ensemble will achieve the Gilbert-Varshamov bound with high probability. Numerical evaluation of the minimum distance shows that the asymptotic bound can be achieved with a small number of inner codes. In essence, this construction produces codes with good distance properties which are also compatible with iterative "turbo" style decoding. For selected codes, we also present bounds on the probability of maximum-likelihood decoding (MLD) error and simulation results for the probability of iterative decoding error.
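The Gilbert-Varshamov (GV) bound invoked here is the usual binary one: for any design rate R ∈ (0, 1) there exist long codes whose relative minimum distance δ = d_min/n satisfies

\[
R \;\ge\; 1 - h_2(\delta),
\qquad
h_2(\delta) = -\delta \log_2 \delta - (1 - \delta)\log_2(1 - \delta),
\]

so "achieving the GV bound" means that the typical relative minimum distance of the ensemble approaches the δ_GV solving R = 1 − h_2(δ_GV).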

Citation Context

...inearly with block length and how close it is to the GB. For m = 1, it is known that the typical minimum distance grows as O(n^{(d_o − 2)/d_o}), where d_o is the minimum distance of the repeated outer code [8]. Examining Figure 3 for m = 1, we see that the minimum distance grows slowly for R4 and H8 and not at all for R2 and P8. While for m = 2, the minimum distance growth of R4, H8, and R2 ...

Quantum serial turbo-codes

by David Poulin, Jean-Pierre Tillich, Harold Ollivier - IEEE Trans. Inf. Theory
"... Abstract — We present a theory of quantum serial turbo-codes, describe their iterative decoding algorithm, and study their performances numerically on a depolarization channel. Our construction offers several advantages over quantum LDPC codes. First, the Tanner graph used for decoding is free of 4- ..."
Abstract - Cited by 15 (3 self) - Add to MetaCart
Abstract — We present a theory of quantum serial turbo-codes, describe their iterative decoding algorithm, and study their performance numerically on a depolarization channel. Our construction offers several advantages over quantum LDPC codes. First, the Tanner graph used for decoding is free of 4-cycles that deteriorate the performance of iterative decoding. Secondly, the iterative decoder makes explicit use of the code’s degeneracy. Finally, there is complete freedom in the code design in terms of length, rate, memory size, and interleaver choice. We define a quantum analogue of a state diagram that provides an efficient way to verify the properties of a quantum convolutional code, and in particular its recursiveness and the presence of catastrophic error propagation. We prove that all recursive quantum convolutional encoders have catastrophic error propagation. In our constructions, the convolutional codes have thus been chosen to be non-catastrophic and non-recursive. While the resulting families of turbo-codes have bounded minimum distance, from a pragmatic point of view the effective minimum distances of the codes that we have simulated are large enough not to degrade the iterative decoding performance up to reasonable word error rates and block sizes. With well-chosen constituent convolutional codes, we observe an important reduction of the word error rate as the code length increases.

Citation Context

... great success of parallel and serial classical turbo-codes. In a serial concatenation scheme, an inner convolutional code that is recursive yields turbo-code families with unbounded minimum distance [22], while non-catastrophic error propagation is necessary for iterative decoding convergence. The last point can be circumvented in several ways (by doping for instance, see [45]) and some of these tric...

New results on the minimum distance of repeat multiple accumulate codes

by Jörg Kliewer, Kamil S. Zigangirov, Daniel J. Costello - in Proc. 45th Annual Allerton Conf. Commun., Control, Computing, 2007
"... Abstract—In this paper we consider the ensemble of codes formed by a serial concatenation of a repetition code with multiple accumulators through uniform random interleavers. Based on finite length weight enumerators for these codes, asymptotic expressions for the minimum distance and an arbitrary n ..."
Abstract - Cited by 13 (7 self) - Add to MetaCart
Abstract—In this paper we consider the ensemble of codes formed by a serial concatenation of a repetition code with multiple accumulators through uniform random interleavers. Based on finite-length weight enumerators for these codes, asymptotic expressions for the minimum distance are derived for an arbitrary number of accumulators larger than one. In accordance with earlier results in the literature, we first show that the minimum distance of RA codes can grow, at best, sublinearly with the block length. Then, for RAA codes and rates of 1/3 or smaller, it is proved that these codes exhibit linear distance growth with block length, where the gap to the Gilbert-Varshamov bound can be made arbitrarily small by increasing the number of accumulators beyond two. In order to address rates larger than 1/3, random puncturing of a low-rate mother code is introduced. We show that in this case the resulting ensemble of RAA codes asymptotically achieves linear distance growth close to the Gilbert-Varshamov bound. This holds even for very high rate codes.

Citation Context

...um distance of RA codes, and we show that these codes cannot achieve linear distance growth with block length, i.e., they are asymptotically bad. Related results have already been established in [6], [7], and [8], where lower and upper bounds on minimum distance for more general serially concatenated codes have been derived. In order to introduce the notation and for tutorial reasons we restate these...

Construction of turbo lattices

by Amin Sakzad, Mohammad-Reza Sadeghi, Daniel Panario - in Proc. of 48th Ann. Allerton Conf. Commun., Control, and Computing, 2010
"... All in-text references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately. ..."
Abstract - Cited by 9 (6 self) - Add to MetaCart
All in-text references underlined in blue are linked to publications on ResearchGate, letting you access and read them immediately.

Citation Context

...C only by setting up a good interleaver of size k and adjusting its size. The minimum distance of parallel concatenated codes with b parallel branches and recursive component codes grows like n^{(b−2)/b} [12]. Also, the average maximum-likelihood decoder block error probability approaches zero, at least as fast as n^{−b+2} [11]. Since increase in coding gain and decrease in normalized kissing number is complet...

An Upper Bound on the Minimum Distance of Serially Concatenated Convolutional Codes

by Alberto Perotti, Sergio Benedetto, 2004
"... This paper describes the derivation of an upper bound on the minimum distance of serially concatenated convolutional codes. The resulting expression shows that their minimum distance cannot grow more than approximately K 1-1/d , where K is the information word length, and d f is the free dist ..."
Abstract - Cited by 8 (0 self) - Add to MetaCart
This paper describes the derivation of an upper bound on the minimum distance of serially concatenated convolutional codes. The resulting expression shows that their minimum distance cannot grow more than approximately K^{1−1/d_f}, where K is the information word length and d_f is the free distance of the outer code. This result can also be applied to serial concatenations where the outer code is a general block code, and to rate k/n constituent encoders. The present upper bound is shown to agree with and, in some cases, improve on previously known results.
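In the notation of the abstract, with K the information word length and d_f the free distance of the outer code, the bound has the form

\[
d_{\min} \;\lesssim\; K^{\,1 - 1/d_f},
\]

so a larger outer free distance moves the exponent closer to one, but the growth remains sub-linear in K for any fixed d_f.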

Citation Context

... linear block code in general. Moreover, it can be applied to rate k0/n0 general convolutional constituent codes. Results on the minimum distance of serially concatenated codes have been presented in [3] and [4]. Both papers show an exponential dependence of the minimum distance on the block length: in [3] the exponent depends on the minimum distance of the outer code, while, in [4], it depends on th...
