Results 1-10 of 112
Turbo decoding as an instance of Pearl’s belief propagation algorithm
IEEE Journal on Selected Areas in Communications, 1998
Cited by 404 (16 self)
In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl’s belief propagation algorithm. We shall see that if Pearl’s algorithm is applied to the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the excellent experimental performance of turbo decoding is still lacking. However, we shall also show that Pearl’s algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager’s …
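The iterative exchange of extrinsic information that this abstract describes can be seen in miniature on a toy code. The sketch below is not the paper's turbo decoder: it uses a hypothetical 2×2 single-parity-check product code, whose "row" decoder and "column" decoder pass extrinsic log-likelihood ratios (LLRs) to each other over a few iterations — the same message-passing schedule at issue, on a loopy graph small enough to read:

```python
import numpy as np

def spc_extrinsic(llr):
    """Extrinsic LLRs for one single-parity-check constraint (sum-product rule)."""
    t = np.tanh(np.clip(llr, -20, 20) / 2.0)
    ext = np.empty_like(llr)
    for i in range(len(llr)):
        prod = np.prod(np.delete(t, i))
        ext[i] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
    return ext

rng = np.random.default_rng(0)
info = rng.integers(0, 2, size=(2, 2))        # 2x2 information bits
row_par = info.sum(axis=1) % 2                # row parity bits
col_par = info.sum(axis=0) % 2                # column parity bits

def bpsk(bits):                               # BPSK mapping: 0 -> +1, 1 -> -1
    return 1.0 - 2.0 * bits.astype(float)

sigma = 0.8                                   # AWGN noise standard deviation
y_info = bpsk(info) + rng.normal(0, sigma, (2, 2))
y_row = bpsk(row_par) + rng.normal(0, sigma, 2)
y_col = bpsk(col_par) + rng.normal(0, sigma, 2)
Lc = 2.0 / sigma ** 2                         # channel LLR scaling
L_info, L_row, L_col = Lc * y_info, Lc * y_row, Lc * y_col

# Iterative decoding: row and column decoders exchange extrinsic LLRs
ext_row = np.zeros((2, 2))
ext_col = np.zeros((2, 2))
for _ in range(5):
    for r in range(2):                        # row SPC decoder
        llr = np.append(L_info[r] + ext_col[r], L_row[r])
        ext_row[r] = spc_extrinsic(llr)[:2]
    for c in range(2):                        # column SPC decoder
        llr = np.append(L_info[:, c] + ext_row[:, c], L_col[c])
        ext_col[:, c] = spc_extrinsic(llr)[:2]

decoded = ((L_info + ext_row + ext_col) < 0).astype(int)
print(decoded)
```

Note that each decoder is handed only the *extrinsic* part of the other decoder's output, never its own messages back — the same convention that makes the turbo schedule work in practice despite the loops.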
Analyzing the Turbo Decoder Using the Gaussian Approximation
IEEE Trans. Inform. Theory, 2001
Cited by 91 (0 self)
In this paper, we introduce a simple technique for analyzing the iterative decoder that is broadly applicable to different classes of codes defined over graphs in certain fading as well as additive white Gaussian noise (AWGN) channels. The technique is based on the observation that the extrinsic information from constituent maximum a posteriori (MAP) decoders is well approximated by Gaussian random variables when the inputs to the decoders are Gaussian. The independent Gaussian model implies the existence of an iterative decoder threshold that statistically characterizes the convergence of the iterative decoder. Specifically, the iterative decoder converges to zero probability of error as the number of iterations increases if and only if the channel Eb/N0 exceeds the threshold. Despite the idealization of the model and the simplicity of the analysis technique, the predicted threshold values are in excellent agreement with the waterfall regions observed experimentally in the literature when the codeword lengths are large. Examples are given for parallel concatenated convolutional codes, serially concatenated convolutional codes, and the generalized low-density parity-check (LDPC) codes of Gallager and Cheng–McEliece. Convergence-based design of asymmetric parallel concatenated convolutional codes (PCCC) is also discussed.
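Once the extrinsic LLRs are modeled as Gaussians, the whole density evolution collapses to a one-dimensional recursion on the message mean, and the threshold emerges as the noise level at which that recursion stops diverging. As a hedged illustration, the sketch below applies the same style of Gaussian-approximation analysis to a regular (3,6) LDPC ensemble using the well-known φ(x) curve fit of Chung et al.; the fit constants and the divergence test are assumptions of this sketch, not taken from the paper:

```python
import math

def phi(x):
    """Curve-fit approximation of phi(x) = 1 - E[tanh(L/2)] for L ~ N(x, 2x)."""
    if x <= 0:
        return 1.0
    return math.exp(-0.4527 * x ** 0.86 + 0.0218)

def phi_inv(y):
    """Inverse of the curve fit above (valid for 0 < y < 1)."""
    return ((0.0218 - math.log(y)) / 0.4527) ** (1.0 / 0.86)

def converges(sigma, dv=3, dc=6, max_iter=500, target=50.0):
    """Track the mean variable-to-check LLR for a regular (dv, dc) ensemble.

    Returns True if the mean diverges (successful decoding in the limit),
    False if it stalls at a finite fixed point (decoding failure).
    """
    m_ch = 2.0 / sigma ** 2          # mean channel LLR for BPSK on AWGN
    m_v = m_ch                       # first variable-to-check message
    for _ in range(max_iter):
        # check-node update under the Gaussian approximation
        m_c = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))
        # variable-node update: channel LLR plus (dv - 1) check messages
        m_v = m_ch + (dv - 1) * m_c
        if m_v > target:
            return True
    return False

print(converges(0.8))   # noise below the (3,6) GA threshold (~0.87)
print(converges(1.0))   # noise above the threshold: recursion stalls
```

Sweeping `sigma` and bisecting on the True/False boundary recovers the decoder threshold, which is the quantity the abstract compares against experimentally observed waterfall regions.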
VLSI Architectures for Turbo Codes
IEEE Transactions on VLSI Systems, 1999
Cited by 64 (2 self)
Great interest has been gained in recent years by a new error-correcting code technique, known as "turbo coding," which has been proven to offer performance closer to the Shannon limit than traditional concatenated codes. In this paper, several very large scale integration (VLSI) architectures suitable for turbo decoder implementation are proposed and compared in terms of complexity and performance; the impact on the VLSI complexity of system parameters like the state number, the number of iterations, and the code rate is evaluated for the different solutions. The results of this architectural study have then been exploited for the design of a specific decoder, implementing a serial concatenation scheme with 2/3 and 3/4 codes; the designed circuit occupies 35 mm², supports a 2-Mb/s data rate, and, for a bit error probability of 10⁻⁶, yields a coding gain larger than 7 dB with ten iterations.
Comparative Study of Turbo Decoding Techniques: An Overview
2000
Cited by 43 (2 self)
In this contribution, we provide an overview of the novel class of channel codes referred to as turbo codes, which have been shown to be capable of performing close to the Shannon limit. We commence with a brief discussion on turbo encoding, and then move on to describing the form of the iterative decoder most commonly used to decode turbo codes. We then elaborate on various decoding algorithms that can be used in an iterative decoder, and give an example of the operation of such a decoder using the so-called Soft Output Viterbi Algorithm (SOVA). Lastly, the effect of a range of system parameters is investigated in a systematic fashion, in order to gauge their performance ramifications.
Improved Upper Bounds on the ML Decoding Error Probability of Parallel and Serial Concatenated Turbo Codes via Their Ensemble Distance Spectrum
IEEE Trans. on Information Theory, 2000
Cited by 35 (4 self)
The ensemble performance of parallel and serial concatenated turbo codes is considered, where the ensemble is generated by a uniform choice of the interleaver and of the component codes taken from the set of time-varying recursive systematic convolutional codes. Following the derivation of the input-output weight enumeration functions of the ensembles of random parallel and serial concatenated turbo codes, the tangential sphere upper bound is employed to provide improved upper bounds on the block and bit error probabilities of these ensembles of codes for the binary-input additive white Gaussian noise channel, based on coherent detection of equi-energy antipodal signals and maximum likelihood decoding. The influence of the interleaver length and the memory length of the component codes is investigated. The improved bounding technique proposed here is compared to the conventional union bound and to a recent alternative bounding technique by Duman and Salehi which incorporates modified Gallager bounds. The advantage of the derived bounds is demonstrated for a variety of parallel and serial concatenated coding schemes with either fixed or random recursive systematic convolutional component codes, and it is especially pronounced in the region exceeding the cutoff rate, where the performance of turbo codes is most appealing. These upper bounds are also compared to simulation results of the iterative decoding algorithm.
Analysis of low-density parity-check codes for the Gilbert–Elliott channel
IEEE Trans. Inf. Theory, 2005
Cited by 34 (8 self)
Density evolution analysis of low-density parity-check (LDPC) codes in memoryless channels is extended to the Gilbert–Elliott (GE) channel, which is a special case of a large class of channels with hidden Markov memory. In a procedure referred to as estimation decoding, the sum–product algorithm (SPA) is used to perform LDPC decoding jointly with channel-state detection. Density evolution results show (and simulation results confirm) that such decoders provide a significantly enlarged region of successful decoding within the GE parameter space, compared with decoders that do not exploit the channel memory. By considering a variety of ways in which a GE channel may be degraded, it is shown how knowledge of the decoding behavior at a single point of the GE parameter space may be extended to a larger region within the space, thereby mitigating the large complexity needed in using density evolution to explore the parameter space point by point. Using the GE channel as a straightforward example, we conclude that analysis of estimation decoding for LDPC codes is feasible in channels with memory, and that such analysis shows large potential gains.
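A Gilbert–Elliott channel like the one analyzed above is simple to simulate: a hidden two-state Markov chain (good/bad) selects the crossover probability of a binary symmetric channel at each bit. A minimal sketch follows, with illustrative parameter values not taken from the paper; the stationary-state calculation gives the average error rate a decoder that ignores the memory would face:

```python
import random

def gilbert_elliott(n, p_gb, p_bg, e_good, e_bad, seed=1):
    """Return n error indicators from a Gilbert-Elliott channel.

    The hidden state is a two-state Markov chain: state G flips a bit with
    probability e_good, state B with probability e_bad; p_gb and p_bg are
    the G->B and B->G transition probabilities.
    """
    rng = random.Random(seed)
    state = "G"
    errors = []
    for _ in range(n):
        e = e_good if state == "G" else e_bad
        errors.append(1 if rng.random() < e else 0)
        if state == "G":
            if rng.random() < p_gb:
                state = "B"
        elif rng.random() < p_bg:
            state = "G"
    return errors

# Stationary bad-state probability and the resulting average error rate
p_gb, p_bg, e_good, e_bad = 0.01, 0.1, 0.001, 0.2
pi_bad = p_gb / (p_gb + p_bg)
avg_rate = (1 - pi_bad) * e_good + pi_bad * e_bad
emp = sum(gilbert_elliott(200_000, p_gb, p_bg, e_good, e_bad)) / 200_000
print(f"analytic {avg_rate:.4f}, empirical {emp:.4f}")
```

The point of estimation decoding is that the errors produced this way are bursty rather than independent, so a decoder that also infers the hidden state sees a much cleaner channel during the good runs than the average rate suggests.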
Interleaver design for turbo codes
IEEE J. Select. Areas Commun., 2001
Cited by 32 (0 self)
The performance of a Turbo code with short block length depends critically on the interleaver design. There are two major criteria in the design of an interleaver: the distance spectrum of the code and the correlation between the information input data and the soft output of each decoder corresponding to its parity bits. This paper describes a new interleaver design for Turbo codes with short block length based on these two criteria. A deterministic interleaver suitable for Turbo codes is also described. Simulation results compare the new interleaver design to different existing interleavers.
New Deterministic Interleaver Designs for Turbo Codes
 IEEE Trans. on Inform. Theory
Cited by 27 (4 self)
It is well known that an interleaver with random properties, quite often generated by pseudorandom algorithms, is one of the essential building blocks of turbo codes. However, randomly generated interleavers have two major drawbacks: lack of an adequate analysis that guarantees their performance and lack of a compact representation that leads to a simple implementation. In this paper we present several new classes of deterministic interleavers of length N, with construction complexity O(N), that permute a sequence of bits with nearly the same statistical distribution as a random interleaver and perform as well as or better than the average of a set of random interleavers. The new classes of deterministic interleavers have a very simple representation based on quadratic congruences and hence have a structure that allows the possibility of analysis as well as a straightforward implementation. Using the new interleavers, a turbo code of length 16384 that is only 0.7 dB away from capacity at a bit-error rate (BER) of 10⁻⁵ is constructed. We also generalize the theory of previously known deterministic interleavers that are based on block interleavers, and we apply this theory to the construction of a nonrandom turbo code of length 16384 with a very regular structure whose performance is only 1.1 dB away from capacity at a BER of 10⁻⁵.
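As a concrete illustration of the quadratic congruences mentioned above, the sketch below builds the index sequence c_m = k·m·(m+1)/2 mod N, which is a permutation of {0, …, N-1} when N is a power of two and k is odd; the function and parameter names here are this sketch's own, not the paper's notation:

```python
def qc_sequence(N, k):
    """Quadratic congruence sequence c_m = k*m*(m+1)/2 mod N.

    For N a power of two and k odd, the N values are all distinct, so the
    map m -> c_m is a permutation and can serve as an interleaver with a
    two-parameter (N, k) representation.
    """
    assert N > 0 and N & (N - 1) == 0, "N must be a power of two"
    assert k % 2 == 1, "k must be odd"
    return [(k * m * (m + 1) // 2) % N for m in range(N)]

pi = qc_sequence(16, 3)
assert sorted(pi) == list(range(16))      # it really is a permutation

# Interleave a block of symbols with it
block = list("ABCDEFGHIJKLMNOP")
interleaved = [block[i] for i in pi]
print("".join(interleaved))
```

The compact (N, k) description is exactly the "simple representation" the abstract contrasts with tabulated pseudorandom interleavers, which need all N indices stored explicitly.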
Upper Bound on the Minimum Distance of Turbo Codes Using a Combinatorial Approach
IEEE Transactions on Communications, 2000
Cited by 27 (1 self)
By using combinatorial considerations, we derive new upper bounds on the minimum Hamming distance that Turbo codes can maximally attain with arbitrary (including the best) interleavers. The new bounds prove that, in contrast to general linear binary channel codes, the minimum Hamming distance of Turbo codes cannot asymptotically grow faster than the cube root of the codeword length.