
## Turbo decoding as an instance of Pearl’s belief propagation algorithm (1998)

Venue: IEEE Journal on Selected Areas in Communications

Citations: 404 (16 self)

### Citations

5884 | A tutorial of hidden Markov models and selected applications in speech recognition
- Rabiner
- 1989
Citation Context ...s in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears to have b...

1776 | Near Shannon Limit Error-Correcting Coding and Decoding
- Berrou, Glavieux, et al.
- 1993
Citation Context ...opagation, error-correcting codes, iterative decoding, Pearl’s Algorithm, probabilistic inference, turbo codes. I. INTRODUCTION AND SUMMARY TURBO codes, which were introduced in 1993 by Berrou et al. [10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18]...

1653 |
Error bounds for convolutional codes and an asymptotically optimal decoding algorithm
- Viterbi
- 1967
Citation Context ...f low-density parity check codes, and of Gallager’s iterative decoding algorithm. With hindsight, especially in view of the recent work of Wiberg [67], it is now evident that both Viterbi’s algorithm [64], [23] and the BCJR algorithm [4] can be viewed as a kind of belief propagation. Indeed, Wiberg [66], [67] has generalized Gallager’s algorithm still further, to the point that it now resembles Pearl’...

1593 |
Optimal decoding of linear codes for minimizing symbol error rate
- Bahl, Cocke, et al.
- 1974
Citation Context ...One of the keys to the success of turbo codes is to use component codes for which a low-complexity soft bit decision algorithm exists. For example, the BCJR or “APP” decoding algorithm [4] provides such an algorithm for any code, block or convolutional, that can be represented by a trellis. As far as is known, a code with a low-complexity optimal decoding algorithm cannot achieve hig...

1524 |
Local computations with probabilities on graphical structures and their applications to expert systems (with discussion),
- Lauritzen, Spiegelhalter
- 1988
Citation Context ...s to have been completely independent of the developments in AI that led to Pearl’s algorithm! There is an “exact” inference algorithm for an arbitrary DAG, developed by Lauritzen and Spiegelhalter [34], which solves the inference problem with O(N_c q^J) computations, where N_c is the number of cliques in the undirected triangulated “moralized” graph G_m which can be derived from G, and J is the maximu...

1366 | Low-density parity-check codes,”
- Gallager
- 1962
Citation Context ... [19]. Gallager’s Low-Density Parity-Check Codes: The earliest suboptimal iterative decoding algorithm is that of Gallager, who devised it as a method of decoding his “low-density parity-check” codes [25], [26]. This algorithm was later generalized and elaborated upon by Tanner [61] and Wiberg [67]. But as MacKay and Neal [37]–[39] have pointed out, in the first citation of belief propagation by codin...
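The snippet above describes Gallager's iterative decoding of low-density parity-check codes. A minimal sketch of the hard-decision bit-flipping flavor of that idea, using a made-up toy parity-check matrix H (not from the paper), looks like this:

```python
import numpy as np

# Toy parity-check matrix (assumed values, for illustration only).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)

def bit_flip_decode(H, r, max_iters=20):
    """Repeatedly flip the bit involved in the most unsatisfied parity
    checks until all checks are satisfied (or we give up)."""
    y = r.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y               # all parity checks satisfied
        votes = syndrome @ H       # unsatisfied checks touching each bit
        y[np.argmax(votes)] ^= 1   # flip the worst offender
    return y

codeword = np.zeros(6, dtype=int)  # the all-zero word is always a codeword
received = codeword.copy()
received[2] ^= 1                   # inject a single bit error
decoded = bit_flip_decode(H, received)
```

Gallager's full algorithm passes soft probabilities over the same bipartite graph rather than hard flips, which is what makes it a special case of belief propagation.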

1153 |
Introduction to Bayesian networks
- Jensen
- 1996
Citation Context ... similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms...

750 | Good error-correcting codes based on very sparse matrices,
- MacKay
- 1999
Citation Context ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then d...

720 |
Computational complexity of probabilistic inference using bayesian belief networks (research note
- Cooper
- 1990
Citation Context ...unknown random variables, which is required by the brute-force method. The efficiency of belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a t...
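The contrast the snippet draws, between brute-force summation and efficient inference on trees, can be seen on a tiny chain x1 → x2 → x3 of binary variables (all probabilities below are assumed toy values). Pushing the sums inward, one matrix–vector product per edge, is the trick that makes belief propagation linear-time on trees:

```python
import itertools
import numpy as np

p1 = np.array([0.6, 0.4])                 # P(x1), toy values
T12 = np.array([[0.9, 0.1], [0.2, 0.8]])  # P(x2 | x1)
T23 = np.array([[0.7, 0.3], [0.5, 0.5]])  # P(x3 | x2)

# Brute force: sum the joint over every configuration of (x1, x2, x3).
brute = np.zeros(2)
for x1, x2, x3 in itertools.product([0, 1], repeat=3):
    brute[x3] += p1[x1] * T12[x1, x2] * T23[x2, x3]

# Message passing: distribute the sums along the chain instead.
msg = p1 @ T12   # message into x2, i.e. P(x2)
bp = msg @ T23   # message into x3, i.e. P(x3)
```

Brute force costs exponentially many terms in the number of variables; the message-passing version costs one small product per edge, which is why trees are easy while general DAGs are NP-hard.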

640 |
A recursive approach to low complexity codes,”
- Tanner
- 1981
Citation Context ...tive decoding algorithm is that of Gallager, who devised it as a method of decoding his “low-density parity-check” codes [25], [26]. This algorithm was later generalized and elaborated upon by Tanner [61] and Wiberg [67]. But as MacKay and Neal [37]–[39] have pointed out, in the first citation of belief propagation by coding theorists, Gallager’s algorithm is a special kind of BP, with Fig. 10 as the ...

611 |
Graphical Models in Applied Multivariate Statistics
- WHITTAKER
- 1990
Citation Context ...the decoding problem. factors according to the graph of Fig. 4 if (4.4) A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if is a directed chain, then is an ordinary Markov chain. A DAG, together with the associated random variables is called a Bayesian belief network, or Bayesian network for short [28]. At...

610 | Iterative Decoding of Binary Block and Convolutional Codes”,
- Hagenauer, Papke
- 1996
Citation Context ... exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo d...

549 |
Statistical inference for probabilistic functions of finite state Markov chains. The annals of mathematical statistics,
- Baum, Petrie
- 1966
Citation Context ...kward algorithm has a long and convoluted history that merits the attention of a science historian. It seems to have first appeared in the unclassified literature in two independent 1966 publications [6], [11]. Soon afterwards, it appeared in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of...
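The forward-backward algorithm whose history the snippet traces computes posterior state probabilities in a hidden Markov chain. A minimal sketch for a two-state HMM (all numbers are assumed toy values, not from any of the cited papers):

```python
import itertools
import numpy as np

pi = np.array([0.5, 0.5])               # initial state distribution (toy)
A = np.array([[0.8, 0.2], [0.3, 0.7]])  # state transition matrix
B = np.array([[0.9, 0.1], [0.2, 0.8]])  # emission probs: B[state, symbol]
obs = [0, 1, 0]                         # observed symbol sequence

T, S = len(obs), len(pi)
alpha = np.zeros((T, S))
beta = np.ones((T, S))

alpha[0] = pi * B[:, obs[0]]
for t in range(1, T):                   # forward pass
    alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
for t in range(T - 2, -1, -1):          # backward pass
    beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])

# Posterior P(state_t | all observations), normalized per time step.
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
```

The forward and backward recursions are exactly the two directions of message passing on a chain, which is why the algorithm is now recognized as belief propagation on a trellis.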

500 | Near Shannon limit performance of low density parity check codes
- MacKay, Neal
- 1996
Citation Context ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then describe P...

371 | Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding.”
- Benedetto, Divsalar, et al.
- 1998
Citation Context ...he structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution t...

365 | Learning Bayesian Networks :
- Heckerman, Chickering
- 1995
Citation Context ...s, then computing the sum in (4.1) for each possible value of requires additions, which is impractical unless and the ’s are very small numbers. The idea behind the “Bayesian belief network” approach [28], [51] to this inference problem is to exploit any “partial independencies” which may exist among the ’s to simplify belief updating. The simplest case of this is when the random variables are mutuall...

340 | Expander codes,”
- Sipser, Spielman
- 1996
Citation Context ...llager’s original decoding algorithm made with powerful modern computers show that their performance is remarkably good, in many cases rivaling that of turbo codes. More recently, Sipser and Spielman [57], [60] have replaced the “random” parity-check matrices of Gallager and MacKay–Neal with deterministic parity-check matrices with desirable properties, based on “expander” graphs, and have obtained e...

314 | Unveiling turbo codes: some results on parallel concatenated coding schemes,”
- Benedetto, Montorsi
- 1996
Citation Context ... al. [10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explan...

292 | Approximating probabilistic inference in Bayesian belief networks is NP-hard
- Dagum, Luby
- 1993
Citation Context ...iciency of belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a tree, Pearl’s algorithm will apply. However, the result is uninteresting: Pearl...

239 |
Concatenated Codes
- Forney
- 1966
Citation Context ... codes used by McEliece and Cheng. Serially Concatenated Codes: We have defined a turbo code to be the parallel concatenation of two or more component codes. However, as originally defined by Forney [22], concatenation is a serial operation. Recently, several researchers [8], [9] have investigated the performance of serially concatenated codes, with turbo-style decoding. This is a nontrivial variatio...

217 |
Sequential updating of conditional probabilities on directed graphical structures.
- Spiegelhalter, Lauritzen
- 1990
Citation Context ...between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms, including ...

217 |
Bayesian analysis in expert systems
- Spiegelhalter, Dawid, et al.
- 1993
Citation Context ...n the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms, including Viterb...

197 |
Markov random fields and their applications,
- Kindermann, Snell
- 1980
Citation Context ...on of the decoding problem. factors according to the graph of Fig. 4 if (4.4) A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if is a directed chain, then is an ordinary Markov chain. A DAG, together with the associated random variables is called a Bayesian belief network, or Bayesian network for short [2...

193 | Probabilistic independence networks for hidden Markov probability models. - Smyth, Heckerman, et al. - 1996 |

177 |
Independence properties of directed Markov fields.
- Lauritzen, Dawid, et al.
- 1990
Citation Context ...retation of the decoding problem. factors according to the graph of Fig. 4 if (4.4) A set of random variables whose density functions factor according to a given DAG is called a directed Markov field [35], [32], [65]. For example, if is a directed chain, then is an ordinary Markov chain. A DAG, together with the associated random variables is called a Bayesian belief network, or Bayesian network for sh...

170 | Reverend bayes on inference engines: A distributed hierarchical approach
- Pearl
- 1982
Citation Context ...able simplifications of the probabilistic inference problem. The most important of these simplifications, for our purposes, is Pearl’s belief propagation algorithm. In the 1980’s, Kim and Pearl [31], [42]–[44] showed that if the DAG is a “tree,” i.e., if there are no loops, then there are efficient distributed algorithms for solving the inference problem. If all of the alphabets have the same size P...

166 |
Probabilistic inference and influence diagrams”
- Shachter
- 1998
Citation Context ...n computing the sum in (4.1) for each possible value of requires additions, which is impractical unless and the ’s are very small numbers. The idea behind the “Bayesian belief network” approach [28], [51] to this inference problem is to exploit any “partial independencies” which may exist among the ’s to simplify belief updating. The simplest case of this is when the random variables are mutually inde...

153 |
Bayesian Updating in Recursive Graphical Models by Local Computations,
- Jensen
- 1989
Citation Context ...ed the similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algo...

142 |
Finding MAPs for belief networks is NP-hard.
- Shimony
- 1994
Citation Context ...f belief propagation on trees stands in sharp contrast to the situation for general DAG’s since, in 1990, Cooper [16] showed that the inference problem in general DAG’s is NP-hard. (See also [17] and [53] for more on the NP-hardness of probabilistic inference in Bayesian networks.) Since the network in Fig. 5 is a tree, Pearl’s algorithm will apply. However, the result is uninteresting: Pearl’s algori...

139 | Iterative decoding of compound codes by probability propagation in graphical models.
- Kschischang, Frey
- 1998
Citation Context ...e general method for devising low-complexity iterative decoding algorithms for hybrid coded systems. This is the message of the paper. (A similar message is given in the paper by Kschischang and Frey [33] in this issue.) Here is an outline of the paper. In Section II, we derive some simple but important results about, and introduce some compact notation for, “optimal symbol decision” decoding algorith...

122 |
Codes and iterative decoding on general graphs”,
- Wiberg, Loeliger, et al.
- 1995
Citation Context ...pecially in view of the recent work of Wiberg [67], it is now evident that both Viterbi’s algorithm [64], [23] and the BCJR algorithm [4] can be viewed as a kind of belief propagation. Indeed, Wiberg [66], [67] has generalized Gallager’s algorithm still further, to the point that it now resembles Pearl’s algorithm very closely. (In particular, Wiberg shows that his algorithm can be adapted to produce ...

116 |
Probability propagation
- Shafer, Shenoy
- 1990
Citation Context ...arity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a simultaneous generalization of both algorithms, and which includes several other classic algorithms, incl...

112 | A distance spectrum interpretation of turbo codes,”
- Perez, Seghers, et al.
- 1996
Citation Context ...ing and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decodin...

95 | Good Codes based on Very Sparse Matrices,"
- MacKay, Neal
- 1995
Citation Context ...paper that motivated this one, is that of MacKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]....

85 | Turbo codes for deep-space communications,”
- Divsalar, Pollara
- 1995
Citation Context ...[10], are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation ...

85 |
Illuminating the Structure of Code and Decoder of Parallel Concatenated Recursive Systematic (Turbo) Codes,
- Robertson
- 1994
Citation Context ... [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then describe Pearl’s algorithm, first in its natural “AI” setting, and then show that if it is applied to the “belief network” of a turbo code, the turbo decoding algorithm immediately resu...

84 |
Hidden Markov models: A guided tour,” in
- Poritz
- 1990
Citation Context ... sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears ...

65 |
A computational model for combined causal and diagnostic reasoning in inference systems.
- Kim, Pearl
- 1983
Citation Context ...nsiderable simplifications of the probabilistic inference problem. The most important of these simplifications, for our purposes, is Pearl’s belief propagation algorithm. In the 1980’s, Kim and Pearl [31], [42]–[44] showed that if the DAG is a “tree,” i.e., if there are no loops, then there are efficient distributed algorithms for solving the inference problem. If all of the alphabets have the same ...

62 | Transfer function bounds on the performance of turbo codes
- Divsalar, Dolinar, et al.
- 1995
Citation Context ...are the most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why...

49 |
On receiver structures for channels having memory
- Chang, Hancock
- 1966
Citation Context ... algorithm has a long and convoluted history that merits the attention of a science historian. It seems to have first appeared in the unclassified literature in two independent 1966 publications [6], [11]. Soon afterwards, it appeared in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Mar...

39 |
The Turbo decision algorithm.
- McEliece, Rodemich, et al.
- 1995
Citation Context ...fined by (3.4) (3.5) The celebrated “turbo decoding algorithm” [10], [50], [3] is an iterative approximation to the optimal beliefs in (3.3) or (3.4), whose performance, while demonstrably suboptimal [41], has nevertheless proved to be “nearly optimal” in an impressive array of experiments. The heart of the turbo algorithm is an iteratively defined sequence of product probability densities on defined ...

39 | A connection between block and convolutional codes,
- Solomon, Tilborg
- 1979
Citation Context ...s from turbo-style decoding, and we are currently investigating this phenomenon. “Tail-Biting” Convolutional Codes: The class of “tail-biting” convolutional codes introduced by Solomon and van Tilborg [56] is a natural candidate for BP decoding. Briefly, a tail-biting convolutional code is a block code formed by truncating the trellis of a conventional convolutional code and then pasting the ends of th...

36 |
Near optimum decoding of product codes
- Pyndiah, Glavieux, et al.
- 1994
Citation Context ...perties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution to this problem, we...

29 | Fusion, propagation, and structuring in belief networks - Pearl - 1986 |

27 |
Abstract dynamic programming models under commutativity conditions.
- Verdu, Poor
- 1987
Citation Context ...4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears to have been completely independent of the developments in AI that led to Pearl’s algorithm! There is an “exact” inference algorithm for an arbitrary DAG, developed by...

22 |
M.A.P. bit decoding of convolutional codes
- Welch, Weber
Citation Context ... in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 19...

10 | Unit-memory Hamming turbo codes
- Cheng, McEliece
- 1995
Citation Context ...ructural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solution to this...

10 | A free energy minimization framework for inference problems in modulo 2 arithmetic
- MacKay
- 1995
Citation Context ...rk for decoding systematic, low-density generator matrix codes. systematic linear block codes with low-density generator matrices [13]. (This same class of codes appeared earlier in a paper by MacKay [36] in a study of modulo-2 arithmetic inference problems, and in a paper by Spielman [60] in connection with “error reduction.”) The decoding algorithm devised by Cheng and McEliece was adapted from t...

9 |
A general algorithm for distributing information in a graph
- Aji, McEliece
- 1997
Citation Context ...oduce both the Gallager–Tanner algorithm and the turbo decoding algorithm.) Finally, having noticed the similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that ...

7 |
The Viterbi algorithm
- Forney
- 1973
Citation Context ...ng convolutional code, illustrated for a truncation length of N = 5. graph of a tail-biting code with good success, and functionally, these two approaches yield virtually identical algorithms. Forney [24] has also discussed the iterative decoding of tail-biting codes using the Tanner–Wiberg approach. VIII. CONCLUDING REMARKS We have shown that Pearl’s algorithm provides a systematic method for devising...

7 | On the free distance of turbo codes and related product codes” Final Rep. Diploma project ss195,no 6613,Swiss Federal Institute of Technology
- Seghers
- 1995
Citation Context ...s the inner (second) encoding, and is the noisy version of Product Codes: A number of researchers have been successful with turbo-style decoding of product codes in two or more dimensions [46], [48], [54], [27]. In a product code, the informati...

6 |
Interleaver design for three dimensional turbo codes,” in
- Barbulescu, Pietrobon
- 1995
Citation Context ... Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a ...

6 |
Nonlinear equalization of binary signals in Gaussian noise
- Ungerboeck
- 1971
Citation Context ...thm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appeared in a 1971 paper on equalization [62]. The algorithm was connected to the optimization literature in 1987 [63]. All of this activity appears to have been completely independent of the developments in AI that led to Pearl’s algorithm! T...

4 |
“The Viterbi algorithm,” Proc. IEEE
- Forney
- 1973
Citation Context ...in the unclassified literature in two independent 1966 publications [6], [11]. Soon afterwards, it appeared in papers on MAP detection of digital sequences in the presence of intersymbol interference [23]. It appeared explicitly as an algorithm for tracking the states of a Markov chain in the early 1970’s [40], [4] (see also the survey papers [47] and [49]). A similar algorithm (in “minsum” form) appe...

4 |
Codes and decoding on general graphs,” Linköping Studies in
- Wiberg
- 1996
Citation Context ...gorithm is that of Gallager, who devised it as a method of decoding his “low-density parity-check” codes [25], [26]. This algorithm was later generalized and elaborated upon by Tanner [61] and Wiberg [67]. But as MacKay and Neal [37]–[39] have pointed out, in the first citation of belief propagation by coding theorists, Gallager’s algorithm is a special kind of BP, with Fig. 10 as the appropriate beli...

3 | On the construction of efficient multilevel coded modulations,” submitted to the 1997
- Cheng
- 1997
Citation Context ...coding algorithm devised by Cheng and McEliece was adapted from the one described in the MacKay–Neal paper cited above, and the results were quite good, especially at high rates. More recently, Cheng [14], [15] used some of these same ideas to construct a class of block codes which yield some remarkably efficient multilevel coded modulations. Fig. 11 shows the belief network for low-density generator ...

2 |
The generalized distributive law
- Aji, McEliece
- 1997
Citation Context ... both the Gallager–Tanner algorithm and the turbo decoding algorithm.) Finally, having noticed the similarity between the Gallager–Tanner–Wiberg algorithm and Pearl’s algorithm, Aji and McEliece [1], [2], relying heavily on the post-Pearl improvements and simplifications in the BP algorithm [29], [30], [52], [58], [59] have devised a simple algorithm for distributing information on a graph that is a ...

2 |
Performance of turbo decoded product codes used in multilevel coding
- Picart, Pyndiah
- 1996
Citation Context ... encoding, is the inner (second) encoding, and is the noisy version of Product Codes: A number of researchers have been successful with turbo-style decoding of product codes in two or more dimensions [46], [48], [54], [27]. In a product code, the informati...

1 |
The TURBO coding scheme,” unpublished manuscript distributed at 1994
- Andersen
- 1994
Citation Context ...acKay and Neal [37]. See also [38] and [39].) In this paper, we will review the turbo decoding algorithm as originally expounded by Berrou et al. [10], but which was perhaps explained more lucidly in [3], [18], or [50]. We will then describe Pearl’s algorithm, first in its natural “AI” setting, and then show that if it is applied to the “belief network” of a turbo code, the turbo decoding algorithm i...

1 |
“Serial concatenation of block and convolutional codes,” Electron. Lett.
- 1996
Citation Context ... of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the turbo decoding algorithm performs as well as it does. While we cannot yet announce a solut...

1 |
Near capacity codecs for the Gaussian channel based on low-density generator matrices
- Cheng, McEliece
Citation Context ...ding a Gallager “low-density parity-check” code. Fig. 11. Belief network for decoding systematic, low-density generator matrix codes. systematic linear block codes with low-density generator matrices [13]. (This same class of codes appeared earlier in a paper by MacKay [36] in a study of modulo-2 arithmetic inference problems, and in a paper by Spielman [60] in connection with “error reduction.”) T...

1 |
“Effective free distance of turbo-codes,” Electron. Lett.
- Divsalar, McEliece
- 1996
Citation Context ...e most exciting and potentially important development in coding theory in many years. Many of the structural properties of turbo codes have now been put on a firm theoretical footing [7], [18], [20], [21], [27], [45], and several innovative variations on the turbo theme have appeared [5], [8], [9], [12], [27], [48]. What is still lacking, however, is a satisfactory theoretical explanation of why the t...

1 |
Linear-time encodable and decodable error-correcting codes
- Spielman
- 1996
Citation Context ...’s original decoding algorithm made with powerful modern computers show that their performance is remarkably good, in many cases rivaling that of turbo codes. More recently, Sipser and Spielman [57], [60] have replaced the “random” parity-check matrices of Gallager and MacKay–Neal with deterministic parity-check matrices with desirable properties, based on “expander” graphs, and have obtained even st...