Results 1 - 10 of 1,366
Factor Graphs and the Sum-Product Algorithm
- IEEE TRANSACTIONS ON INFORMATION THEORY, 1998
"... A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple c ..."
Abstract
-
Cited by 1791 (69 self)
- Add to MetaCart
A factor graph is a bipartite graph that expresses how a "global" function of many variables factors into a product of "local" functions. Factor graphs subsume many other graphical models including Bayesian networks, Markov random fields, and Tanner graphs. Following one simple computational rule, the sum-product algorithm operates in factor graphs to compute---either exactly or approximately---various marginal functions by distributed message-passing in the graph. A wide variety of algorithms developed in artificial intelligence, signal processing, and digital communications can be derived as specific instances of the sum-product algorithm, including the forward/backward algorithm, the Viterbi algorithm, the iterative "turbo" decoding algorithm, Pearl's belief propagation algorithm for Bayesian networks, the Kalman filter, and certain fast Fourier transform algorithms.
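Note: as a rough illustration of the message-passing rule this abstract describes (not code from the paper), the sketch below runs one sum-product update on a tiny cycle-free factor graph over binary variables. The factor tables fA and fB and all numbers are arbitrary placeholders; on a tree the message-passing marginal matches brute-force summation exactly, which the assertion checks.

```python
# Sum-product on the cycle-free factor graph  g(x1, x2, x3) = fA(x1, x2) * fB(x2, x3).
# The marginal of x2 is the product of the messages arriving from the two factor nodes.
import numpy as np

fA = np.array([[0.9, 0.1],
               [0.2, 0.8]])        # fA[x1, x2], arbitrary local function
fB = np.array([[0.7, 0.3],
               [0.4, 0.6]])        # fB[x2, x3], arbitrary local function

# Factor-to-variable messages: sum out every variable of the factor except x2.
msg_A_to_x2 = fA.sum(axis=0)       # mu_{fA -> x2}(x2) = sum_{x1} fA(x1, x2)
msg_B_to_x2 = fB.sum(axis=1)       # mu_{fB -> x2}(x2) = sum_{x3} fB(x2, x3)

# Variable marginal (up to normalization) = product of incoming messages.
marg_x2 = msg_A_to_x2 * msg_B_to_x2

# Brute force: build the global function and sum over x1 and x3.
g = fA[:, :, None] * fB[None, :, :]          # g[x1, x2, x3]
brute = g.sum(axis=(0, 2))

print(marg_x2, brute)              # identical on this cycle-free graph
assert np.allclose(marg_x2, brute)
```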
Learning in graphical models
- STATISTICAL SCIENCE, 2004
"... Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for ..."
Abstract
-
Cited by 806 (10 self)
- Add to MetaCart
Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing.
Design of capacity-approaching irregular low-density parity-check codes
- IEEE TRANS. INFORM. THEORY, 2001
"... We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the unde ..."
Abstract
-
Cited by 588 (6 self)
- Add to MetaCart
(Show Context)
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
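Note: the density-evolution style of analysis behind this abstract is easiest to see on the binary erasure channel, where the message densities collapse to a single erasure probability. The sketch below is only that special case (the paper treats general symmetric channels); `lam` and `rho` are edge-perspective degree-distribution polynomials, the function names are my own, and the (3,6)-regular check case is an illustrative choice, not one of the paper's optimized ensembles.

```python
def converges(eps, lam, rho, iters=2000, tol=1e-12):
    """True if the erasure probability is driven to (near) zero at channel erasure prob eps."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))   # density evolution recursion on the BEC
        if x < tol:
            return True
    return False

def threshold(lam, rho):
    """Bisection for the largest erasure probability the ensemble tolerates under BP."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid, lam, rho) else (lo, mid)
    return lo

# Sanity check with the (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5 (rate 1/2).
print(threshold(lambda x: x ** 2, lambda x: x ** 5))   # well below the capacity limit 0.5;
                                                        # irregular (lam, rho) pairs close the gap
```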
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
- IEEE Trans. Inform. Theory, 2001
"... In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Abstract
-
Cited by 574 (9 self)
- Add to MetaCart
(Show Context)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
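Note: a hedged sketch of the kind of capacity (threshold) computation this abstract describes, specialized to one of the simplest decoders it covers, Gallager's algorithm A on the binary symmetric channel, where the message "density" is just the probability that a variable-to-check message is wrong. The (dv, dc) = (3, 6) ensemble and the function names are illustrative choices, not taken from the paper.

```python
def gallager_a_threshold(dv, dc, iters=5000):
    """Largest BSC crossover probability a (dv, dc)-regular ensemble tolerates under Gallager A."""
    def decodes(eps):
        x = eps                                               # error prob. of variable-to-check messages
        for _ in range(iters):
            q = 0.5 * (1.0 - (1.0 - 2.0 * x) ** (dc - 1))     # check-to-variable error probability
            x = eps * (1.0 - (1.0 - q) ** (dv - 1)) + (1.0 - eps) * q ** (dv - 1)
            if x < 1e-12:
                return True
        return False

    lo, hi = 0.0, 0.5
    for _ in range(40):                                       # bisection on the crossover probability
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if decodes(mid) else (lo, mid)
    return lo

print(gallager_a_threshold(3, 6))   # estimated threshold of the (3,6) ensemble under Gallager A
```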
Turbo decoding as an instance of Pearl’s belief propagation algorithm
- IEEE Journal on Selected Areas in Communications, 1998
"... Abstract—In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pear ..."
Abstract
-
Cited by 404 (16 self)
- Add to MetaCart
(Show Context)
In this paper, we will describe the close connection between the now celebrated iterative turbo decoding algorithm of Berrou et al. and an algorithm that has been well known in the artificial intelligence community for a decade, but which is relatively unknown to information theorists: Pearl’s belief propagation algorithm. We shall see that if Pearl’s algorithm is applied to the “belief network” of a parallel concatenation of two or more codes, the turbo decoding algorithm immediately results. Unfortunately, however, this belief diagram has loops, and Pearl only proved that his algorithm works when there are no loops, so an explanation of the excellent experimental performance of turbo decoding is still lacking. However, we shall also show that Pearl’s algorithm can be used to routinely derive previously known iterative, but suboptimal, decoding algorithms for a number of other error-control systems, including Gallager’s ...
The generalized distributive law
- IEEE Transactions on Information Theory
"... Abstract—In this semitutorial paper we discuss a general message passing algorithm, which we call the generalized dis-tributive law (GDL). The GDL is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence ..."
Abstract
-
Cited by 359 (2 self)
- Add to MetaCart
(Show Context)
In this semitutorial paper we discuss a general message passing algorithm, which we call the generalized distributive law (GDL). The GDL is a synthesis of the work of many authors in the information theory, digital communications, signal processing, statistics, and artificial intelligence communities. It includes as special cases the Baum–Welch algorithm, the fast Fourier transform (FFT) on any finite Abelian group, the Gallager–Tanner–Wiberg decoding algorithm, Viterbi’s algorithm, the BCJR algorithm, Pearl’s “belief propagation” algorithm, the Shafer–Shenoy probability propagation algorithm, and the turbo decoding algorithm. Although this algorithm is guaranteed to give exact answers only in certain cases (the “junction tree” condition), unfortunately not including the cases of GTW with cycles or turbo decoding, there is much experimental evidence, and a few theorems, suggesting that it often works approximately even when it is not supposed to. Index Terms—Belief propagation, distributive law, graphical models, junction trees, turbo codes.
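Note: a minimal sketch of the "marginalize a product of local functions" problem the GDL addresses, on a toy 3-variable chain with arbitrary factor tables (none of this is from the paper). The same local elimination schedule is run over the (sum, product) semiring to get a marginal and over the (max, product) semiring to get the value of the best configuration, which is the sense in which the algorithms listed above are special cases of one rule.

```python
import numpy as np

f1 = np.array([[0.6, 0.4], [0.1, 0.9]])    # f1[x1, x2], arbitrary local kernel
f2 = np.array([[0.3, 0.7], [0.8, 0.2]])    # f2[x2, x3], arbitrary local kernel

def gdl_at_x2(marginalize):
    """Eliminate x1 and x3 locally, then combine the two messages at x2."""
    msg1 = marginalize(f1, axis=0)           # message from the {x1, x2} local domain
    msg2 = marginalize(f2, axis=1)           # message from the {x2, x3} local domain
    return msg1 * msg2

g = f1[:, :, None] * f2[None, :, :]          # brute-force global kernel g[x1, x2, x3]

assert np.allclose(gdl_at_x2(np.sum), g.sum(axis=(0, 2)))   # (sum, product): marginalization
assert np.allclose(gdl_at_x2(np.max), g.max(axis=(0, 2)))   # (max, product): best-configuration value
print("local message passing matches brute force under both semirings")
```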
On the design of low-density parity-check codes within 0.0045 dB of the Shannon limit
- IEEE COMMUNICATIONS LETTERS, 2001
"... We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a ..."
Abstract
-
Cited by 306 (6 self)
- Add to MetaCart
(Show Context)
We develop improved algorithms to construct good low-density parity-check codes that approach the Shannon limit very closely. For rate 1/2, the best code found has a threshold within 0.0045 dB of the Shannon limit of the binary-input additive white Gaussian noise channel. Simulation results with a somewhat simpler code show that we can achieve within 0.04 dB of the Shannon limit at a bit error rate of 10^-6 using a block length of 10^7.
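Note: for context on the figure of merit in this abstract, the sketch below numerically locates the Shannon limit of the binary-input AWGN channel at rate 1/2, i.e. the smallest Eb/N0 at which the BPSK channel capacity reaches 1/2. It uses a Monte-Carlo estimate of the capacity and bisection on the noise level; this is an illustrative reconstruction, not the authors' code, and the sample size and brackets are arbitrary.

```python
import numpy as np

def biawgn_capacity(sigma, n=400_000, seed=0):
    """Monte-Carlo estimate of the capacity (bits/channel use) of BPSK over AWGN(sigma)."""
    rng = np.random.default_rng(seed)
    y = 1.0 + sigma * rng.standard_normal(n)         # receive +1 plus Gaussian noise
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-2.0 * y / sigma ** 2)))

def shannon_limit_db(rate=0.5):
    """Smallest Eb/N0 (in dB) at which the BIAWGN capacity reaches the given rate."""
    lo, hi = 0.5, 2.0                                 # bracket on the noise standard deviation
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if biawgn_capacity(mid) > rate else (lo, mid)
    sigma = lo
    return 10.0 * np.log10(1.0 / (2.0 * rate * sigma ** 2))

print(shannon_limit_db(0.5))   # should land near 0.19 dB, the reference point for
                               # the 0.0045 dB and 0.04 dB gaps quoted above
```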
Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation
- IEEE TRANS. INFORM. THEORY, 2001
"... Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under messagepassing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densi ..."
Abstract
-
Cited by 244 (2 self)
- Add to MetaCart
(Show Context)
Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating means of Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.
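Note: a sketch of the one-dimensional mean recursion this abstract describes, for a (dv, dc)-regular ensemble under the usual "consistent Gaussian" assumption (variance equal to twice the mean). Here phi(m) = 1 - E[tanh(u/2)] with u ~ N(m, 2m) is evaluated by numerical integration and inverted by bisection, and decoding is declared successful when the check-to-variable mean diverges. Grid sizes, iteration counts, and the (3,6) example are arbitrary illustrative choices, not the paper's.

```python
import numpy as np

def phi(m, n=2001):
    """phi(m) = 1 - E[tanh(u/2)] for u ~ N(m, 2m) (consistent Gaussian); phi(0) = 1."""
    if m <= 0:
        return 1.0
    std = np.sqrt(2.0 * m)
    u = np.linspace(m - 10.0 * std, m + 10.0 * std, n)
    pdf = np.exp(-(u - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
    return 1.0 - np.sum(np.tanh(u / 2.0) * pdf) * (u[1] - u[0])

def phi_inv(y):
    """Invert phi by bisection; phi is strictly decreasing on (0, inf)."""
    lo, hi = 1e-10, 1e4
    for _ in range(60):
        mid = np.sqrt(lo * hi)                     # bisect on a log scale
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return lo

def converges(sigma, dv, dc, iters=100):
    """Run the one-dimensional mean recursion; True if the mean blows up (decoding succeeds)."""
    s = 2.0 / sigma ** 2                           # mean of the channel LLR (all-ones codeword)
    mu = 0.0                                       # mean of check-to-variable messages
    for _ in range(iters):
        mv = s + (dv - 1) * mu                     # variable-node update: means add
        mu = phi_inv(1.0 - (1.0 - phi(mv)) ** (dc - 1))   # check-node update via phi
        if mu > 1e3:                               # treat a huge mean as successful decoding
            return True
    return False

def ga_threshold(dv, dc):
    """Bisection for the largest noise standard deviation the recursion tolerates."""
    lo, hi = 0.5, 1.5
    for _ in range(20):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
    return lo

print(ga_threshold(3, 6))   # approximate BIAWGN noise threshold of the (3,6)-regular ensemble
```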
Improved low-density parity-check codes using irregular graphs
- IEEE Trans. Inform. Theory, 2001
"... Abstract—We construct new families of error-correcting codes based on Gallager’s low-density parity-check codes. We improve on Gallager’s results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for ..."
Abstract
-
Cited by 223 (15 self)
- Add to MetaCart
We construct new families of error-correcting codes based on Gallager’s low-density parity-check codes. We improve on Gallager’s results by introducing irregular parity-check matrices and a new rigorous analysis of hard-decision decoding of these codes. We also provide efficient methods for finding good irregular structures for such decoding algorithms. Our rigorous analysis based on martingales, our methodology for constructing good irregular codes, and the demonstration that irregular structure improves performance constitute key points of our contribution. We also consider irregular codes under belief propagation. We report the results of experiments testing the efficacy of irregular codes on both binary-symmetric and Gaussian channels. For example, using belief propagation, for rate 1/4 codes on 16 000 bits over a binary-symmetric channel, previous low-density parity-check codes can correct up to approximately 16% errors, while our codes correct over 17%. In some cases our results come very close to reported results for turbo codes, suggesting that variations of irregular low-density parity-check codes may be able to match or beat turbo code performance. Index Terms—Belief propagation, concentration theorem, Gallager codes, irregular codes, low-density parity-check codes.
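Note: as a small illustration of the irregular bipartite structures this abstract refers to (not the authors' construction, and with placeholder degree sequences rather than optimized ones), the sketch below samples a parity-check matrix by expanding each node into edge "sockets" and matching the two sides with a random permutation; parallel edges from the matching cancel modulo 2, so realized degrees can fall slightly below the targets.

```python
import numpy as np

def irregular_parity_check(var_degrees, chk_degrees, seed=0):
    """Sample a parity-check matrix from node degree sequences via a random socket matching."""
    assert sum(var_degrees) == sum(chk_degrees), "socket counts must match"
    rng = np.random.default_rng(seed)
    var_sockets = np.repeat(np.arange(len(var_degrees)), var_degrees)
    chk_sockets = np.repeat(np.arange(len(chk_degrees)), chk_degrees)
    rng.shuffle(chk_sockets)                      # random permutation = random edge matching
    H = np.zeros((len(chk_degrees), len(var_degrees)), dtype=int)
    np.add.at(H, (chk_sockets, var_sockets), 1)   # count edges between each (check, variable) pair
    return H % 2                                  # parallel edges cancel modulo 2

# Placeholder degree sequences: 12 variable nodes of degree 2 or 3, 7 check nodes of degree 4.
H = irregular_parity_check([2] * 8 + [3] * 4, [4] * 7)
print(H.shape)
print(H.sum(axis=0))   # realized column weights (<= target variable degrees)
print(H.sum(axis=1))   # realized row weights    (<= target check degrees)
```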