Results 1–10 of 53
On the Optimality of Solutions of the Max-Product Belief Propagation Algorithm in Arbitrary Graphs
, 2001
Abstract

Cited by 241 (13 self)
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones. Recently, good ...
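The tree-exactness claim in this abstract can be illustrated on the simplest tree, a chain: forward max-product messages followed by a backtrack recover the MAP assignment exactly. The sketch below is illustrative only (it is not from the paper); all names and the toy potentials are invented, and a brute-force search verifies the result.

```python
# Illustrative sketch: max-product message passing on a chain MRF,
# where it is exact because a chain is a tree. Not the paper's code;
# potentials phi/psi are random toy data.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3  # 4 variables, 3 states each
phi = rng.random((n, k)) + 0.1          # unary potentials phi_i(x_i)
psi = rng.random((n - 1, k, k)) + 0.1   # pairwise potentials psi_i(x_i, x_{i+1})

# Forward max-product messages m[i] into node i.
m = np.ones((n, k))
for i in range(n - 1):
    # max over x_i of phi_i(x_i) * m_i(x_i) * psi_i(x_i, x_{i+1})
    m[i + 1] = np.max(phi[i][:, None] * m[i][:, None] * psi[i], axis=0)

# Backtrack: the last node's max-marginal gives its MAP state; each
# predecessor is the argmax that produced the forwarded message.
x = np.zeros(n, dtype=int)
x[-1] = np.argmax(phi[-1] * m[-1])
for i in range(n - 2, -1, -1):
    x[i] = np.argmax(phi[i] * m[i] * psi[i][:, x[i + 1]])

# Brute-force check that message passing found the true MAP.
def joint(xs):
    p = np.prod([phi[i, xs[i]] for i in range(n)])
    return p * np.prod([psi[i, xs[i], xs[i + 1]] for i in range(n - 1)])

best = max(itertools.product(range(k), repeat=n), key=joint)
assert tuple(x) == best
print(tuple(x))
```

On graphs with cycles no such guarantee holds, which is exactly the gap the papers in this list analyze.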
Correctness of Local Probability Propagation in Graphical Models with Loops
, 2000
Abstract

Cited by 231 (8 self)
This article analyzes the behavior of local propagation rules in graphical models with a loop.
Belief Propagation and Revision in Networks with Loops
, 1997
Abstract

Cited by 77 (5 self)
Local belief propagation rules of the sort proposed by Pearl (1988) are guaranteed to converge to the optimal beliefs for singly connected networks. Recently, a number of researchers have empirically demonstrated good performance of these same algorithms on networks with loops, but a theoretical understanding of this performance has yet to be achieved. Here we lay a foundation for an understanding of belief propagation in networks with loops. For networks with a single loop, we derive an analytical relationship between the steady state beliefs in the loopy network and the true posterior probability. Using this relationship we show a category of networks for which the MAP estimate obtained by belief update and by belief revision can be proven to be optimal (although the beliefs will be incorrect). We show how nodes can use local information in the messages they receive in order to correct the steady state beliefs. Furthermore we prove that for all networks with a single loop, the MAP es...
Tree Consistency and Bounds on the Performance of the Max-Product Algorithm and Its Generalizations
, 2002
Abstract

Cited by 65 (5 self)
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles.
Soft-Input Soft-Output Modules for the Construction and Distributed Iterative Decoding of Code Networks
, 1998
Abstract

Cited by 59 (7 self)
Soft-input soft-output building blocks (modules) are presented to construct and iteratively decode, in a distributed fashion, code networks, a new concept that includes and generalizes various forms of concatenated coding schemes. Among the modules, a central role is played by the SISO module (and the underlying algorithm): it consists of a four-port device that processes the sequences of two input probability distributions by constraining them to the code trellis structure. The SISO and other soft-input soft-output modules are employed to construct and decode a variety of code networks, including "turbo codes" and serially concatenated codes with interleavers.
Keywords: Iterative decoding, turbo codes, serial concatenated codes, soft decoding algorithms.
I. Introduction. This paper concerns the construction and the distributed, iterative decoding of a conglomerate of codes that we call code networks, the name stemming from the complexity and richness of the poss...
Analysis, Design, and Iterative Decoding of Double Serially Concatenated Codes with Interleavers
 IEEE J. Select. Areas Commun
, 1998
Abstract

Cited by 28 (1 self)
A double serially concatenated code with two interleavers consists of the cascade of an outer encoder, an interleaver permuting the outer codeword bits, a middle encoder, another interleaver permuting the middle codeword bits, and an inner encoder whose input words are the permuted middle codewords. The construction can be generalized to h cascaded encoders separated by h − 1 interleavers, where h ≥ 3. We obtain upper bounds on the average maximum-likelihood bit-error probability of double serially concatenated block and convolutional coding schemes. Then, we derive design guidelines for the outer, middle, and inner codes that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a low-complexity iterative decoding algorithm. Comparisons with parallel concatenated convolutional codes, known as "turbo codes," and with the recently proposed serially concatenated convolutional codes are also presented, showing that in some cases, t...
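The cascade structure described in the first sentence (outer encoder, interleaver, middle encoder, interleaver, inner encoder) can be sketched with toy component codes. This is a structural illustration only: the three encoders below (repetition, parity-extension, accumulator) are invented stand-ins, not the codes analyzed in the paper.

```python
# Structural sketch of a double serial concatenation with two interleavers.
# The component codes are toy examples, not the paper's constructions.
import random

def repeat2(bits):        # toy rate-1/2 outer code: repeat each bit
    return [b for b in bits for _ in range(2)]

def parity_extend(bits):  # toy middle code: append an overall parity bit
    return bits + [sum(bits) % 2]

def accumulate(bits):     # toy rate-1 inner code: running mod-2 sum
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def interleave(bits, perm):
    return [bits[p] for p in perm]

def encode(info, rng):
    u = repeat2(info)
    u = interleave(u, rng.sample(range(len(u)), len(u)))  # permute outer codeword
    v = parity_extend(u)
    v = interleave(v, rng.sample(range(len(v)), len(v)))  # permute middle codeword
    return accumulate(v)                                   # inner encoder

rng = random.Random(1)
cw = encode([1, 0, 1, 1], rng)
print(len(cw))  # 9 coded bits for 4 information bits
```

Generalizing to h cascaded encoders simply chains h − 1 interleave-then-encode stages in the same way.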
Hybrid concatenated codes and iterative decoding
 in Proc. ISIT’97
, 1997
Abstract

Cited by 18 (0 self)
A hybrid concatenated code with two interleavers is the parallel concatenation of an encoder, which accepts the permuted version of the information sequence as its
On the fixed points of the maxproduct algorithm
, 2000
Abstract

Cited by 12 (1 self)
Graphical models, such as Bayesian networks and Markov random fields, represent statistical dependencies of variables by a graph. The max-product "belief propagation" algorithm is a local message-passing algorithm on this graph that is known to converge to a unique fixed point when the graph is a tree. Furthermore, when the graph is a tree, the assignment based on the fixed point yields the most probable a posteriori (MAP) values of the unobserved variables given the observed ones.
Early Detection and Trellis Splicing: Reduced-Complexity Iterative Decoding
 IEEE Journal on Selected Areas in Communications
, 1998
Abstract

Cited by 11 (0 self)
The excellent bit error rate performance of new soft iterative decoding algorithms (e.g., turbo codes) is achieved at the expense of a computationally burdensome iterative decoding procedure. In this paper, we present a new method called early detection that can be used to reduce the computational complexity of a variety of soft iterative decoding methods. Using a confidence criterion, some information symbols, state variables, and codeword symbols are detected early on in the iterative decoding procedure, leading to a reduction in the computational complexity of further processing. After presenting a general Markov random field framework (e.g., the Tanner graph) for compound codes, we use this framework to show how early detection leads to computational savings. We then present an easily implemented instance of this algorithm, called trellis splicing, that can be used with turbo codes. For a simulated turbo code system, at low BERs we obtain a reduction in computational complexity of over a factor of four relative to conventional turbo decoding, without any increase in BER.
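The confidence-criterion idea in this abstract, freezing symbols whose posterior is already decisive so later iterations skip them, can be sketched abstractly. This is a hedged illustration of the general thresholding principle, not the paper's algorithm; the threshold value and the toy posteriors are invented.

```python
# Hedged sketch of an early-detection confidence criterion: bits whose
# posterior exceeds a threshold are frozen and excluded from further
# iterations. Toy data only; not the paper's trellis-splicing method.
import numpy as np

rng = np.random.default_rng(2)
post = rng.dirichlet(np.ones(2) * 0.3, size=16)  # toy per-bit posteriors

THRESH = 0.95                              # hypothetical confidence threshold
frozen = post.max(axis=1) >= THRESH        # these bits are decided early
decided = post.argmax(axis=1)

active = np.flatnonzero(~frozen)           # only these bits stay in the loop
print(f"{frozen.sum()} of {len(post)} bits decided early; "
      f"{active.size} remain active")
```

The computational saving comes from running subsequent decoding iterations only over the `active` index set.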
Serial Turbo trellis coded modulation with rate-1 inner code
 in Proc. Intern. Symp. on Information Theory
, 2000
Abstract

Cited by 11 (0 self)
The main objective of this paper is to develop new, low-complexity turbo codes suitable for bandwidth- and power-limited systems with very low bit and word error rate requirements. Motivated by the structure of recently discovered low-complexity codes such as Repeat-Accumulate (RA) codes with low-density parity-check matrix, we extend the structure to high-level modulations such as 8-PSK and 16-QAM. The structure consists of a simple 4-state convolutional or short block code as an outer code, and a rate-1, 2- or 4-state inner code. The inner code and the mapping are jointly optimized based on maximizing the effective free Euclidean distance of the inner TCM.