Results 1-10 of 579
Learning in graphical models
Statistical Science, 2004
Cited by 800 (10 self)
Abstract
Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing.

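As a minimal illustration of the message-passing idea behind the "algorithmic ideas" this abstract mentions, here is a sum-product computation of one marginal on a toy three-variable chain. The potentials are made-up numbers, not from any model in the article:

```python
import numpy as np

# Toy sum-product (belief propagation) on the chain x1 - x2 - x3, binary
# variables. The marginal of x2 factorizes into one incoming message per
# neighbouring potential, so no full joint table is ever built.

psi12 = np.array([[2.0, 1.0],
                  [1.0, 1.0]])   # psi12[x1, x2], arbitrary example values
psi23 = np.array([[1.0, 2.0],
                  [1.0, 1.0]])   # psi23[x2, x3]

m_from_1 = psi12.sum(axis=0)     # message to x2: sum over x1
m_from_3 = psi23.sum(axis=1)     # message to x2: sum over x3

marginal_x2 = m_from_1 * m_from_3
marginal_x2 /= marginal_x2.sum()
print(marginal_x2)               # [9/13, 4/13] ~ [0.692, 0.308]
```

On a chain this is exact; the same local update rule, run iteratively, is what gets deployed on the large loopy graphs the abstract describes.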
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
2001
Cited by 569 (9 self)
Abstract
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, transmitting at rates above this capacity the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.

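For one concrete special case of the density-evolution idea this abstract describes, the binary erasure channel with a regular (dv, dc) ensemble admits a one-line recursion, and the "capacity" (threshold) can be found numerically. This is my simplification for illustration; the paper's method covers general binary-input memoryless channels:

```python
# Density evolution on the binary erasure channel: the erasure probability
# x of variable-to-check messages under message passing evolves as
#   x <- eps * (1 - (1 - x)^(dc-1))^(dv-1),
# where eps is the channel erasure rate. The threshold is the largest eps
# for which this recursion is driven to zero.

def de_threshold(dv=3, dc=6, iters=1000, tol=1e-8):
    """Binary-search the largest erasure rate eps for which density
    evolution drives the message erasure probability to ~0."""
    lo, hi = 0.0, 1.0
    for _ in range(40):
        eps = 0.5 * (lo + hi)
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                break
        if x < tol:
            lo = eps   # decoding succeeds: try a larger erasure rate
        else:
            hi = eps   # decoding fails: shrink the erasure rate
    return lo

# The (3,6)-regular ensemble has a known threshold of about 0.4294.
print(round(de_threshold(), 3))
```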
Efficient erasure correcting codes
IEEE Transactions on Information Theory, 2001
Cited by 365 (27 self)
Abstract
Abstract—We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct, for any given rate R and any given real number ε, a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1 + ε)Rn or more. The recovery algorithm also runs in time proportional to ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed. Index Terms—Erasure channel, large deviation analysis, low-density parity-check codes.

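The core of the recovery algorithm this abstract describes is a peeling process: repeatedly find a parity check with exactly one erased variable and fix it as the XOR of the known ones. A minimal sketch on a toy graph (my example, not the cascade construction analyzed in the paper):

```python
# Minimal peeling-style erasure recovery on a bipartite graph.

def peel(checks, word, erased):
    """checks: list of variable-index lists; erased: set of erased positions.
    Returns the recovered word, or None if the decoder stalls."""
    word, erased = list(word), set(erased)
    progress = True
    while erased and progress:
        progress = False
        for var_idxs in checks:
            unknown = [v for v in var_idxs if v in erased]
            if len(unknown) == 1:            # degree-one check: solvable
                v = unknown[0]
                word[v] = 0
                for u in var_idxs:
                    if u != v:
                        word[v] ^= word[u]   # parity pins down the value
                erased.discard(v)
                progress = True
    return word if not erased else None

checks = [[0, 1, 2], [2, 3, 4], [4, 5, 0]]
codeword = [1, 0, 1, 1, 0, 1]                 # satisfies every check (XOR = 0)
print(peel(checks, codeword, erased={2, 3}))  # recovers [1, 0, 1, 1, 0, 1]
```

The degree-fraction criterion in the abstract is exactly the condition under which this process keeps finding degree-one checks and never stalls, with high probability.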
Low-density parity-check codes based on finite geometries: A rediscovery and new results
IEEE Trans. Inform. Theory, 2001
Cited by 182 (7 self)
Abstract
This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and their Tanner graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.

Wireless information-theoretic security - part I: Theoretical aspects
IEEE Trans. on Information Theory, 2006
Cited by 155 (12 self)
Abstract
In this two-part paper, we consider the transmission of confidential data over wireless wiretap channels. The first part presents an information-theoretic problem formulation in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through another independent quasi-static fading channel. We define the secrecy capacity in terms of outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. In sharp contrast with known results for Gaussian wiretap channels (without feedback), our contribution shows that in the presence of fading information-theoretic security is achievable even when the eavesdropper has a better average signal-to-noise ratio (SNR) than the legitimate receiver — fading thus turns out to be a friend and not a foe. The issue of imperfect channel state information is also addressed. Practical schemes for wireless information-theoretic security are presented in Part II, which in some cases come close to the secrecy capacity limits given in this paper.

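The "fading is a friend" observation can be checked numerically. A Monte-Carlo sketch of the outage formulation (my illustration: the Rayleigh-fading assumption, parameter names, and the instantaneous secrecy-capacity expression Cs = [log2(1 + γ_M) - log2(1 + γ_E)]^+ are standard but not quoted from the paper):

```python
import math, random

def secrecy_outage(rate_s, snr_m, snr_e, trials=200_000, seed=1):
    """Probability that the instantaneous secrecy capacity falls below a
    target secrecy rate rate_s, under independent Rayleigh fading on the
    main (snr_m) and eavesdropper (snr_e) links."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_m = snr_m * rng.expovariate(1.0)   # |h|^2 ~ Exp(1): Rayleigh fading
        g_e = snr_e * rng.expovariate(1.0)
        cs = max(0.0, math.log2(1 + g_m) - math.log2(1 + g_e))
        outages += cs < rate_s
    return outages / trials

# Even with the eavesdropper at a *higher* average SNR, the outage
# probability stays strictly below one: fading realizations where the main
# channel momentarily beats the eavesdropper's still carry secure bits.
print(secrecy_outage(rate_s=0.1, snr_m=5.0, snr_e=10.0))
```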
Bayesian compressive sensing via belief propagation
IEEE Trans. Signal Processing, 2010
Cited by 129 (19 self)
Abstract
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding is provided using sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log^2(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.

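A tiny sketch of the *encoding* side only: a sparse measurement matrix with a fixed small number d of nonzeros per column, so computing y = Φx costs O(dN) rather than O(MN). The BP decoder itself is beyond this snippet, and the column weight d and the constant in M = O(K log N) are my choices, not the paper's:

```python
import numpy as np

def sparse_measurement_matrix(m, n, d=3, seed=0):
    """m x n matrix with exactly d random +/-1 entries per column; its
    sparsity is also what keeps the associated factor graph loosely looped."""
    rng = np.random.default_rng(seed)
    Phi = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=d, replace=False)
        Phi[rows, col] = rng.choice([-1.0, 1.0], size=d)
    return Phi

n, k = 256, 8
rng = np.random.default_rng(1)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)  # K-sparse
m = int(3 * k * np.log(n))        # on the order of K log N measurements
Phi = sparse_measurement_matrix(m, n)
y = Phi @ x                       # cheap: only 3 nonzeros per column
print(Phi.shape, np.count_nonzero(Phi, axis=0).max())
```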
Capacity of MIMO systems with antenna selection
In Proc. IEEE Int. Conf. Commun., 2001
Cited by 120 (15 self)
Abstract
We consider the capacity of multiple-input multiple-output systems with reduced complexity. One link end uses all available antennas, while the other chooses the L out of N antennas that maximize capacity. We derive an upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-square-distributed variables. This bound is then evaluated analytically and compared to the results obtained by Monte Carlo simulations. Our results show that the achieved capacity is close to the capacity of a full-complexity system provided that L is at least as large as the number of antennas at the other link end. For example, for L = 3, N = 8 antennas at the receiver and three antennas at the transmitter, the capacity of the reduced-complexity scheme is 20 bits/s/Hz compared to 23 bits/s/Hz of a full-complexity scheme. We also present a suboptimum antenna subset selection algorithm that has a complexity of N^2, compared to the optimum algorithm with a complexity of (N choose L).

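The optimum (N choose L) selection rule described above can be sketched by brute force. This is an illustration only; the SNR of 20 dB and the i.i.d. Rayleigh channel are my assumptions, not parameters quoted from the paper:

```python
import itertools
import numpy as np

def selection_capacity(H, L, snr):
    """H: N x Mt channel matrix (receive x transmit antennas). Exhaustively
    pick the L receive antennas maximizing log2 det(I + (snr/Mt) Hs Hs^H)."""
    N, Mt = H.shape
    best_cap, best_sub = -np.inf, None
    for sub in itertools.combinations(range(N), L):   # (N choose L) subsets
        Hs = H[list(sub), :]
        M = np.eye(L) + (snr / Mt) * Hs @ Hs.conj().T
        cap = np.log2(np.linalg.det(M).real)
        if cap > best_cap:
            best_cap, best_sub = cap, sub
    return best_cap, best_sub

rng = np.random.default_rng(0)
# i.i.d. Rayleigh channel, N=8 receive and Mt=3 transmit antennas, ~20 dB SNR
H = (rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))) / np.sqrt(2)
cap, sub = selection_capacity(H, L=3, snr=100.0)
print(round(cap, 2), sub)
```

The suboptimum N^2 algorithm mentioned in the abstract avoids this exhaustive search; the brute-force version above is only feasible for small N.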
Iterative multiuser joint decoding: unified framework and asymptotic analysis
IEEE Trans. Inform. Theory, 2002
Cited by 119 (3 self)
Abstract
Abstract—We present a framework for iterative multiuser joint decoding of code-division multiple-access (CDMA) signals, based on the factor-graph representation and on the sum-product algorithm. In this framework, known parallel and serial, hard and soft interference cancellation algorithms are derived in a unified way. The asymptotic performance of these algorithms in the limit of large code block length can be rigorously analyzed by using density evolution. We show that, for random spreading in the large-system limit, density evolution is considerably simplified. Moreover, by making a Gaussian approximation of the decoder soft output, we show that the behavior of iterative multiuser joint decoding is approximately characterized by the stable fixed points of a simple one-dimensional nonlinear dynamical system. Index Terms—Density evolution, interference cancellation, iterative decoding, multiuser detection (MUD).