Results 1 - 10 of 589
Learning in graphical models
- Statistical Science, 2004
"... Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for ..."
Abstract - Cited by 806 (10 self)
Statistical applications in fields such as bioinformatics, information retrieval, speech processing, image processing and communications often involve large-scale models in which thousands or millions of random variables are linked in complex ways. Graphical models provide a general methodology for approaching these problems, and indeed many of the models developed by researchers in these applied fields are instances of the general graphical model formalism. We review some of the basic ideas underlying graphical models, including the algorithmic ideas that allow graphical models to be deployed in large-scale data analysis problems. We also present examples of graphical models in bioinformatics, error-control coding and language processing.
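The core algorithmic idea behind deploying graphical models at scale, sum-product message passing, is easy to see on a tiny example. Below is a minimal sketch (not from the paper; the potentials and sizes are made up) of exact inference on a three-variable chain x1 - x2 - x3: the marginal of the middle variable is the normalized product of the messages arriving from both ends.

    import numpy as np

    # Hypothetical pairwise potentials on a binary chain x1 - x2 - x3.
    psi12 = np.array([[2.0, 1.0], [1.0, 3.0]])   # psi12[x1, x2]
    psi23 = np.array([[1.0, 2.0], [4.0, 1.0]])   # psi23[x2, x3]

    m1to2 = psi12.sum(axis=0)    # message x1 -> x2: sum out x1
    m3to2 = psi23.sum(axis=1)    # message x3 -> x2: sum out x3

    belief2 = m1to2 * m3to2      # unnormalized marginal of x2
    p2 = belief2 / belief2.sum()
    print("P(x2) =", p2)

    # Brute-force check over all eight joint configurations.
    joint = psi12[:, :, None] * psi23[None, :, :]
    assert np.allclose(p2, joint.sum(axis=(0, 2)) / joint.sum())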
The Capacity of Low-Density Parity-Check Codes Under Message-Passing Decoding
, 2001
"... In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. Transmitting at rates below this capacity, a randomly chos ..."
Abstract - Cited by 574 (9 self)
In this paper, we present a general method for determining the capacity of low-density parity-check (LDPC) codes under message-passing decoding when used over any binary-input memoryless channel with discrete or continuous output alphabets. When transmitting at rates below this capacity, a randomly chosen element of the given ensemble will achieve an arbitrarily small target probability of error with a probability that approaches one exponentially fast in the length of the code. (By concatenating with an appropriate outer code, one can achieve a probability of error that approaches zero exponentially fast in the length of the code with arbitrarily small loss in rate.) Conversely, when transmitting at rates above this capacity, the probability of error is bounded away from zero by a strictly positive constant which is independent of the length of the code and of the number of iterations performed. Our results are based on the observation that the concentration of the performance of the decoder around its average performance, as observed by Luby et al. [1] in the case of a binary-symmetric channel and a binary message-passing algorithm, is a general phenomenon. For the particularly important case of belief-propagation decoders, we provide an effective algorithm to determine the corresponding capacity to any desired degree of accuracy. The ideas presented in this paper are broadly applicable, and extensions of the general method to low-density parity-check codes over larger alphabets, turbo codes, and other concatenated coding schemes are outlined.
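For the binary erasure channel the method specializes to a closed-form recursion, which makes the capacity (threshold) computation easy to demonstrate. The sketch below (an assumed special case, not the paper's general algorithm) tracks the erased-message fraction of a regular (3,6) ensemble and bisects on the channel erasure rate:

    # Density evolution for a (3,6)-regular LDPC ensemble on the BEC:
    # x_{l+1} = eps * (1 - (1 - x_l)**(dc - 1))**(dv - 1).
    def converges(eps, dv=3, dc=6, iters=5000, tol=1e-12):
        x = eps
        for _ in range(iters):
            x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if x < tol:
                return True
        return False

    lo, hi = 0.0, 1.0
    for _ in range(40):                  # bisect on the erasure probability
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if converges(mid) else (lo, mid)
    print("estimated (3,6) BEC threshold:", round(lo, 4))   # about 0.429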
Efficient erasure correcting codes
- IEEE Transactions on Information Theory, 2001
"... We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both si ..."
Abstract - Cited by 360 (26 self)
We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both sides of the graph which is necessary and sufficient for the decoding process to finish successfully with high probability. By carefully designing these graphs we can construct, for any given rate R and any given real number ε, a family of linear codes of rate R which can be encoded in time proportional to ln(1/ε) times their block length. Furthermore, a codeword can be recovered with high probability from a portion of its entries of length (1 + ε) times the message length or more. The recovery algorithm also runs in time proportional to ln(1/ε). Our algorithms have been implemented and work well in practice; various implementation issues are discussed.
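The decoder being analyzed is a peeling process: as long as some check has exactly one erased neighbor, that bit can be solved for and removed. The sketch below (a random sparse graph with assumed sizes and density, not the paper's cascade construction; for illustration the decoder is handed the check values directly) shows the process:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 120, 60
    H = (rng.random((m, n)) < 0.08).astype(int)   # assumed sparse bipartite graph
    x = rng.integers(0, 2, n)                     # transmitted bits
    s = H @ x % 2                                 # check values known to the decoder

    erased = set(rng.choice(n, size=24, replace=False).tolist())
    y = np.where([j in erased for j in range(n)], -1, x)   # -1 marks an erasure

    progress = True
    while erased and progress:
        progress = False
        for i in range(m):
            nbrs = np.nonzero(H[i])[0]
            unknown = [j for j in nbrs if j in erased]
            if len(unknown) == 1:                 # a check with one erased neighbor
                j = unknown[0]
                y[j] = (s[i] - sum(y[k] for k in nbrs if k != j)) % 2
                erased.remove(j)
                progress = True

    print("unrecovered erasures:", len(erased))
    assert all(y[j] == x[j] for j in range(n) if j not in erased)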
Low-density parity-check codes based on finite geometries: A rediscovery and new results
- IEEE Transactions on Information Theory, 2001
"... This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances and thei ..."
Abstract - Cited by 186 (8 self)
This paper presents a geometric approach to the construction of low-density parity-check (LDPC) codes. Four classes of LDPC codes are constructed based on the lines and points of Euclidean and projective geometries over finite fields. Codes of these four classes have good minimum distances, and their Tanner graphs have girth 6. Finite-geometry LDPC codes can be decoded in various ways, ranging from low to high decoding complexity and from reasonably good to very good performance. They perform very well with iterative decoding. Furthermore, they can be put in either cyclic or quasi-cyclic form. Consequently, their encoding can be achieved in linear time and implemented with simple feedback shift registers. This advantage is not shared by other LDPC codes in general and is important in practice. Finite-geometry LDPC codes can be extended and shortened in various ways to obtain other good LDPC codes. Several techniques of extension and shortening are presented. Long extended finite-geometry LDPC codes have been constructed, and they achieve a performance only a few tenths of a decibel away from the Shannon theoretical limit with iterative decoding.
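The girth-6 property follows from a basic fact of projective planes: two distinct lines meet in exactly one point, so the point-line Tanner graph has no 4-cycles. A minimal sketch (using the smallest case, PG(2,2), the Fano plane, chosen here for illustration rather than taken from the paper) checks this directly:

    import numpy as np
    from itertools import combinations

    # The seven lines of the Fano plane PG(2,2) over its seven points.
    lines = [(0,1,2), (0,3,4), (0,5,6), (1,3,5), (1,4,6), (2,3,6), (2,4,5)]
    H = np.zeros((7, 7), dtype=int)
    for i, line in enumerate(lines):
        H[i, list(line)] = 1

    # No 4-cycle in the Tanner graph <=> no two rows share two or more columns.
    overlap = max(int((H[a] & H[b]).sum()) for a, b in combinations(range(7), 2))
    print("max pairwise line overlap:", overlap)   # 1, so girth >= 6
    # Three lines meeting pairwise in three distinct points (e.g. the first,
    # second, and fourth lines, through points 0, 3, and 1) form a 6-cycle,
    # so the girth is exactly 6.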
Wireless information-theoretic security - Part I: Theoretical aspects
- IEEE Transactions on Information Theory, 2006
"... In this two-part paper, we consider the transmission of confidential data over wireless wiretap channels. The first part presents an information-theoretic problem formulation in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissi ..."
Abstract - Cited by 162 (12 self)
In this two-part paper, we consider the transmission of confidential data over wireless wiretap channels. The first part presents an information-theoretic problem formulation in which two legitimate partners communicate over a quasi-static fading channel and an eavesdropper observes their transmissions through another independent quasi-static fading channel. We define the secrecy capacity in terms of outage probability and provide a complete characterization of the maximum transmission rate at which the eavesdropper is unable to decode any information. In sharp contrast with known results for Gaussian wiretap channels (without feedback), our contribution shows that in the presence of fading, information-theoretic security is achievable even when the eavesdropper has a better average signal-to-noise ratio (SNR) than the legitimate receiver; fading thus turns out to be a friend and not a foe. The issue of imperfect channel state information is also addressed. Practical schemes for wireless information-theoretic security are presented in Part II; in some cases they come close to the secrecy capacity limits given in this paper.
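The headline claim is easy to reproduce numerically. The sketch below (assumed average SNRs and target rate; Rayleigh fading, so the channel power gains are exponential) estimates the probability of a positive secrecy capacity and the secrecy outage probability when the eavesdropper's average SNR is higher than the legitimate receiver's:

    import numpy as np

    rng = np.random.default_rng(1)
    trials = 200_000
    snr_m, snr_e = 5.0, 10.0        # eavesdropper has the better average SNR
    R_s = 0.1                       # assumed target secrecy rate, bit/s/Hz

    g_m = rng.exponential(1.0, trials)   # quasi-static Rayleigh power gains
    g_e = rng.exponential(1.0, trials)
    c_s = np.maximum(0.0, np.log2(1 + snr_m * g_m) - np.log2(1 + snr_e * g_e))

    print("P(C_s > 0)         :", (c_s > 0).mean())
    print("P(C_s < R_s) outage:", (c_s < R_s).mean())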
Capacity of MIMO systems with antenna selection, 2005
"... We consider the capacity of multiple-input multiple-output systems with reduced complexity. One link-end uses all available antennas, while the other chooses the L out of N antennas that maximize capacity. We derive an upper bound on the capacity that can be expressed sa sthe sum of the logarithms o ..."
Abstract - Cited by 126 (14 self)
We consider the capacity of multiple-input multiple-output systems with reduced complexity. One link-end uses all available antennas, while the other chooses the L out of N antennas that maximize capacity. We derive an upper bound on the capacity that can be expressed as the sum of the logarithms of ordered chi-square-distributed variables. This bound is then evaluated analytically and compared to the results obtained by Monte Carlo simulations. Our results show that the achieved capacity is close to the capacity of a full-complexity system provided that L is at least as large as the number of antennas at the other link-end. For example, for L = 3, N = 8 antennas at the receiver and three antennas at the transmitter, the capacity of the reduced-complexity scheme is 20 bits/s/Hz compared to 23 bits/s/Hz of a full-complexity scheme. We also present a suboptimum antenna subset selection algorithm that has a complexity of N^2, compared to the optimum algorithm with a complexity of (N choose L).
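A quick way to see the selection procedure is a direct Monte Carlo sketch (assumed i.i.d. Rayleigh channel and SNR; it evaluates the abstract's N = 8, L = 3 receive-selection setup with three transmit antennas by exhaustive search over all N-choose-L subsets, rather than the paper's analytical bound):

    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(2)
    N, L, Nt, snr = 8, 3, 3, 100.0     # sizes from the abstract; SNR assumed

    def capacity(Hs, snr, nt):
        g = np.eye(Hs.shape[0]) + (snr / nt) * (Hs @ Hs.conj().T)
        return np.linalg.slogdet(g)[1] / np.log(2)    # log2 det, numerically stable

    H = (rng.standard_normal((N, Nt)) + 1j * rng.standard_normal((N, Nt))) / np.sqrt(2)
    best = max(capacity(H[list(s)], snr, Nt) for s in combinations(range(N), L))
    full = capacity(H, snr, Nt)
    print(f"selected {L} of {N}: {best:.1f} bit/s/Hz, full complexity: {full:.1f} bit/s/Hz")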
Bayesian compressive sensing via belief propagation
- IEEE Transactions on Signal Processing, 2010
"... Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can comple ..."
Abstract - Cited by 125 (19 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding are provided using sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log^2(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.
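The measurement side is straightforward to sketch. Below, a length-N signal with K nonzeros is measured through a sparse random matrix using on the order of K log N rows; the paper's belief-propagation decoder is too involved to reproduce here, so orthogonal matching pursuit serves as a simple stand-in (all sizes and the matrix density are assumptions for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    N, K = 256, 8
    M = int(4 * K * np.log(N))                  # O(K log N) measurements

    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = 5 * rng.standard_normal(K)

    # Sparse +/-1 encoding matrix, about 10% nonzero entries (assumed density).
    Phi = (rng.random((M, N)) < 0.1) * rng.choice([-1.0, 1.0], (M, N))
    y = Phi @ x                                 # the CS encoding

    support, r = [], y.copy()
    for _ in range(K):                          # OMP: grow the support greedily
        support.append(int(np.argmax(np.abs(Phi.T @ r))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef

    xh = np.zeros(N)
    xh[support] = coef
    print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))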