Results 11–20 of 487
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
 IEEE Trans. Inform. Theory
, 2005
Abstract

Cited by 114 (16 self)
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding
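The abstract turns on the fact that a locally operating decoder cannot tell a Tanner graph from any of its covers. A minimal sketch of how a random m-cover is built may make this concrete; the function name and edge representation are my own illustration, not code from the paper.

```python
import random

def random_cover(edges, m, seed=0):
    """Build a random m-cover of a base (Tanner) graph.

    edges: list of (variable, check) pairs of the base graph.
    Each base edge lifts to m edges, matched up by a random
    permutation of the m copies, so every local neighbourhood of the
    cover looks identical to the base graph -- the property that makes
    covers indistinguishable to message-passing decoders.
    """
    rng = random.Random(seed)
    lifted = []
    for (v, c) in edges:
        perm = list(range(m))
        rng.shuffle(perm)  # one random matching per base edge
        for i in range(m):
            lifted.append(((v, i), (c, perm[i])))
    return lifted
```

Because each permutation is a bijection, every lifted node inherits exactly the degree of the base node it covers.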
Analyzing the Turbo Decoder Using the Gaussian Approximation
 IEEE Trans. Inform. Theory
, 2001
Abstract

Cited by 90 (0 self)
In this paper, we introduce a simple technique for analyzing the iterative decoder that is broadly applicable to different classes of codes defined over graphs in certain fading as well as additive white Gaussian noise (AWGN) channels. The technique is based on the observation that the extrinsic information from constituent maximum a posteriori (MAP) decoders is well approximated by Gaussian random variables when the inputs to the decoders are Gaussian. The independent Gaussian model implies the existence of an iterative decoder threshold that statistically characterizes the convergence of the iterative decoder. Specifically, the iterative decoder converges to zero probability of error as the number of iterations increases if and only if the channel SNR exceeds the threshold. Despite the idealization of the model and the simplicity of the analysis technique, the predicted threshold values are in excellent agreement with the waterfall regions observed experimentally in the literature when the codeword lengths are large. Examples are given for parallel concatenated convolutional codes, serially concatenated convolutional codes, and the generalized low-density parity-check (LDPC) codes of Gallager and Cheng-McEliece. Convergence-based design of asymmetric parallel concatenated convolutional codes (PCCC) is also discussed.
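The one-dimensional recursion behind this kind of Gaussian approximation can be sketched for a regular LDPC ensemble on the AWGN channel: track only the mean of the (consistent Gaussian) messages through one variable-node and one check-node update per iteration. This is a hedged illustration using the standard phi-function formulation of Gaussian-approximation density evolution, not code from the paper; the names and the numerically computed phi are my own, and the thresholds it produces are approximate.

```python
import math

def phi(m, npts=400):
    """phi(m) = 1 - E[tanh(u/2)] for u ~ N(m, 2m); phi(0) = 1.
    Computed by trapezoidal quadrature over +/- 8 standard deviations."""
    if m <= 0:
        return 1.0
    sd = math.sqrt(2.0 * m)
    lo, hi = m - 8.0 * sd, m + 8.0 * sd
    step = (hi - lo) / npts
    acc = 0.0
    for i in range(npts + 1):
        u = lo + i * step
        w = math.exp(-(u - m) ** 2 / (4.0 * m)) / math.sqrt(4.0 * math.pi * m)
        t = math.tanh(u / 2.0) * w
        acc += t if 0 < i < npts else 0.5 * t  # trapezoid endpoints
    return 1.0 - acc * step

def phi_inv(y, hi=400.0):
    """Invert the monotone-decreasing phi by bisection."""
    lo = 0.0
    for _ in range(40):
        mid = 0.5 * (lo + hi)
        if phi(mid) > y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ga_converges(sigma, dv=3, dc=6, iters=100, target=40.0):
    """Mean evolution for a (dv,dc)-regular LDPC code on the AWGN
    channel under the all-Gaussian approximation.  Returns True if the
    message mean blows up (decoder converges to zero error)."""
    m0 = 2.0 / sigma ** 2   # mean of channel LLRs for BPSK over AWGN
    mu = 0.0                # check-to-variable message mean
    for _ in range(iters):
        mv = m0 + (dv - 1) * mu                          # variable-node update
        mu = phi_inv(1.0 - (1.0 - phi(mv)) ** (dc - 1))  # check-node update
        if mv > target:
            return True
    return False
```

Sweeping `sigma` with this predicate brackets the decoder threshold, which for the (3,6) ensemble sits near sigma = 0.88.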
Weaknesses of Margulis and Ramanujan-Margulis Low-Density Parity-Check Codes
 Electronic Notes in Theoretical Computer Science
, 2003
Abstract

Cited by 88 (1 self)
We report weaknesses in two algebraic constructions of low-density parity-check codes based on expander graphs. The Margulis construction gives a code with near-codewords, which cause problems for the sum-product decoder; the Ramanujan-Margulis construction gives a code with low-weight codewords, which produce an error floor.
Analysis of Low Density Codes and Improved Designs Using Irregular Graphs
, 1998
Abstract

Cited by 87 (12 self)
In [6], Gallager introduces a family of codes based on sparse bipartite graphs, which he calls low-density parity-check codes. He suggests a natural decoding algorithm for these codes, and proves a good bound on the fraction of errors that can be corrected. As the codes that Gallager builds are derived from regular graphs, we refer to them as regular codes. Following the general approach introduced in [7] for the design and analysis of erasure codes, we consider error-correcting codes based on random irregular bipartite graphs, which we call irregular codes. We introduce tools based on linear programming for designing linear-time irregular codes with better error-correcting capabilities than possible with regular codes. For example, the decoding algorithm for the rate-1/2 regular codes of Gallager can provably correct up to 5.17% errors asymptotically, whereas we have found irregular codes for which our decoding algorithm can provably correct up to 6.27% errors asymptotically. We incl...
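The degree-distribution analysis behind irregular codes is easiest to see on the erasure channel of the cited approach [7], where density evolution collapses to a one-dimensional recursion in the erasure fraction. The sketch below is my own illustration (the paper above treats bit errors, not erasures): it bisects for the largest channel erasure probability a given degree-distribution pair can tolerate.

```python
def erasure_de_threshold(lam, rho, tol=1e-4):
    """Largest erasure probability eps for which message-passing
    decoding of an irregular LDPC ensemble succeeds on the BEC.

    lam, rho: edge-perspective degree distributions, {degree: edge fraction}.
    BEC density evolution: x <- eps * lam(1 - rho(1 - x)).
    """
    def lam_f(x):
        return sum(c * x ** (d - 1) for d, c in lam.items())
    def rho_f(x):
        return sum(c * x ** (d - 1) for d, c in rho.items())

    def decodes(eps):
        x = eps
        for _ in range(5000):
            x_new = eps * lam_f(1.0 - rho_f(1.0 - x))
            if x_new < 1e-9:          # erasure fraction driven to zero
                return True
            if abs(x - x_new) < 1e-12:  # stuck at a nonzero fixed point
                return False
            x = x_new
        return False

    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if decodes(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

For the (3,6)-regular ensemble (lam = {3: 1.0}, rho = {6: 1.0}) this recovers the well-known BEC threshold of about 0.4294, and swapping in irregular distributions shows the improvement the abstract describes.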
EXIT charts of irregular codes
, 2002
Abstract

Cited by 83 (6 self)
We study the convergence behavior of iterative decoding of a serially concatenated code. We rederive an existing analysis technique called the EXIT chart [15] and show that for certain decoders the construction of an EXIT chart simplifies tremendously. The findings are extended such that simple irregular codes can be constructed, which can be used to improve the convergence of the iterative decoding algorithm significantly. An efficient and optimal optimization algorithm is presented. Finally, some results on thresholds for decoding convergence are outlined.
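As a toy illustration of the chart the abstract refers to: an EXIT chart tracks the extrinsic mutual information exchanged between two constituent decoders, and iterative decoding succeeds when the composed transfer curves climb toward 1 without intersecting. The transfer functions below are made-up monotone placeholders, not the paper's actual decoder characteristics.

```python
def exit_trajectory(t_inner, t_outer, steps=50):
    """Decoding trajectory between two EXIT transfer curves
    mapping [0,1] -> [0,1]; each step feeds one decoder's extrinsic
    output to the other as a priori information."""
    info = 0.0
    traj = [info]
    for _ in range(steps):
        info = t_outer(t_inner(info))
        traj.append(info)
    return traj

# Hypothetical transfer curves whose tunnel stays open all the way to 1.
def inner(i):
    return 0.5 + 0.5 * i

def outer(i):
    return i ** 0.5
```

If the curves touched below 1, the trajectory would stall at the crossing point; the gap between the curves is what irregular code design widens.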
On the construction of some capacityapproaching coding schemes
, 2000
Abstract

Cited by 82 (2 self)
This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions: one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon’s rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution
Evaluation of Gallager Codes for Short Block Length and High Rate Applications
 In Codes, Systems and Graphical Models
, 1999
Abstract

Cited by 81 (9 self)
Gallager codes with large block length and low rate (e.g., N ≈ 10,000–40,000, R ≈ 0.25–0.5) have been shown to have record-breaking performance for low signal-to-noise applications. In this paper we study Gallager codes at the other end of the spectrum. We first explore the theoretical properties of binary Gallager codes with very high rates and observe that Gallager codes of any rate offer run-length-limiting properties at no additional cost. We then report the empirical performance of high-rate binary and non-binary Gallager codes on three channels: the binary-input Gaussian channel, the binary symmetric channel, and the 16-ary symmetric channel. We find that Gallager codes with rate R = 8/9 and block length N = 1998 bits outperform comparable BCH and Reed-Solomon codes (decoded by a hard-input decoder) by more than a decibel on the Gaussian channel. Please note this is a rough draft paper, not intended for widespread circulation. Updates to this paper will appear here: http://www....
Binary intersymbol interference channels: Gallager codes, density evolution and code performance bounds
 IEEE Trans. Inform. Theory
, 2003
Abstract

Cited by 69 (8 self)
We study the limits of performance of Gallager codes (low-density parity-check (LDPC) codes) over binary linear intersymbol interference (ISI) channels with additive white Gaussian noise (AWGN). Using the graph representations of the channel, the code, and the sum–product message-passing detector/decoder, we prove two error concentration theorems. Our proofs expand on previous work by handling complications introduced by the channel memory. We circumvent these problems by considering not just linear Gallager codes but also their cosets and by distinguishing between different types of message flow neighborhoods depending on the actual transmitted symbols. We compute the noise tolerance threshold using a suitably developed density evolution algorithm and verify, by simulation, that the thresholds represent accurate predictions of the performance of the iterative sum–product algorithm for finite (but large) block lengths. We also demonstrate that for high rates, the thresholds are very close to the theoretical limit of performance for Gallager codes over ISI channels. If C denotes the capacity of a binary ISI channel and if C_iid denotes the maximal achievable mutual information rate when the channel inputs are independent and identically distributed (i.i.d.) binary random variables (C_iid ≤ C), we prove that the maximum information rate achievable by the sum–product decoder of a Gallager (coset) code is upper-bounded by C_iid. The last topic investigated is the performance limit of the decoder if the trellis portion of the sum–product algorithm is executed only once; this demonstrates the potential for trading off the computational requirements and the performance of the decoder.
Design of serially concatenated systems depending on the block length
 IEEE Transactions on Communications
, 2004
LH*RS: a high-availability scalable distributed data structure
Abstract

Cited by 59 (11 self)
(SDDS). An LH*RS file is hash partitioned over the distributed RAM of a multicomputer, e.g., a network of PCs, and supports the unavailability of any of its k ≥ 1 server nodes. The value of k transparently grows with the file to offset the reliability decline. Only the number of the storage nodes potentially limits the file growth. The high-availability management uses a novel parity calculus that we have developed, based on Reed-Solomon erasure-correcting coding. The resulting parity storage overhead is close to the minimal possible. The parity encoding and decoding are faster than for any other candidate coding we are aware of. We present our scheme and its performance analysis, including experiments with a prototype implementation on Wintel PCs. The capabilities of LH*RS offer new perspectives for data-intensive applications, including the emerging ones of grids and of P2P computing.