Results 1–10 of 187
Regular and Irregular Progressive Edge-Growth Tanner Graphs
 IEEE TRANS. INFORM. THEORY, 2003
Abstract

Cited by 193 (0 self)
We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdős-Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the design approach of "density evolution" for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
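The greedy edge-placement rule at the heart of the PEG idea can be sketched as follows. This is an illustrative reimplementation under stated assumptions, not the authors' code: the function name `peg_construct` and the exact tie-breaking (farthest check node, then lowest current degree) are assumptions.

```python
def peg_construct(n_sym, n_chk, degrees):
    """Sketch of progressive edge-growth: for each symbol node, add edges
    one at a time to the check node farthest away in the current graph
    (maximizing the local girth), breaking ties by lowest check degree."""
    sym_adj = [set() for _ in range(n_sym)]
    chk_adj = [set() for _ in range(n_chk)]
    INF = float("inf")

    def check_distances(s):
        # Level-by-level BFS over the bipartite graph from symbol node s;
        # returns the distance to each check node (INF if unreachable).
        dist_s, dist_c = {s: 0}, {}
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt_c = [c for v in frontier for c in sym_adj[v] if c not in dist_c]
            for c in nxt_c:
                dist_c[c] = d
            d += 1
            frontier = [v for c in nxt_c for v in chk_adj[c] if v not in dist_s]
            for v in frontier:
                dist_s[v] = d
        return [dist_c.get(c, INF) for c in range(n_chk)]

    for s in range(n_sym):
        for k in range(degrees[s]):
            # First edge: every check is "infinitely far", so the key below
            # reduces to picking the lowest-degree check node.
            dist = [INF] * n_chk if k == 0 else check_distances(s)
            best = max((c for c in range(n_chk) if c not in sym_adj[s]),
                       key=lambda c: (dist[c], -len(chk_adj[c])))
            sym_adj[s].add(best)
            chk_adj[best].add(s)
    return sym_adj
```

For example, `peg_construct(6, 3, [2] * 6)` builds a small Tanner graph in which every symbol node receives its two requested edges on distinct check nodes.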
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
 IEEE TRANS. INFORM. THEORY, 2005
Abstract

Cited by 116 (17 self)
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding …
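A finite M-cover of a Tanner graph can be built by lifting each 1-entry of the parity-check matrix to an M × M permutation block. The sketch below is a generic illustration, not the paper's construction; the function name and the choice of random permutations are assumptions. It makes visible why a locally operating decoder cannot tell the two graphs apart: every lifted node has the same local neighborhood as its base-graph image.

```python
import random

def lift_tanner_graph(H, M, seed=0):
    """Sketch: build an M-cover of the Tanner graph of parity-check matrix H.
    Each 1-entry of H becomes a random M x M permutation matrix, each 0-entry
    the M x M zero block, so node degrees and local neighborhoods are
    preserved exactly."""
    rng = random.Random(seed)
    m, n = len(H), len(H[0])
    H_lift = [[0] * (M * n) for _ in range(M * m)]
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                perm = list(range(M))
                rng.shuffle(perm)  # one permutation per base edge
                for k in range(M):
                    H_lift[M * i + k][M * j + perm[k]] = 1
    return H_lift
```

Lifting a 2 × 3 base matrix with M = 3 yields a 6 × 9 matrix whose row and column weights match those of the base matrix.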
Reduced-Complexity Decoding of LDPC Codes
 2005
Abstract

Cited by 102 (2 self)
Various log-likelihood-ratio-based belief-propagation (LLR-BP) decoding algorithms and their reduced-complexity derivatives for low-density parity-check (LDPC) codes are presented. Numerically accurate representations of the check-node update computation used in LLR-BP decoding are described. Furthermore, approximate representations of the decoding computations are shown to achieve a reduction in complexity by simplifying the check-node update, or symbol-node update, or both. In particular, two main approaches for simplified check-node updates are presented that are based on the so-called min-sum approximation coupled with either a normalization term or an additive offset term. Density evolution is used to analyze the performance of these decoding algorithms, to determine the optimum values of the key parameters, and to evaluate finite quantization effects. Simulation results show that these reduced-complexity decoding algorithms for LDPC codes achieve a performance very close to that of the BP algorithm. The unified treatment of decoding techniques for LDPC codes presented here provides flexibility in selecting the appropriate scheme from performance, latency, computational-complexity, and memory-requirement perspectives.
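The two simplified check-node updates named in the abstract (min-sum with a normalization term, and min-sum with an additive offset) can be sketched in one function. The parameter names `alpha` and `beta` and their default values are illustrative assumptions, not values from the paper.

```python
def check_node_update_minsum(llrs, alpha=1.0, beta=0.0):
    """Sketch of the reduced-complexity check-node update: min-sum with a
    multiplicative normalization factor alpha and an additive offset beta
    (alpha=1, beta=0 gives plain min-sum). `llrs` holds the incoming LLRs
    from the neighboring symbol nodes; one outgoing LLR is returned per edge."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]   # extrinsic rule: exclude edge i
        sign = 1
        for v in others:
            if v < 0:
                sign = -sign
        mag = min(abs(v) for v in others)
        # normalized min-sum scales the magnitude; offset min-sum subtracts
        # beta and clamps at zero so the correction never flips the sign
        mag = max(alpha * mag - beta, 0.0)
        out.append(sign * mag)
    return out
```

With `alpha=1.0, beta=0.0` the function reproduces plain min-sum: each outgoing magnitude is the minimum of the other incoming magnitudes, with the product of their signs.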
Accumulate-repeat-accumulate codes
 Global Telecommunications Conference (GLOBECOM ’04), IEEE, 2004
Abstract

Cited by 30 (2 self)
In this paper, we propose an innovative channel coding scheme called accumulate-repeat-accumulate (ARA) codes. This class of codes can be viewed as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and they have a projected graph or protograph representation; this allows for high-speed iterative decoding implementation using belief propagation. An ARA code can be viewed as a precoded repeat-accumulate (RA) code with puncturing or as a precoded irregular repeat-accumulate (IRA) code, where simply an accumulator is chosen as the precoder. The amount of performance improvement due to the precoder will be called the precoding gain. Using density evolution on their associated protographs, we find some rate-1/2 ARA codes with a maximum variable node degree of 5 for which a minimum bit SNR as low as 0.08 dB from the channel capacity threshold is achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA, IRA, or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the inner accumulator, we can construct families of higher-rate ARA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results are provided and compared with turbo codes. In addition to iterative decoding analysis, we analyzed the performance of ARA codes with maximum-likelihood (ML) decoding. By obtaining the weight distribution of these codes and through the existing tightest bounds, we have shown that the ML SNR threshold of ARA codes also approaches very closely that of random codes. These codes have better interleaving gain than turbo codes. Index Terms—Error bounds, graphs, low-density parity-check (LDPC) codes, thresholds, turbo-like codes, weight distribution.
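The accumulate-repeat-accumulate chain itself is simple to sketch. The toy below is an illustration of the three named blocks only; real ARA constructions also interleave between stages and puncture the output, which this sketch omits.

```python
def accumulate(bits):
    """Rate-1 'accumulate' block: running mod-2 sum of the input bits
    (the 1/(1+D) convolutional code used twice in an ARA encoder)."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def repeat(bits, q):
    """'Repeat' block: each bit repeated q times."""
    return [b for b in bits for _ in range(q)]

def ara_encode_sketch(info_bits, q=3):
    # Toy accumulate-repeat-accumulate chain; the first accumulator is the
    # precoder whose benefit the abstract calls the precoding gain.
    return accumulate(repeat(accumulate(info_bits), q))
```

For instance, `accumulate([1, 0, 1, 1])` yields `[1, 1, 0, 1]`: each output bit is the parity of all input bits so far.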
Explicit construction of families of LDPC codes of girth at least six
 Proc. 40th Allerton Conf. on Communication, Control and Computing
Abstract

Cited by 29 (0 self)
LDPC codes are serious contenders to turbo codes in terms of decoding performance. One of the main problems is to give an explicit construction of such codes whose Tanner graphs have known girth. For a prime power q and m ≥ 2, Lazebnik and Ustimenko construct a q-regular bipartite graph D(m, q) on 2qᵐ vertices, which has girth at least 2⌈m/2⌉ + 4. We regard these graphs as Tanner graphs of binary codes LU(m, q). We can determine the dimension and minimum weight of LU(2, q), and show that the weight of its minimum stopping set is at least q + 2 for q odd and exactly q + 2 for q even. We know that D(2, q) has girth 6 and diameter 4, whereas D(3, q) has girth 8 and diameter 6. We prove that for an odd prime p, LU(3, p) is a [p³, k] code with k ≥ (p³ − 2p² + 3p − 2)/2. We show that the minimum weight and the weight of the minimum stopping set of LU(3, q) are at least 2q, and that they are exactly 2q for many LU(3, q) codes. We find some interesting LDPC codes by our partial row construction.
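Since the point of constructions like LU(m, q) is a guaranteed girth, a small utility for checking the girth of a Tanner graph directly from its parity-check matrix is handy. This is a generic BFS-based sketch, not code from the paper; for each non-tree edge (v, w) found during a BFS, dist[v] + dist[w] + 1 bounds a cycle length, and the minimum over all start nodes is the exact girth.

```python
from collections import deque

def tanner_girth(H):
    """Girth (shortest cycle length) of the Tanner graph of parity-check
    matrix H, or None if the graph is acyclic. Bipartite, so always even."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(n + m)]   # symbols 0..n-1, checks n..n+m-1
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    best = None
    for src in range(n + m):
        dist, parent = {src: 0}, {src: -1}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:          # tree edge: extend the BFS
                    dist[w] = dist[v] + 1
                    parent[w] = v
                    q.append(w)
                elif parent[v] != w:       # non-tree edge: closes a cycle
                    cycle = dist[v] + dist[w] + 1
                    if best is None or cycle < best:
                        best = cycle
    return best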
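Since the point of constructions like LU(m, q) is a guaranteed girth, a small utility for checking the girth of a Tanner graph directly from its parity-check matrix is handy. This is a generic BFS-based sketch, not code from the paper: for each non-tree edge (v, w) met during a BFS, dist[v] + dist[w] + 1 bounds a cycle length, and the minimum over all start nodes is the exact girth.

```python
from collections import deque

def tanner_girth(H):
    """Girth (shortest cycle length) of the Tanner graph of parity-check
    matrix H, or None if the graph is acyclic. Bipartite, so always even."""
    m, n = len(H), len(H[0])
    adj = [[] for _ in range(n + m)]   # symbols 0..n-1, checks n..n+m-1
    for i in range(m):
        for j in range(n):
            if H[i][j]:
                adj[j].append(n + i)
                adj[n + i].append(j)
    best = None
    for src in range(n + m):
        dist, parent = {src: 0}, {src: -1}
        q = deque([src])
        while q:
            v = q.popleft()
            for w in adj[v]:
                if w not in dist:          # tree edge: extend the BFS
                    dist[w] = dist[v] + 1
                    parent[w] = v
                    q.append(w)
                elif parent[v] != w:       # non-tree edge: closes a cycle
                    cycle = dist[v] + dist[w] + 1
                    if best is None or cycle < best:
                        best = cycle
    return best
```

A single 4-cycle (H = [[1,1],[1,1]]) gives girth 4, and three checks arranged in a ring of three symbols give girth 6, matching the D(2, q) girth stated above.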
Overlapped message passing for quasi-cyclic low-density parity-check codes
 IEEE Trans. Circuits Syst. I, Reg. Papers, 2004
Abstract

Cited by 29 (0 self)
In this paper, a systematic approach is proposed to develop a high-throughput decoder for quasi-cyclic low-density parity-check (LDPC) codes, whose parity-check matrix is constructed from circularly shifted identity matrices. Based on the properties of quasi-cyclic LDPC codes, the two stages of the belief propagation decoding algorithm, namely the check node update and the variable node update, can be overlapped, thus reducing the overall decoding latency. To avoid memory access conflicts, the maximum concurrency of the two stages is explored by a novel scheduling algorithm. Consequently, the decoding throughput can be roughly doubled, assuming dual-port memory is available. Index Terms—High throughput, low-density parity-check (LDPC) codes, overlapped message passing (MP), quasi-cyclic codes.
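A quasi-cyclic parity-check matrix of the kind this decoder targets can be expanded from a small base matrix of circulant shift values. The sketch below uses the common convention that a negative entry marks an all-zero block; the function name and that convention are assumptions, not the paper's notation.

```python
def qc_ldpc_matrix(shifts, p):
    """Sketch: expand a base matrix of shift values into a quasi-cyclic
    parity-check matrix. Entry s >= 0 becomes the p x p identity circularly
    shifted right by s columns; a negative entry becomes the zero block."""
    rows, cols = len(shifts) * p, len(shifts[0]) * p
    H = [[0] * cols for _ in range(rows)]
    for bi, row in enumerate(shifts):
        for bj, s in enumerate(row):
            if s >= 0:
                for k in range(p):
                    H[bi * p + k][bj * p + (k + s) % p] = 1
    return H
```

The regular block structure is what makes the overlapped scheduling possible: check-node and variable-node updates touch memory in predictable circulant strides, so conflicts between the two overlapped stages can be analyzed block by block.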
On the computation of the minimum distance of low-density parity-check codes
 IEEE International Conference on Communications, 2004
Abstract

Cited by 28 (2 self)
Low-density parity-check (LDPC) codes in their broader-sense definition are linear codes whose parity-check matrices have fewer 1s than 0s. Finding their minimum distance is therefore in general an NP-hard problem; in other words, there exists no known polynomial deterministic algorithm to compute the minimum distance of a particular, nontrivial LDPC code. We propose a randomized algorithm called the approximately nearest codewords (ANC) searching approach to attack this hard problem for iteratively decodable LDPC codes. The principle of the ANC searching approach is to search codewords locally around the all-zero codeword perturbed by a minimum level of noise, anticipating that the resultant nearest nonzero codewords will most likely contain the minimum-Hamming-weight codeword, whose Hamming weight is equal to the minimum distance of the linear code. The effectiveness of the algorithm is demonstrated by numerous examples. Index Terms—minimum distance, LDPC codes, algorithm, NP-hardness
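The ANC search loop can be sketched generically with the decoder supplied as a callable. Everything below is an illustrative assumption, not the paper's implementation: the function names, the BPSK/AWGN LLR model, and the parameter defaults; the paper uses an iterative LDPC decoder where this sketch accepts any `decode` function.

```python
import random

def anc_min_distance(H, decode, trials=500, sigma=2.0, seed=1):
    """Sketch of the approximately-nearest-codewords (ANC) search: transmit
    the all-zero codeword with additive Gaussian noise, run the supplied
    decoder on the channel LLRs, and record the Hamming weight of every
    nonzero codeword it converges to. `decode` maps a list of LLRs to a
    candidate codeword (list of bits) or None; the smallest weight seen is
    an upper bound on (and in practice an estimate of) the minimum distance."""
    rng = random.Random(seed)
    n = len(H[0])
    best = None
    for _ in range(trials):
        # all-zero codeword as BPSK (+1) plus noise, mapped to channel LLRs
        llrs = [2.0 * (1.0 + rng.gauss(0.0, sigma)) / sigma ** 2 for _ in range(n)]
        cw = decode(llrs)
        if cw is not None and any(cw):
            w = sum(cw)
            if best is None or w < best:
                best = w
    return best
```

On a toy code whose only codewords are 000 and 111 (minimum distance 3), plugging in a brute-force ML decoder recovers the true minimum distance after a few hundred noisy trials.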
A coding theorem for lossy data compression by LDPC codes
 IEEE Trans. Info. Theory, 2003
Cited by 27 (0 self)
Design of LDPC Decoders for Improved Low Error Rate Performance: Quantization and Algorithm Choices
 2009
Abstract

Cited by 26 (10 self)
Many classes of high-performance low-density parity-check (LDPC) codes are based on parity-check matrices composed of permutation submatrices. We describe the design of a parallel-serial decoder architecture that can be used to map any LDPC code with such a structure to a hardware emulation platform. High-throughput emulation allows for the exploration of the low bit-error-rate (BER) region and provides statistics of the error traces, which illuminate the causes of the error floors of the (2048, 1723) Reed-Solomon-based LDPC (RS-LDPC) code and the (2209, 1978) array-based LDPC code. Two classes of error events are observed: oscillatory behavior and convergence to a class of non-codewords, termed absorbing sets. The influence of absorbing sets can be exacerbated by message quantization and decoder implementation. In particular, quantization and the log-tanh function approximation in sum-product decoders …
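As a minimal illustration of the message quantization the abstract refers to, here is a generic uniform saturating LLR quantizer. The bit width, step size, and function name are assumptions for the sketch, not the paper's design; the point is only that saturation clips large-magnitude messages, which is one way implementation choices can interact with absorbing-set behavior.

```python
def quantize_llr(llr, n_bits=4, step=0.5):
    """Sketch: uniform saturating quantizer for LLR messages, as used in
    fixed-point LDPC decoder implementations. n_bits total (symmetric range),
    so the representable magnitudes are 0, step, ..., (2**(n_bits-1) - 1)*step."""
    levels = 2 ** (n_bits - 1) - 1          # e.g. 7 levels per sign for 4 bits
    q = round(llr / step)                   # round to the nearest level
    return step * max(-levels, min(levels, q))
```

With the defaults, any LLR beyond ±3.5 saturates: the quantizer cannot express stronger confidence, which is exactly the kind of effect explored on the emulation platform.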