Results 1 - 10 of 307
Design of capacity-approaching irregular low-density parity-check codes
- IEEE TRANS. INFORM. THEORY, 2001
"... We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the unde ..."
Abstract
-
Cited by 588 (6 self)
We design low-density parity-check (LDPC) codes that perform at rates extremely close to the Shannon capacity. The codes are built from highly irregular bipartite graphs with carefully chosen degree patterns on both sides. Our theoretical analysis of the codes is based on [1]. Assuming that the underlying communication channel is symmetric, we prove that the probability densities at the message nodes of the graph possess a certain symmetry. Using this symmetry property we then show that, under the assumption of no cycles, the message densities always converge as the number of iterations tends to infinity. Furthermore, we prove a stability condition which implies an upper bound on the fraction of errors that a belief-propagation decoder can correct when applied to a code induced from a bipartite graph with a given degree distribution. Our codes are found by optimizing the degree structure of the underlying graphs. We develop several strategies to perform this optimization. We also present some simulation results for the codes found which show that the performance of the codes is very close to the asymptotic theoretical bounds.
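As a rough illustration of the degree-distribution formalism used in this line of work (not code from the paper), the short Python sketch below computes the design rate of an irregular ensemble from edge-perspective degree distributions lambda(x) and rho(x), using the standard relation rate = 1 - (integral of rho)/(integral of lambda); the function name and the example ensemble are illustrative assumptions.

# Sketch: design rate of an irregular LDPC ensemble from its edge-perspective
# degree distributions lambda(x) = sum_i lambda_i x^(i-1), rho(x) = sum_i rho_i x^(i-1).
def design_rate(lambda_coeffs, rho_coeffs):
    # lambda_coeffs[k] (resp. rho_coeffs[k]) is the fraction of edges attached
    # to variable (resp. check) nodes of degree k+1.
    int_lambda = sum(c / (k + 1) for k, c in enumerate(lambda_coeffs))
    int_rho = sum(c / (k + 1) for k, c in enumerate(rho_coeffs))
    return 1.0 - int_rho / int_lambda

# Example: the (3,6)-regular ensemble, lambda(x) = x^2 and rho(x) = x^5, has rate 1/2.
print(design_rate([0, 0, 1.0], [0, 0, 0, 0, 0, 1.0]))  # -> 0.5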
Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation
- IEEE TRANS. INFORM. THEORY, 2001
"... Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under messagepassing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densi ..."
Abstract
-
Cited by 244 (2 self)
Density evolution is an algorithm for computing the capacity of low-density parity-check (LDPC) codes under message-passing decoding. For memoryless binary-input continuous-output additive white Gaussian noise (AWGN) channels and sum-product decoders, we use a Gaussian approximation for message densities under density evolution to simplify the analysis of the decoding algorithm. We convert the infinite-dimensional problem of iteratively calculating message densities, which is needed to find the exact threshold, to a one-dimensional problem of updating means of Gaussian densities. This simplification not only allows us to calculate the threshold quickly and to understand the behavior of the decoder better, but also makes it easier to design good irregular LDPC codes for AWGN channels. For various regular LDPC codes we have examined, thresholds can be estimated within 0.1 dB of the exact value. For rates between 0.5 and 0.9, codes designed using the Gaussian approximation perform within 0.02 dB of the best performing codes found so far by using density evolution when the maximum variable degree is 10. We show that by using the Gaussian approximation, we can visualize the sum-product decoding algorithm. We also show that the optimization of degree distributions can be understood and done graphically using the visualization.
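A minimal numerical sketch of the one-dimensional mean recursion described above, written for a (dv, dc)-regular ensemble on the binary-input AWGN channel (function names, the integration grid, and the divergence test are assumptions of this sketch, not the paper's code): the mean of the check-to-variable message is tracked through the function phi(x) = 1 - E[tanh(u/2)] with u ~ N(x, 2x).

import numpy as np

def phi(x):
    # phi(x) = 1 - E[tanh(u/2)] with u ~ Normal(mean=x, variance=2x); phi(0) = 1.
    if x <= 0:
        return 1.0
    u = np.linspace(x - 10 * np.sqrt(2 * x), x + 10 * np.sqrt(2 * x), 4001)
    pdf = np.exp(-(u - x) ** 2 / (4 * x)) / np.sqrt(4 * np.pi * x)
    return 1.0 - np.trapz(np.tanh(u / 2) * pdf, u)

def phi_inv(y, lo=1e-9, hi=1e4):
    # phi is monotonically decreasing, so invert it by bisection.
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

def ga_mean_evolution(dv, dc, sigma, iters=100, target=1e3):
    # Track the mean of the (assumed Gaussian, variance = 2*mean) check-to-variable
    # message; divergence of the mean is taken as successful decoding.
    m_u0 = 2.0 / sigma ** 2                  # mean channel LLR on the BI-AWGN channel
    m_u = 0.0
    for _ in range(iters):
        m_v = m_u0 + (dv - 1) * m_u          # variable-node update: means add
        m_u = phi_inv(1.0 - (1.0 - phi(m_v)) ** (dc - 1))  # check-node update
        if m_u > target:
            return True
    return False

# Example: scan the noise level for the (3,6)-regular ensemble.
# print(ga_mean_evolution(3, 6, sigma=0.85), ga_mean_evolution(3, 6, sigma=0.95))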
Distributed source coding for sensor networks
- In IEEE Signal Processing Magazine, 2004
"... n recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf pre-vious milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that con-sist of many tiny, low- ..."
Abstract
-
Cited by 224 (4 self)
In recent years, sensor research has been undergoing a quiet revolution, promising to have a significant impact throughout society that could quite possibly dwarf previous milestones in the information revolution. MIT Technology Review ranked wireless sensor networks that consist of many tiny, low-power and cheap wireless sensors as the number one emerging technology. Unlike PCs or the Internet, which are designed to support all types of applications, sensor networks are usually mission driven and application specific (be it detection of biological agents and toxic chemicals; environmental measurement of temperature, pressure and vibration; or real-time area video surveillance). Thus they must operate under a set of unique constraints and requirements. For example, in contrast to many other wireless devices (e.g., cellular phones, PDAs, and laptops), in which energy can be recharged from time to time, the energy provisioned for a wireless sensor node is not expected to be renewed throughout its mission. The limited amount of energy available to wireless sensors has a significant impact on all aspects of a wireless sensor network, from the amount of information that the node can process, to the volume of wireless communication it can carry across large distances. Realizing the great promise of sensor networks requires more than a mere advance in individual technologies; it relies on many components working together in an efficient, unattended, comprehensible, and trustworthy manner. One of the enabling technologies for sensor networks is distributed source coding (DSC), which refers to the compression of multiple correlated sensor outputs [1]–[4] that do not communicate with each other (hence distributed coding). These sensors send their compressed outputs to a central point [e.g., the base station (BS)] for joint decoding.
Compression of binary sources with side information using low-density parity-check codes
- In Proc. Global Telecommunications Conf., 2002
"... Abstract—We show how low-density parity-check (LDPC) codes can be used to compress close to the Slepian–Wolf limit for cor-related binary sources. Focusing on the asymmetric case of com-pression of an equiprobable memoryless binary source with side information at the decoder, the approach is based o ..."
Abstract
-
Cited by 209 (6 self)
We show how low-density parity-check (LDPC) codes can be used to compress close to the Slepian–Wolf limit for correlated binary sources. Focusing on the asymmetric case of compression of an equiprobable memoryless binary source with side information at the decoder, the approach is based on viewing the correlation as a channel and applying the syndrome concept. The encoding and decoding procedures are explained in detail. The performance achieved is seen to be better than recently published results using turbo codes and very close to the Slepian–Wolf limit. Index Terms—Channel coding, distributed source coding, LDPC codes, Slepian–Wolf theorem.
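A toy Python sketch of the syndrome idea described above, assuming a small (7,4) Hamming code and a correlation channel that flips at most one bit; the paper uses long LDPC codes and belief-propagation decoding, whereas the brute-force coset search here is only for readability.

import itertools
import numpy as np

# Toy parity-check matrix of the (7,4) Hamming code (illustrative, not the paper's LDPC codes).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=int)

def encode_syndrome(x):
    # The "compressed" description of x is its syndrome s = H x (mod 2): 3 bits instead of 7.
    return H @ x % 2

def decode_with_side_info(s, y):
    # Within the coset of syndrome s, pick the sequence closest to the side information y.
    best, best_dist = None, None
    for bits in itertools.product([0, 1], repeat=H.shape[1]):
        x = np.array(bits)
        if np.array_equal(H @ x % 2, s):
            d = int(np.sum(x != y))
            if best is None or d < best_dist:
                best, best_dist = x, d
    return best

# Example: x and y differ in at most one position (the "correlation channel").
x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy(); y[4] ^= 1              # side information available at the decoder
s = encode_syndrome(x)               # 3 transmitted bits
print(decode_with_side_info(s, y))   # recovers x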
Regular and Irregular Progressive Edge-Growth Tanner Graphs
- IEEE TRANS. INFORM. THEORY, 2003
"... We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the mi ..."
Abstract
-
Cited by 193 (0 self)
We propose a general method for constructing Tanner graphs having a large girth by progressively establishing edges or connections between symbol and check nodes in an edge-by-edge manner, called progressive edge-growth (PEG) construction. Lower bounds on the girth of PEG Tanner graphs and on the minimum distance of the resulting low-density parity-check (LDPC) codes are derived in terms of parameters of the graphs. The PEG construction attains essentially the same girth as Gallager's explicit construction for regular graphs, both of which meet or exceed the Erdős–Sachs bound. Asymptotic analysis of a relaxed version of the PEG construction is presented. We describe an empirical approach using a variant of the "downhill simplex" search algorithm to design irregular PEG graphs for short codes with fewer than a thousand bits, complementing the design approach of "density evolution" for larger codes. Encoding of LDPC codes based on the PEG construction is also investigated. We show how to exploit the PEG principle to obtain LDPC codes that allow linear-time encoding. We also investigate regular and irregular LDPC codes using PEG Tanner graphs but allowing the symbol nodes to take values over GF(q), q > 2. Analysis and simulation demonstrate that one can obtain better performance with increasing field size, which contrasts with previous observations.
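The core edge-placement loop of a PEG-style construction can be sketched in a few lines of Python; this is a simplified reading of the procedure in the abstract, and the tie-breaking rule, node ordering, and stopping rule below are assumptions of the sketch.

import random

def peg_construct(n_sym, n_chk, sym_degrees, seed=0):
    # Greedy progressive edge growth: add edges symbol node by symbol node,
    # each time connecting to a lowest-degree check node that is as far as
    # possible from the symbol node in the graph built so far.
    rng = random.Random(seed)
    sym_nbrs = [set() for _ in range(n_sym)]
    chk_nbrs = [set() for _ in range(n_chk)]
    for i in range(n_sym):
        for k in range(sym_degrees[i]):
            if k == 0:
                candidates = set(range(n_chk))        # first edge: any check node
            else:
                reached = set(sym_nbrs[i])            # BFS over check nodes, two hops at a time
                frontier = set(sym_nbrs[i])
                while True:
                    nxt = set()
                    for c in frontier:
                        for s in chk_nbrs[c]:
                            nxt |= sym_nbrs[s]
                    nxt -= reached
                    if not nxt:
                        break
                    reached |= nxt
                    frontier = nxt
                    if len(reached) == n_chk:
                        break
                outside = set(range(n_chk)) - reached
                candidates = outside if outside else frontier  # unreachable checks first, else deepest level
            d_min = min(len(chk_nbrs[c]) for c in candidates)
            choice = rng.choice(sorted(c for c in candidates if len(chk_nbrs[c]) == d_min))
            sym_nbrs[i].add(choice)
            chk_nbrs[choice].add(i)
    return sym_nbrs, chk_nbrs

# Example: a small (3,6)-like graph with 12 symbol nodes and 6 check nodes.
# sym_nbrs, chk_nbrs = peg_construct(12, 6, [3] * 12)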
Using linear programming to decode binary linear codes
- IEEE TRANS. INFORM. THEORY, 2005
"... A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor grap ..."
Abstract
-
Cited by 183 (9 self)
A new method is given for performing approximate maximum-likelihood (ML) decoding of an arbitrary binary linear code based on observations received from any discrete memoryless symmetric channel. The decoding algorithm is based on a linear programming (LP) relaxation that is defined by a factor graph or parity-check representation of the code. The resulting “LP decoder” generalizes our previous work on turbo-like codes. A precise combinatorial characterization of when the LP decoder succeeds is provided, based on pseudocodewords associated with the factor graph. Our definition of a pseudocodeword unifies other such notions known for iterative algorithms, including “stopping sets,” “irreducible closed walks,” “trellis cycles,” “deviation sets,” and “graph covers.” The fractional distance d_frac of a code is introduced, which is a lower bound on the classical distance. It is shown that the efficient LP decoder will correct up to ⌈d_frac/2⌉ − 1 errors and that there are codes with d_frac = Ω(n^(1−ε)). An efficient algorithm to compute the fractional distance is presented. Experimental evidence shows a similar performance on low-density parity-check (LDPC) codes between LP decoding and the min-sum and sum-product algorithms. Methods for tightening the LP relaxation to improve performance are also provided.
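For a small code, the LP relaxation described above can be written down directly with scipy; this is a sketch under the usual forbidden-set formulation (for every check j and every odd-size subset S of its neighborhood N(j), require sum over S of x_i minus sum over N(j)\S of x_i ≤ |S| − 1). The (7,4) Hamming code and the LLR values in the example are illustrative, and the subset enumeration is only practical for low-degree checks.

import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    # LP decoding: minimize sum_i llr[i] * x[i] over the relaxed codeword polytope.
    m, n = H.shape
    A_ub, b_ub = [], []
    for j in range(m):
        nbr = np.flatnonzero(H[j])
        for size in range(1, len(nbr) + 1, 2):           # odd-sized subsets S of N(j)
            for S in itertools.combinations(nbr, size):
                row = np.zeros(n)
                row[list(nbr)] = -1.0
                row[list(S)] = 1.0
                A_ub.append(row)
                b_ub.append(len(S) - 1)
    res = linprog(c=llr, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, 1.0)] * n, method="highs")
    return res.x

# Example: (7,4) Hamming code, all-zero codeword sent, one bit flipped;
# LLRs are +1 for a received 0 and -1 for a received 1.
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
received = np.array([0, 0, 0, 0, 1, 0, 0])
llr = np.where(received == 0, 1.0, -1.0)
print(np.round(lp_decode(H, llr)))   # integral optimum -> decoded codeword 0000000

When the optimum is integral it is the ML codeword; fractional optima correspond to the pseudocodewords discussed in the abstract.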
Decoding Error-Correcting Codes via Linear Programming, 2003
"... Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up ..."
Abstract
-
Cited by 116 (5 self)
Error-correcting codes are fundamental tools used to transmit digital information over unreliable channels. Their study goes back to the work of Hamming [Ham50] and Shannon [Sha48], who used them as the basis for the field of information theory. The problem of decoding the original information up to the full error-correcting potential of the system is often very complex, especially for modern codes that approach the theoretical limits of the communication channel. In this thesis we investigate the application of linear programming (LP) relaxation to the problem of decoding an error-correcting code. Linear programming relaxation is a standard technique in approximation algorithms and operations research, and is central to the study of efficient algorithms to find good (albeit suboptimal) solutions to very difficult optimization problems. Our new “LP decoders” have tight combinatorial characterizations of decoding success that can be used to analyze error-correcting performance. Furthermore, LP decoders have the desirable (and rare) property that whenever they output a result, it is guaranteed to be the optimal result: the most likely (ML) information sent over the channel.
High-Throughput LDPC Decoders
- IEEE Trans. on Very Large Scale Integration Systems, 2003
"... Abstract—A high-throughput memory-efficient decoder architecture for low-density parity-check (LDPC) codes is proposed based on a novel turbo decoding algorithm. The architecture benefits from various optimizations performed at three levels of abstraction in system design—namely LDPC code design, de ..."
Abstract
-
Cited by 107 (1 self)
A high-throughput memory-efficient decoder architecture for low-density parity-check (LDPC) codes is proposed based on a novel turbo decoding algorithm. The architecture benefits from various optimizations performed at three levels of abstraction in system design—namely LDPC code design, decoding algorithm, and decoder architecture. First, the interconnect complexity problem of current decoder implementations is mitigated by designing architecture-aware LDPC codes having embedded structural regularity features that result in a regular and scalable message-transport network with reduced control overhead. Second, the memory overhead problem in current-day decoders is reduced by more than 75% by employing a new turbo decoding algorithm for LDPC codes that removes the multiple check-to-bit message update bottleneck of the current algorithm. A new merged-schedule merge-passing algorithm is also proposed that reduces the memory overhead of the current algorithm for low- to moderate-throughput decoders. Moreover, a parallel soft-input–soft-output (SISO) message update mechanism is proposed that implements the recursions of the Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm in terms of simple “max-quartet” operations that do not require lookup tables and incur negligible loss in performance compared to the ideal case. Finally, an efficient programmable architecture coupled with a scalable and dynamic transport network for storing and routing messages is proposed, and a full-decoder architecture is presented. Simulations demonstrate that the proposed architecture attains a throughput of 1.92 Gb/s for a frame length of 2304 bits, and achieves savings of 89.13% and 69.83% in power consumption and silicon area over state-of-the-art designs, with a reduction of 60.5% in interconnect length. Index Terms—Low-density parity-check (LDPC) codes, Ramanujan graphs, soft-input soft-output (SISO) decoder, turbo decoding algorithm, VLSI decoder architectures.
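The abstract's point about lookup-table-free SISO updates can be illustrated with the standard Jacobian logarithm used in log-domain BCJR recursions; this is a generic example, not a reproduction of the paper's specific "max-quartet" operation.

import math

def max_star(a, b):
    # Exact Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|).
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    # Table-free approximation used in max-log style decoders; it drops the
    # correction term at the cost of a small performance penalty.
    return max(a, b)

# Example: combining two branch metrics in a log-domain recursion.
print(max_star(1.2, 0.7), max_star_approx(1.2, 0.7))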
Iterative turbo decoder analysis based on density evolution
- IEEE J. Select. Areas Commun., 2001
"... We track the density of extrinsic information in iterative turbo decoders by actual ..."
Abstract
-
Cited by 99 (3 self)
We track the density of extrinsic information in iterative turbo decoders by actual density evolution …
LDPC block and convolutional codes based on circulant matrices
- IEEE TRANS. INFORM. THEORY, 2004
"... A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices comprised of blocks of circulant matrices. The sparse parity-check representation allows for prac ..."
Abstract
-
Cited by 93 (8 self)
A class of algebraically structured quasi-cyclic (QC) low-density parity-check (LDPC) codes and their convolutional counterparts is presented. The QC codes are described by sparse parity-check matrices comprised of blocks of circulant matrices. The sparse parity-check representation allows for practical graph-based iterative message-passing decoding. Based on the algebraic structure, bounds on the girth and minimum distance of the codes are found, and several possible encoding techniques are described. The performance of the QC LDPC block codes compares favorably with that of randomly constructed LDPC codes for short to moderate block lengths. The performance of the LDPC convolutional codes is superior to that of the QC codes on which they are based; this performance is the limiting performance obtained by increasing the circulant size of the base QC code. Finally, a continuous decoding procedure for the LDPC convolutional codes is described.
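The block-circulant structure described above is easy to visualize with a small sketch that expands an exponent matrix into a parity-check matrix of circulant permutation blocks; the shift values and dimensions in the example are placeholders, not a code from the paper.

import numpy as np

def qc_ldpc_parity_check(shifts, p):
    # Expand an exponent matrix into a QC-LDPC parity-check matrix: shifts[j][i]
    # is the cyclic shift of the p x p identity used for block (j, i); a negative
    # entry denotes an all-zero block.
    J, L = len(shifts), len(shifts[0])
    H = np.zeros((J * p, L * p), dtype=int)
    for j in range(J):
        for i in range(L):
            s = shifts[j][i]
            if s >= 0:
                block = np.roll(np.eye(p, dtype=int), s, axis=1)  # circulant permutation
                H[j * p:(j + 1) * p, i * p:(i + 1) * p] = block
    return H

# Example: a 2 x 4 exponent matrix with circulant size p = 5 (placeholder shifts).
H = qc_ldpc_parity_check([[0, 1, 2, 3], [0, 2, 4, 1]], p=5)
print(H.shape)  # (10, 20)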