Results 1–10 of 19
Nested Linear/Lattice Codes for Structured Multiterminal Binning
, 2002
Cited by 352 (15 self)
Network information theory promises high gains over simple point-to-point communication techniques, at the cost of higher complexity. However, the lack of structured coding schemes has so far limited the practical application of these concepts. One of the basic elements of a network code is the binning scheme. Wyner and other researchers proposed various forms of coset codes for efficient binning, yet these schemes were applicable only to lossless source (or noiseless channel) network coding. To extend the algebraic binning approach to lossy source (or noisy channel) network coding, recent work proposed the idea of nested codes, or more specifically, nested parity-check codes for the binary case and nested lattices in the continuous case. These ideas connect network information theory with the rich areas of linear codes and lattice codes, and have strong potential for practical applications. We review these recent developments and explore their tight relation to concepts such as combined shaping and precoding, coding for memories with defects, and digital watermarking. We also propose a few novel applications adhering to a unified approach.
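The coset binning described in this abstract can be sketched in a few lines. The example below is an assumed toy, not the paper's construction: it uses the standard (7,4) Hamming parity-check matrix and bins every binary word by its syndrome, so each bin is a coset of the code.

```python
import itertools

# Toy illustration of algebraic binning with a linear code (an assumed
# example): bins are the cosets of the (7,4) Hamming code, indexed by
# the syndrome s = H x mod 2.
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(x):
    """Bin index of word x: its syndrome under H (mod 2)."""
    return tuple(sum(h * xi for h, xi in zip(row, x)) % 2 for row in H)

# Partition all 2^7 binary words into bins by syndrome.
bins = {}
for x in itertools.product([0, 1], repeat=7):
    bins.setdefault(syndrome(x), []).append(x)

# 2^3 = 8 bins, each a coset of the code containing 2^4 = 16 words.
assert len(bins) == 8
assert all(len(b) == 16 for b in bins.values())
```

The all-zero syndrome bin is the code itself; the other bins are its shifts, which is what makes the bin structure "algebraic" rather than random.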
Averaging bounds for lattices and linear codes
 IEEE Trans. Information Theory
, 1997
Cited by 97 (1 self)
Abstract—General random coding theorems for lattices are derived from the Minkowski–Hlawka theorem, and their close relation to standard averaging arguments for linear codes over finite fields is pointed out. A new version of the Minkowski–Hlawka theorem itself is obtained as the limit, for p → ∞, of a simple lemma for linear codes over GF(p) used with p-level amplitude modulation. The relation between the combinatorial packing of solid bodies and the information-theoretic "soft packing" with arbitrarily small, but positive, overlap is illuminated. The "soft-packing" results are new. When specialized to the additive white Gaussian noise channel, they reduce to (a version of) the de Buda–Poltyrev result that spherically shaped lattice codes and a decoder that is unaware of the shaping can achieve the rate (1/2) log2(P/N).
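The linear-code/lattice connection this abstract exploits is usually phrased via Construction A: a code C over GF(p) lifts to the lattice of all integer vectors that reduce, mod p, to a codeword. A minimal membership-test sketch (the length-2 code and generator below are assumed toys, not from the paper):

```python
# Construction A sketch: a code C over GF(p) lifts to the lattice
# L = { x in Z^n : (x mod p) is a codeword of C }.
p = 5
# Toy length-2 code over GF(5): all scalar multiples of the generator (1, 2).
C = {tuple((k * g) % p for g in (1, 2)) for k in range(p)}

def in_lattice(x):
    """Integer vector x lies in the Construction A lattice iff x mod p is in C."""
    return tuple(xi % p for xi in x) in C

assert in_lattice((1, 2))      # a codeword itself
assert in_lattice((6, -3))     # (6, -3) mod 5 = (1, 2), a codeword
assert not in_lattice((1, 3))  # (1, 3) is not in C
```

Letting p grow, as the abstract describes, turns statements about such codes into statements about lattices.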
A Layered Lattice Coding Scheme for a Class of Three User Gaussian Interference Channels
 Allerton Conf. on Communication, Control, and Computing
, 2008
Cited by 45 (4 self)
Abstract—The paper studies a class of three-user Gaussian interference channels. A new layered lattice coding scheme is introduced as a transmission strategy. The use of lattice codes allows for an "alignment" of the interference observed at each receiver. The layered lattice coding is shown to achieve more than one degree of freedom for a class of interference channels, and also achieves rates which are better than the rates obtained using the Han–Kobayashi coding scheme.
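The key property behind such lattice "alignment" is closure under addition: codewords from the same lattice sum to another lattice point, so several interferers can be decoded as one effective interferer. A one-dimensional sketch with an assumed scaled integer lattice (not the paper's scheme):

```python
# Why lattices permit interference "alignment" (toy, one-dimensional sketch):
# the sum of two points of the same lattice is again a lattice point, so the
# receiver can treat the combined interference as a single lattice codeword.
def lattice_point(k, scale=0.5):
    """Point k of the scaled integer lattice scale * Z."""
    return scale * k

x1 = lattice_point(3)    # interferer 1's codeword
x2 = lattice_point(-7)   # interferer 2's codeword
combined = x1 + x2       # lands back on the lattice
assert combined == lattice_point(3 + (-7))
```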
Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice
 IEEE Trans. Inform. Theory
, 1986
Cited by 44 (3 self)
Abstract—Two kinds of algorithms are considered. 1) If C is a binary code of length n, a "soft decision" decoding algorithm for C changes an arbitrary point of R^n into a nearest codeword (nearest in Euclidean distance). 2) Similarly, a decoding algorithm for a lattice Λ in R^n changes an arbitrary point of R^n into a closest lattice point. Some general methods are given for constructing such algorithms, and are used to obtain new and faster decoding algorithms for the Gosset lattice E8, the Golay code and the Leech lattice.
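The flavor of such closest-lattice-point algorithms can be shown with the classical textbook procedure for the checkerboard lattice D_n (integer vectors with even coordinate sum). This is a standard routine in the spirit of the paper, not its E8/Golay/Leech decoders:

```python
def closest_point_Dn(y):
    """Nearest point of D_n = {x in Z^n : sum(x) even} to a real vector y.
    Classic procedure: round componentwise; if the parity is wrong, re-round
    the single coordinate whose rounding error was largest."""
    f = [round(t) for t in y]  # nearest integer vector (ties round to even)
    if sum(f) % 2 == 0:
        return f
    # Parity wrong: flipping the coordinate with the largest rounding error
    # toward its second-nearest integer costs the least extra distance.
    k = max(range(len(y)), key=lambda i: abs(y[i] - f[i]))
    f[k] += 1 if y[k] > f[k] else -1
    return f

# (1.1, 0.2) rounds to (1, 0), which has odd sum; fixing coordinate 1 gives (1, 1).
assert closest_point_Dn([1.1, 0.2]) == [1, 1]
```

The same "round, then patch up the constraint" idea underlies fast decoders for denser lattices built from D_n and E8.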
Lattice Codes can achieve Capacity on the AWGN channel
 IEEE TRANS. INFORM. THEORY
, 1998
Low-density lattice codes
 IEEE Transactions on Information Theory
, 2008
Cited by 33 (2 self)
Abstract—Low-density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel. In LDLC a codeword x is generated directly in the n-dimensional Euclidean space as a linear transformation of a corresponding integer message vector b, i.e., x = Gb, where H = G⁻¹ is restricted to be sparse. The fact that H is sparse is utilized to develop a linear-time iterative decoding scheme which attains, as demonstrated by simulations, good error performance within 0.5 dB from capacity at a block length of n = 100,000 symbols. The paper also discusses convergence results and implementation considerations. Index Terms—Iterative decoding, lattice codes, lattices, low-density parity-check (LDPC) code.
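The encoding step x = Gb with sparse H = G⁻¹ amounts to solving H x = b. Real LDLCs use large non-triangular H and iterative methods; the sketch below uses an assumed tiny lower-triangular H so plain forward substitution suffices:

```python
# LDLC-style encoding sketch: the codeword x solves H x = b, where H = G^{-1}
# is sparse. Toy 3x3 lower-triangular H (assumed values), stored sparsely.
H = {           # row index -> {column index: value}
    0: {0: 1.0},
    1: {0: -0.5, 1: 1.0},
    2: {1: 0.75, 2: 1.0},
}

def encode(b):
    """Solve H x = b by forward substitution (i.e. compute x = G b)."""
    x = [0.0] * len(b)
    for i in range(len(b)):
        s = sum(v * x[j] for j, v in H[i].items() if j != i)
        x[i] = (b[i] - s) / H[i][i]
    return x

def apply_H(x):
    """Multiply H by x, to verify that H x recovers the integer message."""
    return [sum(v * x[j] for j, v in row.items()) for row in H.values()]

x = encode([1, 2, 3])          # integer message vector b
assert apply_H(x) == [1.0, 2.0, 3.0]
```

Sparsity is what makes both this multiplication and the paper's message-passing decoder cheap: each row touches only a few coordinates.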
Universal Bound on the Performance of Lattice Codes
 IEEE TRANS. INFORM. THEORY
, 1999
Cited by 26 (0 self)
We present a lower bound on the probability of symbol error for maximum-likelihood decoding of lattices and lattice codes on a Gaussian channel. The bound is tight for error probabilities and signal-to-noise ratios of practical interest, as opposed to most existing bounds that become tight asymptotically for high signal-to-noise ratios. The bound is also universal; it provides a limit on the highest possible coding gain that may be achieved, at specific symbol error probabilities, using any lattice or lattice code in n dimensions. In particular, it is shown that the effective coding gains of the densest known lattices are much lower than their nominal coding gains. The asymptotic (as n → ∞) behavior of the new bound is shown to coincide with the Shannon limit for Gaussian channels.
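A bound of this kind can be estimated numerically from the sphere-packing idea: a lattice whose Voronoi cell has volume V cannot do better than a ball of the same volume, so the error probability is at least the chance that the noise leaves that ball. The Monte Carlo sketch below is an assumed illustration of this reasoning, not the paper's exact bound:

```python
import math
import random

def sphere_bound(n, V, sigma, trials=100_000, seed=1):
    """Estimate Pr(||z|| > r_eff) for Gaussian noise z ~ N(0, sigma^2 I_n),
    where the ball of radius r_eff has volume V (the Voronoi-cell volume)."""
    Vn = math.pi ** (n / 2) / math.gamma(n / 2 + 1)  # unit-ball volume in R^n
    r_eff = (V / Vn) ** (1 / n)                      # effective radius
    rng = random.Random(seed)
    hits = sum(
        sum(rng.gauss(0, sigma) ** 2 for _ in range(n)) > r_eff ** 2
        for _ in range(trials)
    )
    return hits / trials

# For the integer lattice Z^2 (Voronoi volume V = 1) at noise sigma = 0.2,
# the lower bound on symbol-error probability is roughly a few percent.
p = sphere_bound(2, 1.0, sigma=0.2)
```

Increasing sigma increases the bound, as expected; in higher dimensions the same computation recovers the sharp waterfall behavior the abstract alludes to.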
Channel coding: The road to channel capacity
 Proceedings of the IEEE
, 2007
Low Density Lattice Codes
Cited by 10 (1 self)
Abstract—Low-density lattice codes (LDLC) are novel lattice codes that can be decoded efficiently and approach the capacity of the additive white Gaussian noise (AWGN) channel. In LDLC a codeword x is generated directly in the n-dimensional Euclidean space as a linear transformation of a corresponding integer message vector b, i.e., x = Gb, where H = G⁻¹ is restricted to be sparse. The fact that H is sparse is utilized to develop a linear-time iterative decoding scheme which attains, as demonstrated by simulations, good error performance within ~0.5 dB from capacity at a block length of n = 100,000 symbols. The paper also discusses convergence results and implementation considerations.
Signal Codes
, 806
Cited by 3 (1 self)
Abstract—Motivated by signal processing, we present a new class of channel codes, called signal codes, for continuous-alphabet channels. Signal codes are lattice codes whose encoding is done by convolving an integer information sequence with a fixed filter pattern. Decoding is based on the bidirectional sequential stack decoder, which can be implemented efficiently using the heap data structure. Error analysis and simulation results indicate that signal codes can achieve a low error rate at approximately 1 dB from channel capacity.
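The encoding operation this abstract describes is just a discrete convolution of integers with fixed taps. A self-contained sketch (the two filter taps below are assumed toy values, not the paper's filter):

```python
# Signal-code encoding sketch: the codeword is the convolution of an integer
# information sequence b with a fixed filter f (toy taps, assumed values).
def encode(b, f):
    """Full linear convolution of integer sequence b with filter f."""
    n, m = len(b), len(f)
    return [
        sum(b[k] * f[i - k] for k in range(n) if 0 <= i - k < m)
        for i in range(n + m - 1)
    ]

x = encode([1, -2, 3], [1.0, 0.5])
# x = [1.0, -1.5, 2.0, 1.5]
```

Because convolution is linear and b ranges over integer vectors, the set of codewords forms a lattice, which is what lets the sequential stack decoder search it efficiently.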