
## The impact of constellation cardinality on Gaussian channel capacity (2010)

Venue: Proc. Allerton Conf. Commun., Control and Computing

Citations: 15 (4 self)

### Citations

10533 | A mathematical theory of communication - Shannon - 1948

Citation Context: ...mized by E[X^4] = 3, the fourth moment of the standard Gaussian. For m = 3, the 3-point Gauss quadrature has the same first five moments and achieves a capacity gap of Θ(snr^6). Remark 2. As Shannon [19] observed, BPSK (antipodal signaling) achieves capacity in the low-SNR regime. Theorem 7 provides a finer quantitative justification of its optimality, in the sense that BPSK coincides with the 2-poin...
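The moment-matching claim in the excerpt is easy to check numerically. The sketch below is illustrative (not the paper's code) and uses NumPy's probabilists' Gauss-Hermite routine: the 2-point rule is exactly BPSK, and the 3-point rule reproduces the first five moments of N(0, 1).

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gauss_constellation(m):
    """m-point Gauss quadrature for the standard Gaussian weight:
    nodes are the roots of the probabilists' Hermite polynomial He_m,
    and the quadrature weights are normalized into probabilities."""
    x, w = hermegauss(m)            # weight function exp(-x^2 / 2)
    return x, w / w.sum()

# m = 2: antipodal signaling (BPSK), atoms at -1 and +1 with mass 1/2
x2, p2 = gauss_constellation(2)
assert np.allclose(x2, [-1.0, 1.0]) and np.allclose(p2, [0.5, 0.5])

# m = 3: matches the Gaussian moments E[Z^k] for k = 1, ..., 5
x3, p3 = gauss_constellation(3)
for k, moment in enumerate([0.0, 1.0, 0.0, 3.0, 0.0], start=1):
    assert abs((p3 * x3**k).sum() - moment) < 1e-12
```

For m = 3 the atoms are {-√3, 0, √3} with probabilities {1/6, 2/3, 1/6}, so the fourth moment is 2 · (1/6) · 9 = 3, exactly the Gaussian value.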

1500 | Handbook of mathematical functions with formulas, graphs, and mathematical tables, volume 55 - Abramowitz, Stegun - 1964

538 | Channel coding with multilevel/phase signals - Ungerboeck - 1982

Citation Context: ...on in both low and high-SNR regimes when the constellation size is fixed. The following result shows that the optimal constellation in the high-SNR limit is the equilattice, proposed by Ungerboeck in [16] and analyzed subsequently by Ozarow and Wyner [17] in terms of mutual information. We establish its optimality in the sense that it achieves the high-SNR finite-constellation capacity. Theorem 6. For...

478 | Universal Approximation Bounds for Superpositions of a Sigmoidal Function - Barron - 1993

Citation Context: ...is result applies in particular to Gaussian mixtures. The order of approximation and constructive algorithms are studied in the approximation theory, neural network, and statistics communities, for example, [3], [4], [5], [6], [7], etc. Barron [3] studied approximation by location and scale mixtures of sigmoidal functions and showed that the worst-case error of approximating a class of functions on R^d by m-t...

351 | A weak convergence approach to the theory of large deviations - Dupuis, Ellis - 1997

Citation Context: ...he next result establishes the asymptotic normality of the capacity-achieving input distribution as the constellation size grows. The proof hinges on the weak lower semicontinuity of relative entropy [15]. Theorem 5. For fixed snr > 0, lim_{m→∞} Cm(snr) = (1/2) log(1 + snr). (22) Moreover, as m → ∞, the optimal input distribution P*_{m,snr} → N(0, 1) weakly. IV. LOW AND HIGH-SNR ASYMPTOTICS OF FINITE-CONSTELL...

282 | Mutual information and minimum mean-square error in Gaussian channels - Guo, Shamai, et al.

Citation Context: ...Theorem 4. (m, snr) ↦ Cm(snr) is increasing in each argument when the other argument is fixed, and upper bounded by Cm(snr) ≤ min{log m, (1/2) log(1 + snr)}. (20) As a result of the I-MMSE relationship [14], the monotonicity of snr ↦ Cm(snr) is strict, because for any nondeterministic X, dI(X, snr)/dsnr = (1/2) mmse(X | √snr X + N) > 0. (21) We conjecture that m ↦ Cm(snr) is also strictly increasing. The n...
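Relation (21) can be sanity-checked numerically for a concrete input. The sketch below is illustrative, not the paper's code: it takes BPSK, for which the conditional-mean estimator is tanh(√snr·y), computes I(X, snr) and the MMSE by numerical integration, and confirms dI/dsnr ≈ mmse/2 by a finite difference.

```python
import numpy as np

# BPSK X = +/-1 (equiprobable) through Y = sqrt(snr)*X + N, N ~ N(0, 1)
y = np.linspace(-12.0, 12.0, 200001)
dy = y[1] - y[0]

def phi(t):
    return np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)

def mutual_info(snr):
    """I(X, snr) in nats, computed as h(Y) - h(N) by numerical integration."""
    a = np.sqrt(snr)
    py = 0.5 * (phi(y - a) + phi(y + a))          # output density of Y
    h_out = -np.sum(py * np.log(py + 1e-300)) * dy
    return h_out - 0.5 * np.log(2 * np.pi * np.e)

def mmse(snr):
    """mmse(X | sqrt(snr)X + N); the conditional mean is tanh(sqrt(snr)*y)."""
    a = np.sqrt(snr)
    xhat = np.tanh(a * y)
    err = 0.5 * (phi(y - a) * (1 - xhat)**2 + phi(y + a) * (1 + xhat)**2)
    return np.sum(err) * dy

# dI/dsnr = (1/2) mmse, checked by a central finite difference at snr = 1
snr, h = 1.0, 1e-4
dI = (mutual_info(snr + h) - mutual_info(snr - h)) / (2 * h)
assert abs(dI - 0.5 * mmse(snr)) < 1e-5
```

Since the MMSE is strictly positive for any nondeterministic input, this derivative is strictly positive, which is exactly the strict monotonicity of snr ↦ Cm(snr) cited in the excerpt.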

150 | On choosing and bounding probability metrics - Gibbs, Su - 2002

Citation Context: ...ap to the capacity for all SNR (see Section V). V. LOWER BOUNDS ON FINITE-CONSTELLATION CAPACITY In order to bound the relative entropy, we define the following distances between probability measures [20]: • The Hellinger distance between P and Q is H(P, Q) = (∫ (√dP − √dQ)²)^{1/2}. (31) • The χ²-distance between P and Q is χ²(P, Q) = ∫ (dP/dQ − 1)² dQ. (32) • The total variation distance between P and Q ...
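For discrete distributions on a common finite support, the integrals in (31) and (32) reduce to sums. The helpers below are illustrative only (not from the paper), evaluated on two example probability vectors.

```python
import numpy as np

def hellinger(p, q):
    """H(P, Q) = sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2), cf. (31)."""
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q))**2))

def chi_squared(p, q):
    """chi^2(P, Q) = sum_i (p_i/q_i - 1)^2 * q_i, cf. (32); needs q_i > 0."""
    return np.sum((p / q - 1)**2 * q)

def total_variation(p, q):
    """TV(P, Q) = (1/2) sum_i |p_i - q_i|."""
    return 0.5 * np.sum(np.abs(p - q))

p = np.array([0.5, 0.5, 0.0])      # two-point (BPSK-like) distribution
q = np.array([1/6, 2/3, 1/6])      # three-point Gauss quadrature weights

assert abs(total_variation(p, q) - 1/3) < 1e-12
assert abs(chi_squared(p, q) - 0.875) < 1e-12
```

Note the asymmetry of χ²: it requires the reference measure q to dominate p, which is why q is taken as the strictly positive vector here.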

146 | Modulation and coding for linear Gaussian channels - Forney, Ungerboeck - 1998

Citation Context: ...given by lim_{snr→∞} lim_{m→∞} C(snr) − I(Um; snr) = D(U ‖ N(0, 1)) = (1/2) log(πe/6) ≈ 0.25 bits. (90) Therefore at high SNR, the effective SNR loss due to using the equilattice is πe/6 ≈ 1.53 dB (e.g., [17], [24]). Because of their asymptotic normality, for the constellations B, C and D, the capacity gap vanishes as m → ∞. However, except for the Gauss quadrature, the other constellations achieve only polynomial...
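The two constants in (90) are two readings of the same ratio πe/6: once as a rate gap in bits, once as an SNR loss in dB. A quick, purely illustrative check:

```python
import math

ratio = math.pi * math.e / 6              # pi*e/6, approx 1.4233

gap_bits = 0.5 * math.log2(ratio)         # shaping gap: ~0.2546 bits
loss_db = 10 * math.log10(ratio)          # effective SNR loss: ~1.533 dB

assert abs(gap_bits - 0.25) < 0.01
assert abs(loss_db - 1.53) < 0.01
```

The 1.53 dB figure is the classical ultimate shaping gain of lattice coding over a uniform (equilattice) input.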

63 | Orthogonal polynomials, 4th ed. - Szegő - 1975

Citation Context: ...ed as follows: (Footnote 1: The sequence {Hm} is called the probabilists' Hermite polynomials, to avoid confusion with the orthogonal polynomials weighted by e^{−x²}, known as the physicists' Hermite polynomials [10].) [Fig. 1: the seven-point Gauss quadrature and a scaled version of H7.] • The peak amplitude ‖X^Q_m‖∞ is given by the largest root of Hm, which satisfies 2√m ...

61 | The information capacity of amplitude and variance-constrained scalar Gaussian channels - Smith - 1971

Citation Context: ...Uniqueness and symmetry of the optimal input distribution. • How does the peak power of the optimal input scale with constellation size? This question is closely related to Smith's classical result [27], which states that the optimal input distribution for the AWGN channel with both amplitude and average power constraints is finitely supported. However, little is known about the cardinality of the suppor...

57 | On the dimension and entropy of probability distributions - Rényi - 1959

Citation Context: ...√m + o(1/√m) [10, Theorem 6.32, p. 131]. • Because X^Q_m can be understood as an m-point fine quantization of a Gaussian random variable, its entropy grows according to H(X^Q_m) = (1/2) log m (1 + o(1)) [13]. III. PROPERTIES OF FINITE-CONSTELLATION CAPACITY A. Existence of capacity-achieving input distribution Denote by M the collection of all probability measures on (R, B). Let Mm = {P ∈ M : E[X] = 0, ...
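The (1/2) log m growth of H(X^Q_m) can be probed numerically. This is an illustrative sketch, not the paper's argument, and the tolerance is deliberately loose since the statement is only asymptotic: because the constant offset cancels when comparing two sizes, quadrupling m should raise the entropy by roughly (1/2) log 4.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def quadrature_entropy(m):
    """Entropy (in nats) of the m-point Gauss quadrature weights."""
    _, w = hermegauss(m)
    p = w / w.sum()                  # normalize into probabilities
    p = p[p > 0]                     # guard against underflow to zero
    return float(-(p * np.log(p)).sum())

# H(X^Q_m) = (1/2) log m (1 + o(1)); the constant term cancels in the
# difference, so H(256) - H(64) should be close to (1/2) log 4
growth = quadrature_entropy(256) - quadrature_entropy(64)
assert abs(growth - 0.5 * np.log(4)) < 0.15
```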

57 | Constructing cubature formulae: the science behind the art - Cools - 1997

Citation Context: ...gher dimensions is still open. For instance, optimal constructions for the two-dimensional Gaussian weight are unknown for more than 20 points. Even the asymptotic behavior of N*(m) for large m is unknown [25], [26]. Thus finding tight bounds on the optimal exponent appears to be challenging in the complex field. Another interesting direction is the (coherent) fading channel. If the channel gain is known on...

50 | On approximate approximations using Gaussian kernels - Maz'ya, Schmidt - 1996

Citation Context: ...applies in particular to Gaussian mixtures. The order of approximation and constructive algorithms are studied in the approximation theory, neural network, and statistics communities, for example, [3], [4], [5], [6], [7], etc. Barron [3] studied approximation by location and scale mixtures of sigmoidal functions and showed that the worst-case error of approximating a class of functions on R^d by m-term mixtur...

44 | Estimation in Gaussian Noise: Properties of the Minimum Mean-Square Error - Guo, Wu, et al. - 2011

Citation Context: ...to the m-point Gauss quadrature defined in Theorem 2. Proof sketch: Step 1. Using the I-MMSE relationship, it can be shown that I(X, ·) defined in (2) is smooth on R+ if and only if X has all moments [18], which, in particular, holds for discrete random variables with finite support. This allows us to write I(X, snr) as the Taylor expansion at snr = 0 up to arbitrarily high order. Step 2. Prove that th...

43 | Characterization and computation of optimal distributions for channel coding - Huang, Meyn - 2005

Citation Context: ...tically as the peak constraint grows. • Finding the optimal input support under a finite-constellation constraint is a challenging problem because of its non-convexity. On a related note, Huang and Meyn [28] proposed an iterative algorithm to find the optimal input distribution for the AWGN channel with both amplitude and average power constraints. It might be possible to apply their cutting-plane method to f...

30 | Rates of convergence for the Gaussian mixture sieve - Genovese, Wasserman

Citation Context: ...particular to Gaussian mixtures. The order of approximation and constructive algorithms are studied in the approximation theory, neural network, and statistics communities, for example, [3], [4], [5], [6], [7], etc. Barron [3] studied approximation by location and scale mixtures of sigmoidal functions and showed that the worst-case error of approximating a class of functions on R^d by m-term mixtures is O(...

18 | Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities, The Annals of Statistics - Ghosal, van der Vaart - 2001

Citation Context: ...es in particular to Gaussian mixtures. The order of approximation and constructive algorithms are studied in the approximation theory, neural network, and statistics communities, for example, [3], [4], [5], [6], [7], etc. Barron [3] studied approximation by location and scale mixtures of sigmoidal functions and showed that the worst-case error of approximating a class of functions on R^d by m-term mixtures is...

17 | Introduction to Numerical Analysis (3rd ed.) - Stoer, Bulirsch - 2002

14 | Approaching capacity by equiprobable signaling on the Gaussian channel - Sun, van Tilborg - 1993

Citation Context: ...ined in Theorem 2, which is capacity-achieving in the low-SNR regime. C. Quantized: uniformly divide the Gaussian CDF into m segments and define an equiprobable input distribution with atoms given by [23] x_{i,m} = E[X∞ | α_{i,m} ≤ X∞ ≤ α_{i+1,m}], i = 1, ..., m, (87) where α_{j,m} = Φ^{−1}((j − 1)/m), j = 1, ..., m + 1. The constellation is then scaled to have unit variance. D. CLT: let {Zk} be i.i.d. equiprobabl...
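Construction (87) can be sketched with the standard library alone, using the closed form E[X | a ≤ X ≤ b] = (φ(a) − φ(b)) / (Φ(b) − Φ(a)) for the mean of a truncated standard normal; each cell here has probability exactly 1/m, so the denominator is 1/m. Function names below are illustrative, not from [23].

```python
from statistics import NormalDist

def quantized_constellation(m):
    """Equiprobable atoms x_i = E[X | a_i <= X <= a_{i+1}] as in (87),
    with cell boundaries a_j = Phi^{-1}((j - 1)/m), scaled to unit variance."""
    Z = NormalDist()
    a = [float("-inf")] + [Z.inv_cdf(j / m) for j in range(1, m)] + [float("inf")]
    pdf = lambda t: 0.0 if abs(t) == float("inf") else Z.pdf(t)
    # truncated-normal mean; each cell has probability 1/m, hence the factor m
    x = [m * (pdf(a[i]) - pdf(a[i + 1])) for i in range(m)]
    scale = (sum(v * v for v in x) / m) ** 0.5   # std (mean is 0 by symmetry)
    return [v / scale for v in x]

# m = 2 recovers BPSK: atoms at -1 and +1
x2 = quantized_constellation(2)
assert abs(x2[0] + 1) < 1e-12 and abs(x2[1] - 1) < 1e-12
```

Before scaling, the two-point case gives atoms at ±√(2/π) ≈ ±0.798 (the mean of a half-normal), which normalize to ±1.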

13 | Functional properties of MMSE - Wu, Verdú

Citation Context: ...Cm(snr) ↗ C(snr) ≜ (1/2) log(1 + snr) (3) for any snr ≥ 0. The fundamental question of how fast we can approach the Gaussian channel capacity at a given SNR by increasing the constellation size was raised in [1]. To address this question, we define the capacity gap as Dm(snr) = C(snr) − Cm(snr). (4) Note that the difference in mutual information achieved by a given input and its Gaussian counterpart can be ex...

11 | On the capacity of the Gaussian channel with a finite number of input levels - Ozarow, Wyner - 1990

Citation Context: ...llation size is fixed. The following result shows that the optimal constellation in the high-SNR limit is the equilattice, proposed by Ungerboeck in [16] and analyzed subsequently by Ozarow and Wyner [17] in terms of mutual information. We establish its optimality in the sense that it achieves the high-SNR finite-constellation capacity. Theorem 6. For fixed m ≥ 2, as snr → ∞, log m − Cm(snr) = O(√snr · ...

4 | Neural networks and approximation by superposition of Gaussians - Ferreira - 1997

Citation Context: ...sult applies in particular to Gaussian mixtures. The order of approximation and constructive algorithms are studied in the approximation theory, neural network, and statistics communities, for example, [3], [4], [5], [6], [7], etc. Barron [3] studied approximation by location and scale mixtures of sigmoidal functions and showed that the worst-case error of approximating a class of functions on R^d by m-term m...

3 | Shape of a distribution through the L2-Wasserstein distance - Cuesta-Albertos - 2002

Citation Context: ...eme (infinity or two, respectively). In terms of other weaker distances (e.g., the Kolmogorov distance or the Wasserstein distance), the approximation error also decays much more slowly, according to Θ(1/m) [8]. II. OPTIMAL QUADRATURE In this section we give a brief introduction to optimal quadrature in a probabilistic setup. This construction plays a key role in finite-constellation problems. Let Xm be a s...

2 | On the convergence of sequences of moment generating functions - Kozakiewicz - 1947

1 | Tauberian theorems, Annals of Mathematics - Wiener - 1932

Citation Context: ...per bounds in Theorem 1 satisfy these intuitive requirements. Moreover, we have E(snr) = Θ(1/snr) in the high-SNR regime. Approximation by location mixtures dates back to Wiener's Tauberian theorem [2], which states that the linear subspace spanned by the translates of a given function is dense in L²(R^d) if and only if the zeros of its Fourier transform have zero Lebesgue measure. This result applies i...

1 | The Christoffel function for the Hermite weight is bell-shaped - Nikolov - 2003

Citation Context: ...m denote the roots of Hm and w_{i,m} = (m − 1)! / (m H²_{m−1}(x_{i,m})). (17) Then (11) holds for all f ∈ Π_{2m−1}. Due to the symmetry of the Hermite polynomials, X^Q_m is also symmetric, with a bell-shaped distribution [11]. For each odd m, there is an atom at zero. The distribution of X^Q_m for m ≤ 3 is given in Table I. Fig. 1 shows the seven-point quadrature. For higher-order quadrature formulae, see [12, Table 25.10]...
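The weight formula (17) can be cross-checked against NumPy's probabilists' Gauss-Hermite routine (an illustrative check; `hermegauss` returns unnormalized weights, so they are first normalized into probabilities):

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

m = 7
x, w = hermegauss(m)                  # nodes are the roots of He_m
p = w / w.sum()                       # probability weights of X^Q_m

# w_{i,m} = (m-1)! / (m * He_{m-1}(x_i)^2), formula (17)
he_prev = hermeval(x, [0] * (m - 1) + [1])   # evaluate He_{m-1} at the nodes
p_formula = math.factorial(m - 1) / (m * he_prev**2)
assert np.allclose(p, p_formula)

# symmetry of the rule, and an atom at zero since m is odd
assert np.allclose(p, p[::-1])
assert min(abs(x)) < 1e-12
```

For m = 2 the formula gives He_1(±1)² = 1 and hence weights 1!/(2·1) = 1/2 at ±1, i.e., BPSK.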

1 | Advances in multidimensional integration - 2002

Citation Context: ...imension is still open. For instance, optimal constructions for the two-dimensional Gaussian weight are unknown for more than 20 points. Even the asymptotic behavior of N*(m) for large m is unknown [25], [26]. Thus finding tight bounds on the optimal exponent appears to be challenging in the complex field. Another interesting direction is the (coherent) fading channel. If the channel gain is known only at ...