### Table 3. Mutual information estimates of characteristic frequencies larger than the bandwidth of single spikes. (Columns: Data, Residuum)

"... In PAGE 10: ... The extent of mutual information up to 15 MHz is compatible with a 'foot' of low flux sometimes observed in spectra of spikes. The peaks in mutual information occur at different frequency separations in the four events (Table 3). This agrees with the maximum entropy method (Table 2).... ..."

### Table 1: Results showing the mutual information, high-band entropies, and the ratio between the mutual information and the high-band entropies for different sound classes and different high-band parameterizations.

"... In PAGE 74: ... Each pair of sentences was played twice and in random order to increase the statistical significance of the test. The results from the listening test are displayed in Table 1 and show that our system is rated between equal and slightly worse compared to the original. CRoS Q.... In PAGE 74: ...94 +/- 0.12 Table 1: Mean scores together with 95% confidence intervals of the CCR listening test when comparing CRoS, Q.25, and Q.... In PAGE 92: ...6 4000 /uw/ 5 4.7 20000 Table 1: Estimated manifold dimension, differential entropy (in bits), and the number of observations available for the different vowel classes. For the linear prediction analysis, a pre-emphasis filter (finite impulse response filter with transfer function F(z) = 1 - 0.97 z^{-1}) was applied to the down-sampled signal.... In PAGE 92: ... For the experiment we used K = 10 random codebooks ranging in size from M_1 = 2^{⌊log2(N-1)-2⌋} to M_10 = N - 1, where ⌊·⌋ denotes rounding down to the nearest integer and N is the maximum number of available observations for each vowel class. From the results displayed in Table 1 we observe that the dimensionality of the space varies, ranging from five to seven. Thus, according to our experiments, the maximum number of parameters needed to describe a vowel is seven if the non-linear, approximately deterministic dependencies of the data are known.... In PAGE 93: ... The envelopes of the vowel spectra usually show three clear resonances (so-called formants) in the frequency range 300-3400 Hz, and have a negative spectral tilt. The formants can be specified by their location in frequency together with their corresponding bandwidths; thus the number of degrees of freedom is similar to the estimates of intrinsic dimensionality shown in Table 1. However, the exact mechanism underlying this behavior is outside the scope of this paper.... In PAGE 119: ... Finally, the estimate of the lower bound on the prediction error P_e is obtained using (2) with the estimate of H(Y|X). The procedure is summarized in Table 1. Note that the procedure does not affect the error-rate estimates negatively when the class-conditional feature spaces show no intrinsic dimensionality, since no noise would be added.... In PAGE 119: ... Finally, estimate the lower bound on the prediction error P_e using (2) and (3). Table 1: Estimation of error probability. 5 Experiments and results In this section we present three experiments.... In PAGE 120: ... In the second and third experiments we estimate the classification error probability for an artificial and a real-world case, respectively. We use the estimation procedure outlined in Table 1 for this purpose. The experiments show the importance of constraining the resolution of the class-conditional feature spaces when the feature spaces have an intrinsic dimensionality that is different from the extrinsic dimensionality (i.... In PAGE 122: ... space, we need to constrain the resolution of the space before estimating the mutual information between the features and the class labels. As discussed in Section 3, it is desirable to constrain the resolution as little as possible, and we use the automatic estimation procedure outlined in Table 1 for this purpose. For the experiment we used the following configuration: number of observations L_y = 20000, number of subsets M = 5, logarithm of the minimum number of observations for a class log2(L̃_y) = log2(L_y) - 1, slope threshold = 0.1.... In PAGE 122: ... The level of noise was controlled by varying (starting from 0 and then incremented by 0.05 at step 5 in Table 1). The uniformly distributed noise type simplifies the calculation of the true (constrained-resolution) mutual information, useful for evaluating the accuracy of the estimated mutual information.... In PAGE 137: ... By selecting the quantization step-size in this manner, the entropy is equal to the differential entropy for the high band represented by the LER. The ratio between the mutual information and the entropy in the high band is presented in Table 1, showing in percentage the decrease in uncertainty of the high band when we observe the narrow-band spectral envelope. The results show that the ratio between the mutual information and the perceived entropy of the high band is fairly low regardless of sound class.... ..."
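The pre-emphasis filter F(z) = 1 - 0.97 z^{-1} quoted in the excerpt is a one-tap FIR high-pass applied sample by sample. A minimal sketch of applying it, with the function name, boundary convention, and default coefficient chosen here purely for illustration (they are not from the cited paper):

```python
import numpy as np

def pre_emphasis(x, alpha=0.97):
    """Apply the FIR pre-emphasis filter F(z) = 1 - alpha * z^{-1}.

    Each output sample is x[n] - alpha * x[n-1]; the first sample is
    passed through unchanged (one common boundary convention).
    """
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    y[1:] = x[1:] - alpha * x[:-1]
    return y

# A constant (DC-only) signal is attenuated to small residuals,
# which is the point of pre-emphasis: boost high, suppress low frequencies.
print(pre_emphasis([1.0, 1.0, 1.0, 1.0]))
```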

### Table 2: The standard error of the mutual information estimator, SE{Î{x; y}}

### TABLE II MUTUAL INFORMATION ESTIMATES FOR CONTOURLET AND WAVELET REPRESENTATIONS OF THE LENA IMAGE USING DIFFERENT FILTERS.

### TABLE IV AVERAGE MUTUAL INFORMATION ESTIMATES WITH A SINGLE PARENT, NEIGHBOR, AND COUSIN.

### Table 1: Table of kernel dependence functionals. Columns show whether the functional is covariance or correlation based, and rows indicate whether the dependence measure is the maximum singular value of the covariance/correlation operator, or a bound on the mutual information.

2005

"... In PAGE 30: ...1 Conclusions We have introduced two novel functionals to measure independence: the constrained covariance (COCO), which is the spectral norm of the covariance operator between reproducing kernel Hilbert spaces, and the kernel mutual information (KMI), which is a function of the entire spectrum of the empirical estimate of this covariance operator. The first quantity is analogous to the kernel canonical correlation (KCC), which is the spectral norm of the correlation operator; the second is analogous to the kernel generalised variance (KGV), which is a function of the empirical correlation operator spectrum (see Table 1 in the introduction). We prove two main results.... ..."
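As a concrete illustration of the constrained covariance described in this excerpt, here is a minimal sketch (not the authors' code): it estimates COCO as the largest singular value of the product of centered Gram matrices, scaled by 1/n. The Gaussian kernel, its width `sigma`, and the toy data are assumptions made for the example.

```python
import numpy as np

def gram(x, sigma=1.0):
    """Gaussian (RBF) Gram matrix for a 1-D sample."""
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def coco(x, y, sigma=1.0):
    """Empirical constrained covariance: spectral norm of the empirical
    cross-covariance operator, computed as sqrt(lambda_max(Kc @ Lc)) / n,
    where Kc, Lc are the centered Gram matrices of the two samples."""
    n = len(x)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Kc = H @ gram(x, sigma) @ H
    Lc = H @ gram(y, sigma) @ H
    lam = np.max(np.abs(np.linalg.eigvals(Kc @ Lc)))
    return np.sqrt(lam) / n

rng = np.random.default_rng(0)
x = rng.normal(size=200)
# A fully dependent pair scores much higher than an independent pair.
print(coco(x, x), coco(x, rng.normal(size=200)))
```

A large COCO value is evidence of dependence; for independent samples the centered operators are nearly uncorrelated and the statistic stays close to zero.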

Cited by 8

### Table 8: Results with mutual information

2000

Cited by 5

### Table 3: Ranked by mutual information

### Table 1. Mutual information estimate. PX, NX, CX refer to coefficients across scale, space, and directions, respectively. (Columns: Lena, Barbara, Peppers)

"... In PAGE 2: ... Even after the bias is partially removed, the residual bias causes the estimator to underestimate the mutual information, and the estimate can only serve as a lower bound. Table 1 shows the estimate results for various images. We have found that contourlet coefficients of natural images show the highest intra-subband dependencies, followed by rather high inter-... ..."
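The kind of plug-in (histogram-based) mutual information estimator whose bias this excerpt discusses can be sketched as follows. This is an illustrative re-implementation, not the paper's estimator; the bin count and data are arbitrary choices for the example.

```python
import numpy as np

def mi_histogram(x, y, bins=16):
    """Plug-in mutual information estimate, in bits, from a 2-D histogram.

    Plug-in estimates are biased at finite sample sizes; per the excerpt,
    even after partial bias removal the result should only be read as a
    lower bound on the true mutual information.
    """
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)   # marginal of x (column vector)
    py = pxy.sum(axis=0, keepdims=True)   # marginal of y (row vector)
    mask = pxy > 0                        # avoid log(0) on empty bins
    return float(np.sum(pxy[mask] * np.log2(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
x = rng.uniform(size=5000)
# Identical variables give roughly log2(bins) bits; independent ones near 0.
print(mi_histogram(x, x), mi_histogram(x, rng.uniform(size=5000)))
```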