### Table 2: Mapping between information weight and codeword weight for two equivalent 8-state encoders.

"... In PAGE 13: ... (Clearly, if the degree of u(D) is large, the codeword y(D) will have high weight.) Table 2 shows the mapping between information weight and codeword weight for these two encoders for all codewords y(D) of weight 9 or less. In this case we note that, for all reversible polynomials u(D), the mapping between information weight and codeword weight is the same for the two encoders.... ..."
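The information-weight-to-codeword-weight mapping in the excerpt above can be illustrated by enumerating it for a toy encoder. A minimal sketch, assuming a hypothetical memory-3 (8-state) rate-1/2 feedforward encoder with octal generators (13, 17); these are illustrative taps, not the encoders of the cited paper:

```python
# Sketch: enumerate (information weight, codeword weight) pairs for a
# hypothetical 8-state (memory-3) rate-1/2 feedforward encoder.
# The generator taps (octal 13, 17) are illustrative assumptions.
from itertools import product

G = (0o13, 0o17)  # taps over the current input bit plus 3 memory bits

def encode(bits):
    reg = 0
    out = []
    for b in list(bits) + [0, 0, 0]:       # zero-terminate the trellis
        reg = ((reg << 1) | b) & 0b1111    # current bit + 3-bit memory
        out.extend(bin(reg & g).count("1") % 2 for g in G)  # tap parity
    return out

# For every nonzero length-6 information word, record which information
# weights map to each codeword (Hamming) weight.
mapping = {}
for u in product([0, 1], repeat=6):
    w_info = sum(u)
    w_code = sum(encode(u))
    if w_info:
        mapping.setdefault(w_code, set()).add(w_info)

for w_code in sorted(mapping):
    print(w_code, sorted(mapping[w_code]))
```

Running the same enumeration for two equivalent encoders and comparing the resulting dictionaries is one way to check the equality of mappings claimed in the excerpt.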

### Table 3.4 Sablefish - Hired Skipper Information. Weights are in thousands of pounds. Sablefish pounds are expressed in round weight. Source: NMFS 2003a.

2004

### Table 4.1. The IRWEF for the code (21,9), i.e., parity weight for different information-weight patterns for the binary systematic RS(7,3).

2001

Cited by 2
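The IRWEF named in the caption above, A(W, Z) = &Sigma; A_{w,z} W^w Z^z, counts the codewords of a systematic code by information weight w and parity weight z, and for a small code it can be computed by exhaustive enumeration. A sketch using a systematic (7,4) Hamming code as a stand-in, since the binary-image RS(7,3) of the table is not reproduced here:

```python
# Sketch: compute the IRWEF of a small systematic code by exhaustive
# enumeration.  The systematic (7,4) Hamming code below is a stand-in
# assumption, not the binary-image RS(7,3) from the cited table.
from itertools import product
from collections import Counter

# Parity equations of a systematic (7,4) Hamming code: p_j = sum_i u_i * P[i][j] mod 2
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]

irwef = Counter()
for u in product([0, 1], repeat=4):
    parity = tuple(sum(u[i] * P[i][j] for i in range(4)) % 2 for j in range(3))
    irwef[(sum(u), sum(parity))] += 1   # key: (information weight, parity weight)

for (w, z), count in sorted(irwef.items()):
    print(f"A_{{{w},{z}}} = {count}")
```

The coefficients A_{w,z} printed here are exactly the entries a table like 4.1 tabulates, one row per information-weight pattern.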

### Table 3.3 Halibut - Hired Skipper Information. Weights are in thousands of pounds. Halibut pounds are expressed in net (headed and gutted) weight. Source: NMFS 2003a.

2004

### Table 7: F-ratios and correlation values for individual N-grams and the overall NIST score, given different information-weighting sources. Values are for commercial translation systems on the 2001 Chinese-to-English corpus. Eight reference translations were used to compute these statistics.

"... In PAGE 7: ... To see if more accurate estimates of likelihoods might improve score performance, an auxiliary database comprising the entire English-language subset of both the TDT2 and TDT3 corpora was used to estimate N-gram likelihoods. Table 7 shows... ..."
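The information weights this caption refers to are, in the NIST metric, estimated from N-gram corpus counts as Info(w1..wn) = log2(count(w1..wn-1) / count(w1..wn)). A minimal sketch on a toy corpus (the corpus and whitespace tokenisation are illustrative assumptions, not the TDT2/TDT3 setup of the paper):

```python
# Sketch of the NIST-style information weight of an N-gram, estimated
# from corpus counts:  Info(w1..wn) = log2(count(w1..wn-1) / count(w1..wn)).
# The tiny corpus below is an illustrative assumption.
import math
from collections import Counter

corpus = "the cat sat on the mat the cat ran".split()

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def info_weight(ngram, tokens):
    n = len(ngram)
    joint = ngram_counts(tokens, n)[ngram]
    # For unigrams the "history" count is the total number of tokens.
    hist = len(tokens) if n == 1 else ngram_counts(tokens, n - 1)[ngram[:-1]]
    return math.log2(hist / joint)

print(info_weight(("the",), corpus))        # rarer continuations get larger weights
print(info_weight(("the", "cat"), corpus))
```

Swapping the count source, e.g. from the test references to a large auxiliary corpus, changes only the `tokens` argument, which is the comparison the excerpt describes.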

### TABLE I Error Event Probability Estimates for the memory 4 Convolutional Code. K = 2, L = 2, = -10 dB. A Total of 1,000 and 100,000 Simulation Runs per Error Event Were Considered With IS and MC, respectively. W = Information Weight. D = Hamming Weight.

### Table 3: Error Event Probability Estimates for the memory 4 Convolutional Code. K = 2, L = 2, = -10 dB. A Total of 1,000 and 100,000 Simulation Runs per Error Event Were Considered With Importance Sampling and Monte Carlo, respectively. W = Information Weight. D = Hamming Weight.

"... In PAGE 13: ... estimates for a given number of simulation runs. The results listed in Fig. 7 are again summarized in Table 2. The power and accuracy of our simulation techniques are further illustrated in Table 3, which compares conventional MC and IS results for a memory-4, rate-1/2 convolutional code with K = 2, L = 2, and = -10 dB. This table lists the first error event probabilities for the first nine most significant error events.... ..."
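The MC-versus-IS run counts in the captions above (100,000 vs. 1,000) reflect a standard rare-event trick that a generic example can illustrate. A sketch, assuming a Gaussian tail probability as a stand-in rare event; the threshold and sample sizes are illustrative, not the error-event simulator of the paper:

```python
# Sketch: importance sampling (IS) vs plain Monte Carlo (MC) for a
# rare-event probability, here P(N(0,1) > 4).  A generic illustration
# of why the IS column needs far fewer runs; all parameters are
# illustrative assumptions.
import math
import random

random.seed(0)
t = 4.0
true_p = 0.5 * math.erfc(t / math.sqrt(2))   # exact Gaussian tail probability

# Plain MC: with 100,000 samples, only a handful land in the tail.
n = 100_000
mc = sum(random.gauss(0, 1) > t for _ in range(n)) / n

# IS: draw from N(t, 1), biased toward the rare event, and reweight each
# hit by the likelihood ratio f(x)/g(x) = exp(t**2/2 - t*x).
m = 1_000
hits = (random.gauss(t, 1) for _ in range(m))
is_est = sum(math.exp(t * t / 2 - t * x) for x in hits if x > t) / m

print(f"exact {true_p:.3e}  MC {mc:.3e}  IS {is_est:.3e}")
```

With only 1,000 samples the IS estimate sits close to the exact tail probability, while the 100,000-sample MC estimate is dominated by the handful of tail hits it happens to see, mirroring the run-count asymmetry in the table.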

### Table 2 shows the perplexity results for ICSI meetings for the different methods. While the linear interpolation (LIN) and the similarity-modulated n-gram (SIM-MOD) do not bring any improvements over the baseline trigram model, the information-weighted geometric mean (INFG) reduces perplexity. The improvement of the information-weighted geometric mean interpolation over the trigram model is consistent with findings in [3]. For the other interpolations we always used the INFG method, since it outperformed all other interpolation methods.

"... In PAGE 3: ... Table 2. Perplexity results for ICSI meetings on dev02.... ..."
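The information-weighted geometric mean (INFG) described above belongs to the log-linear family of language-model interpolations. A minimal sketch of that family, assuming toy component models and fixed weights; INFG additionally derives its weights from information content, which is not reproduced here:

```python
# Sketch: geometric-mean ("log-linear") interpolation of two language
# models.  The component distributions and weights are toy assumptions;
# the INFG method additionally makes the weights information-dependent.
import math

vocab = ["a", "b", "c"]
p1 = {"a": 0.5, "b": 0.3, "c": 0.2}   # toy model 1
p2 = {"a": 0.2, "b": 0.2, "c": 0.6}   # toy model 2

def geo_interp(lams):
    # Weighted geometric mean of the component probabilities,
    # renormalised over the vocabulary.
    raw = {w: math.exp(lams[0] * math.log(p1[w]) + lams[1] * math.log(p2[w]))
           for w in vocab}
    z = sum(raw.values())
    return {w: v / z for w, v in raw.items()}

p = geo_interp((0.7, 0.3))
print(p)  # a proper distribution leaning toward model 1
```

Unlike linear interpolation, the geometric mean needs the explicit renormalisation by `z`, which is one reason the two families can behave differently in perplexity comparisons like Table 2.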

### Table 6. Consensus Wrappers (Initialisation)

2007

"... In PAGE 7: ... tolerant system where any interactions with observers occur through wrapper code residing at the immortal location star; this allows us to decompose our proof into the basic correctness and correctness-preservation phases, as in Table 1(c) and (d). Table 6 defines the wrapper code which, when put in parallel with C of Table 5, provides separate testing scenarios for the algorithm. We have two forms of initialisation code: Igen arbitrarily initialises every participant to either true or false after the action start, whereas Itrue and Ifalse initialise all participants to just true or just false, respectively.... ..."

Cited by 1

### Table 2: HO and CHO Consensus algorithms and correctness conditions

"... In PAGE 32: ... 5.7 Summary Table 2 summarizes the various HO and CHO Consensus algorithms described in this section, with their correctness requirements.... ..."