### Table 5: Error rates on the test data together with training and classification times for the ORL database. Cont n-tuple indicates the compiled version of the continuous n-tuple with 8 levels of quantisation. All results quoted in the table use five images per class for training and the remaining 5 per class for testing. The various n-tuple results and 1-nearest-neighbour classifier are the mean of 5 experiments, each one based on a different random partition of the data. The other quoted results were gained using various different experimental methodologies; hence, this table should only be taken as a rough guide.

1997

"... In PAGE 8: ...2% error rate, while retaining the recognition speed of the conventional n-tuple method, though the memory requirements are greater, with an address space for each class of 8^4 × 500 = 2,048,000. Table 5 shows the performance of several different approaches on the ORL database. The probabilistic decision-based neural net (PDBNN) results are taken from Lin and Kung [10].... In PAGE 8: ... The question of the statistical significance of these differences arises, and it will be interesting to explore this further, but from the results gained so far, the continuous n-tuple system appears to be at the very least competitive with the best of the other methods. The timing results in Table 5 should be treated with some caution since the different methods have been implemented by different people and on different platforms. The timings for the various n-tuple classifiers and the nearest-neighbour... ..."
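The excerpt describes an n-tuple scheme in which each class keeps a look-up table addressed by quantised pixel values, giving an address space of 8^4 entries per tuple times 500 tuples per class. A minimal sketch of a conventional n-tuple classifier in that spirit is shown below; all names, parameter defaults, and the binary (set/unset) table entries are illustrative assumptions, not taken from the cited paper.

```python
import random

class NTupleClassifier:
    """Illustrative conventional n-tuple classifier (not the paper's code).

    Images are assumed already quantised to `levels` grey levels. Each of
    `n_tuples` tuples samples `n` fixed random pixel positions and addresses
    a per-class table of size levels**n (so with n=4, levels=8, n_tuples=500
    the per-class address space is 8**4 * 500 = 2_048_000, as in the excerpt).
    """

    def __init__(self, n_pixels, n_classes, n=4, n_tuples=500, levels=8, seed=0):
        rng = random.Random(seed)
        self.levels = levels
        # Fixed random pixel positions shared by training and classification.
        self.tuples = [rng.sample(range(n_pixels), n) for _ in range(n_tuples)]
        # One table per tuple per class; entries are set to 1 during training.
        self.tables = [[[0] * levels ** n for _ in range(n_tuples)]
                       for _ in range(n_classes)]

    def _address(self, image, positions):
        # Treat the sampled pixel values as digits of a base-`levels` number.
        addr = 0
        for p in positions:
            addr = addr * self.levels + image[p]
        return addr

    def train(self, image, label):
        for t, positions in enumerate(self.tuples):
            self.tables[label][t][self._address(image, positions)] = 1

    def classify(self, image):
        # Score each class by how many of its tuple tables recognise the image.
        scores = []
        for tables in self.tables:
            scores.append(sum(tables[t][self._address(image, pos)]
                              for t, pos in enumerate(self.tuples)))
        return scores.index(max(scores))
```

Training writes a 1 at each tuple's address; classification simply sums table look-ups per class, which is why recognition speed is independent of the training-set size.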

Cited by 7

### Table 1: Error rates on the test data together with training and classification times for the ORL database. Cont n-tuple indicates the compiled version of the continuous n-tuple with 16 levels of quantisation. The n-tuple methods are suffixed with the parameters (n : m : ) in parentheses. The results show the mean for each experiment together with the standard deviation, and the minimum and the maximum error in parentheses. All results quoted in the table use five images per class for training and the remaining 5 per class for testing. The various n-tuple results and 1-nearest-neighbour (1-NN) classifier are each based on 100 experiments, each one using a different random partition of the data.

### Table 6 Quantisation matrix scale factors

"... In PAGE 64: ... When the last pixel of an image is written, the EOI bit should be set high. The Quant_Scale signal modifies the compression rate by dividing the quantisation matrix by a constant factor according to Table 6. The signal is registered so that changing its value does not affect the compression until the following block is being quantised.... In PAGE 67: ... The Quantisation process is, as explained in [20], just a process of dividing the coefficient matrix by a quantisation matrix (Figure 7). The quantisation matrix is also divided by some constant, which depends on the current value of quant_scale_code (see Table 6). The Quantizer stores all the possible quotient values in a look-up table in order to avoid divisions, thereby sacrificing a rather insignificant amount of area and gaining significant speed improvements.... In PAGE 73: ...number of pixels in each beam, i.e. the length of each beam. The resulting picture will hence be of size x_lines × y_lines in polar coordinates. The quantisation scale supplied in the application header is the number that is used for scaling the JPEG quantisation matrix according to Table 6. The number of bearings supplied is restricted to 10, in order to limit the overhead.... ..."
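The quantisation step the excerpt describes — scale the quantisation matrix by a constant, then replace the per-coefficient division with a precomputed look-up table — can be sketched as below. This is a software illustration under stated assumptions (8-bit coefficient magnitudes, integer division), not the thesis's hardware implementation; the function names and the two-entry example matrix are hypothetical.

```python
def build_lut(quant_matrix, scale):
    """Precompute quotients so quantisation needs no runtime division.

    The quantisation matrix is first divided by the constant `scale`
    (the role Table 6's factors play); then, for each scaled divisor,
    all quotients for 8-bit magnitudes 0..255 are tabulated.
    """
    scaled = [max(1, round(q / scale)) for q in quant_matrix]
    return [[value // q for value in range(256)] for q in scaled]

def quantise_block(coeffs, lut):
    """Quantise a block of DCT coefficients by table look-up only."""
    out = []
    for c, table in zip(coeffs, lut):
        mag = min(abs(c), 255)          # clamp to the tabulated range
        out.append(table[mag] if c >= 0 else -table[mag])
    return out
```

The table trades memory (256 entries per matrix position) for speed, which mirrors the area-versus-speed argument in the excerpt.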

### Table 1. Specification of the AFs and their respective quantised values.

2006

"... In PAGE 2: ... 2.2 Articulatory features In this research, we used the set of seven articulatory features shown in Table 1. The names of the AFs are self-explanatory, except maybe for static, which gives an indication of the rate of acoustic change, e.... ..."

Cited by 3

### Table 1: Bit allocation tables for the quantisers

1997

Cited by 1

### Table 1: Quantisation performances: Eq / q

### Table 1 Quantisation performances: Eq / ESCq

"... In PAGE 9: ...Performance of different methods is compared in Table 1. From these results we can see that ESOM achieved the best results for natural images (Mandrill and Lenna).... ..."

### Table 1. Summary of statistics for different tests. (QL = number of quantisation levels)

"... In PAGE 10: ...Figure 15. Quantised responses of System 4 and of its model obtained by MLP. (number of quantisation levels = 15; number of attributes = 3). Table 1 summarises the statistics for the tests on the four systems. Table 2 gives a comparison of the accuracies of models obtained using RULES-3 and MLPs.... ..."

### Table 1. Constraints and initial conditions for the quantiser generating function.

2000

"... In PAGE 8: ...2) for r, s, t = 1...8. The parameter space now consists of only the seven real parameters of this function. Table 1 lists the initial values of the parameters and their allowable ranges. The initial parameter values were set in order that the previously known best quantisation strategy, the fixed quantisation array, was generated, where q0 is the quantiser scale which produces the array leading to an output bit rate which is nearest to but less than the desired bit rate for which the optimisation process is to be run.... ..."

Cited by 1
