### Table 1: Source vector dimension (L), codebook size (N), and channel space dimension (K) for each class. The case K = 1 corresponds to the real or imaginary axis of a QAM constellation.

1996

"... In PAGE 2: ... In this work, simulated annealing is used to design the index maps as described in the next section. For the proposed system, the choices of source vec- tor dimension (L), codebook size (N), and channel space dimension (K) for each of the three classes are listed in Table1 . The QAM signal constellations were chosen with an odd number of amplitudes in each di- mension.... ..."

Cited by 4
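The simulated-annealing index-map design mentioned in the excerpt can be sketched as follows. This is a minimal illustration, not the paper's actual formulation: the cost function (expected squared-error distortion between codewords whose indices differ in one bit), the cooling schedule, and all parameter names are assumptions.

```python
import math
import random

def sa_index_map(codebook, n_bits, steps=2000, t0=1.0, alpha=0.999, seed=0):
    """Anneal a permutation of codeword indices to reduce the extra distortion
    caused by single-bit channel errors (a common index-assignment cost)."""
    rng = random.Random(seed)
    n = len(codebook)

    def cost(p):
        # Sum of squared distances between codewords whose (permuted)
        # indices differ in exactly one bit.
        c = 0.0
        for i in range(n):
            for bit in range(n_bits):
                j = i ^ (1 << bit)
                if j < n:
                    c += sum((x - y) ** 2
                             for x, y in zip(codebook[p[i]], codebook[p[j]]))
        return c

    perm = list(range(n))
    cur = cost(perm)
    best, best_cost = perm[:], cur
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]          # propose a swap
        new = cost(perm)
        if new < cur or rng.random() < math.exp((cur - new) / max(t, 1e-12)):
            cur = new                                # accept (possibly uphill)
            if cur < best_cost:
                best, best_cost = perm[:], cur
        else:
            perm[i], perm[j] = perm[j], perm[i]      # reject: undo the swap
        t *= alpha
    return best, best_cost
```

With `steps=0` the function simply returns the identity assignment and its cost, which gives a baseline to compare the annealed map against.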

### Table V.a The first four columns are essentially the instructions carried by the particles in the hidden variable model. Each row in this table forms a vector in an eight-dimensional real space. Denote by C the convex hull of the sixteen rows, taken as vertices in this space. The dual representation of this hull is the intersection of a finite number of half-spaces, which may be represented as linear inequalities. The above procedure can be applied to any general experimental setup of the form discussed in this section. In [13] and [14] Pitowsky shows that deciding whether a vector p lies in this polytope is NP-complete. However, [4] and [16] use available linear programming packages [15] and Mathematica to solve small instances of this problem, such as GHZ.


2004
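Once the dual (half-space) representation described above is known, the membership test itself is a trivial evaluation of linear inequalities; as the excerpt notes, the NP-complete part is deciding membership for the correlation polytope in general. A minimal sketch of the dual-form check, using a hypothetical 2-D polytope rather than the actual 8-dimensional hull:

```python
def in_polytope(p, halfspaces, tol=1e-9):
    """Membership test for a polytope given in dual (half-space) form.
    Each constraint is a pair (a, b) encoding the inequality a . p <= b."""
    return all(sum(ai * pi for ai, pi in zip(a, p)) <= b + tol
               for a, b in halfspaces)

# Hypothetical example: the unit square [0, 1]^2 as four linear inequalities.
unit_square = [((1, 0), 1), ((-1, 0), 0), ((0, 1), 1), ((0, -1), 0)]
```

For correlation polytopes the inequalities play the role of Bell-type inequalities; enumerating them from the sixteen vertices is where the linear programming packages and Mathematica come in.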

### Table 1. Source vector dimension (L), codebook size (N), and channel space dimension (K) for each class. The case K = 1 corresponds to the real or imaginary axis of a QAM constellation. The side information parameters, consisting of the motion vectors, the block classification table, and the quantizer scale factors, are particularly sensitive to channel noise. Also, robust mappings are difficult to design in this case, since the neighbor properties for these parameters are not well ...

1996

"... In PAGE 2: ... In this work, simu- lated annealing was used to design the index maps as described in [5]. For the proposed system, the choices of source vector dimension (L), codebook size (N), and channel space dimension (K) for each of the three classes are listed in Table1 . The QAM signal con- stellations are chosen with an odd number of am- plitudes in each dimension.... ..."

Cited by 3

### Table 1: Sample Chromosome for Similarity Searching Vector Space Model

"... In PAGE 4: ... based on Figure 1, it starts with the generation of random weights associated with each of the 13 similarity searching and two based on probability searching for all of the chromosomes/individuals in the population. A sample chromosome/individual in the population is given in Table1 . A chromosome is a series of real numbers in the range -1.... ..."

### Table 3 The sensitivity of the nominal luminance mechanism (Vl) and the individual luminance mechanisms as measured by flicker photometry

2000

"... In PAGE 14: ... (A1) With data obtained for N]2 test patterns, the N equations of the form of (A1) may be solved to de- termine the chromoluminance direction of SEN. Table3 provides the values of SEN we determined for each of our three observers. References Brainard, D.... ..."

Cited by 2

### Table 3 Theoretical and actual performance of various coding region seeds. BLAST-X: unspaced seeds of support X; BLAT-X-Y: unspaced seeds with allowed mismatches; CPH-X: spaced seeds of support X optimized in the coding-aware model; CVS-X-Y: vector seeds (allowing spaced seeds and mismatches) optimized in the coding-aware model; CVS2-X-Y: same as CVS, except values from {0, 1, 2} are allowed. The predicted false negative rates come from Formula (1), where fragment lengths and numbers are inferred from real alignments.

2005

"... In PAGE 14: ... Results for some interesting seeds are shown in Table 3. We then chose the best seeds allowing mismatches, at each support/threshold combination (the seeds denoted by CVS-X-Y in Table3 ), and explored the e ect of xing the middle positions of some of their codons, by setting the corresponding position in the seed vector to two and raising the threshold by one. Results for this experiment are shown in Table 3.... In PAGE 14: ...arts of alignments. Results for some interesting seeds are shown in Table 3. We then chose the best seeds allowing mismatches, at each support/threshold combination (the seeds denoted by CVS-X-Y in Table 3), and explored the e ect of xing the middle positions of some of their codons, by setting the corresponding position in the seed vector to two and raising the threshold by one. Results for this experiment are shown in Table3 . It shows the perfor- mance of a collection of BLAST and BLAT seeds, optimized spaced seeds, and optimized vector seeds, chosen for sensitivity to this model.... In PAGE 15: ...Table3 matches fully 95% of alignments. This is comparable to the sensitivity of the BLAST-7 seed, whose false positive rate is ninety times higher.... In PAGE 15: ... One can also use shorter seeds for use with two hits, which allows for smaller hash table structures. The last seed in Table3 is especially appealing; the support for this seed is just eight, making for very small hash table, while the performance is comparable to one hit to the longer seed CVS2-13-12 in the table. 4.... ..."

Cited by 11
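The spaced seeds compared in this table can be sketched as follows. This is a minimal same-diagonal matcher and an assumption for illustration only; the paper's coding-aware vector seeds additionally weight positions and apply a hit threshold:

```python
def seed_hits(s1, s2, seed):
    """Positions where a spaced seed matches between two aligned sequences.
    seed is a 0/1 string: '1' positions must match, '0' positions are free."""
    hits = []
    k = len(seed)
    for i in range(min(len(s1), len(s2)) - k + 1):
        if all(seed[j] == '0' or s1[i + j] == s2[i + j] for j in range(k)):
            hits.append(i)
    return hits
```

For example, with seed "1101" a mismatch falling on the '0' position is tolerated, which is what makes spaced seeds more sensitive than contiguous BLAST-style seeds of the same support.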

### Table 1: Impact of LSI vector space and GM

2005

"... In PAGE 6: ...aseline (similar to the method of (Liu et al., 2004)). We report first results using only the names of the categories as initial seeds. Table1 displays the F1 measure for the 20news- groups and Reuters data sets, with and without LSI and with and without GM. The performance figures show the incremental benefit of both LSI and GM.... ..."

Cited by 2

### Table 4 Additional Primitive Lie Algebras of Vector Fields in R^2

1996

"... In PAGE 3: ... For instance, extensive symbolic algebra calculations using the realization of so(3; R) given by Case 4.2 in Table4 failed to yield any real-valued potentials, even after allowing for a possible rescaling of the wave function by a non-zero gauge factor, cf. (7).... In PAGE 15: ... However, the complexi cation (or analytic continuation) of each of these ve additional Lie algebras will, of course, be equivalent, under a complex di eomorphism, to one of the complex normal forms on our list. The complete list of these additional real forms along with their canonical complexi- cations appears in Table4 ; the required changes of variables needed to change them into the canonical form appears later in this section. Therefore, to complete the classi cation of all real Lie algebras of rst-order di erential operators, we need only determine the real cohomology spaces associated with these ve additional real forms, and, further, to determine what values of the cohomology parameters will produce quasi-exactly solvable algebras.... In PAGE 21: ... Theorem 11. Every nite-dimensional Lie algebra of rst order di erential opera- tors with vector eld part given by one of the ve additional primitive real forms listed in Table4 is a subalgebra of one of the pseudo-orthogonal Lie algebras given by (45). 6.... ..."

### Table 2.3: Comparing performance of GA with and without noise assignment in the search variables for different numbers of bits/parameter. Simulation results are for test function 2 (averaged from 100 trials).

If, by any chance, all the discretized search points are located in the valley of the problem space, the Binary-GA displays severe difficulty in finding a solution, as shown in the case of 7 bits. However, the noise assignment scheme provides consistent accuracy for a variety of different bit sizes. The Binary-GA with noise can sample more information than the conventional Binary-GA. The FP-GA also demonstrates good performance on this search problem. Since the individuals in the FP-GA are coded as a real-valued n-dimensional vector, the FP-GA provides a more efficient way of preserving the information of the problem space.

Test function 3. Figure 2.5 shows test function 3.
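One plausible reading of the noise-assignment scheme is to jitter each decoded parameter within half a grid step, so the Binary-GA can reach points between its fixed discretization levels. The details below (the uniform jitter, its width, and the decoding convention) are assumptions for illustration, not the excerpt's exact scheme:

```python
import random

def decode(bits, lo, hi):
    """Map a bit string to a real value on a uniform grid over [lo, hi]."""
    n = int(bits, 2)
    return lo + (hi - lo) * n / (2 ** len(bits) - 1)

def decode_with_noise(bits, lo, hi, rng=None):
    """Decoded value jittered uniformly within half a grid step, so points
    between the discretization levels can also be sampled."""
    rng = rng or random.Random()
    step = (hi - lo) / (2 ** len(bits) - 1)
    x = decode(bits, lo, hi) + rng.uniform(-step / 2, step / 2)
    return min(max(x, lo), hi)   # keep the jittered value inside the bounds
```

The FP-GA mentioned in the excerpt sidesteps this entirely by evolving the real-valued vector directly, with no bit-level discretization.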

### Table 3. Improvement in compression of real data sets

2005

"... In PAGE 8: ...e, improvement factor = compressed size of original compressed size of reordered Thus, an improvement factor of 5 means, compressed reordered data takes 5 times less space than the com- pressed original. Table3 reveals the e ectiveness of our Gray code reordering algorithm on 7 data sets from various ap- plications. In this table, the rst three columns give the name of the problem, number of tuples, and num- ber of columns in the bitmap table, respectively.... In PAGE 9: ... The last two data sets are composed of document feature vectors from 20 newsgroups based on TF/IDF (Term Frequency-Inverse Document Fre- quency) followed by reduction based on SVD (Singular Value Decomposition). As seen in Table3 , compression rates are magni ed when the tuples are reordered with respect to Gray code ordering in all problem instances from all appli- cations. The compressed index size for data stock is 7 times less than the original after reordering.... ..."

Cited by 5
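The Gray code reordering idea can be sketched as sorting bitmap rows by their rank in the binary-reflected Gray sequence, so that consecutive rows differ in few bits and run-length-style compression of each column improves. This is a simplified stand-in for the paper's algorithm, not its actual implementation:

```python
def gray_to_binary(g):
    """Convert a binary-reflected Gray codeword to its rank (binary value)."""
    mask = g >> 1
    while mask:
        g ^= mask
        mask >>= 1
    return g

def gray_reorder(rows):
    """Sort 0/1 tuples (bitmap rows) into binary-reflected Gray code order."""
    def rank(row):
        g = 0
        for bit in row:
            g = (g << 1) | bit   # pack the row into an integer codeword
        return gray_to_binary(g)
    return sorted(rows, key=rank)

def runs(column):
    """Number of runs in a 0/1 column; fewer runs means better RLE compression."""
    return 1 + sum(1 for a, b in zip(column, column[1:]) if a != b)
```

For 2-bit rows the resulting order is (0,0), (0,1), (1,1), (1,0): each adjacent pair differs in exactly one bit. An improvement factor like the excerpt's would then be the ratio of the compressed sizes before and after reordering.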