### Table 2. Number of samples needed to distinguish Pomaranch Version 2 according to Theorem 1.

"... In PAGE 10: ...788. Using Theorem 1 for different values of p gives Table 2. For the 80-bit variant the computational complexity and the number of samples are 2^56.... ..."

### Table 2: Proof of Theorem 3.2.

"... In PAGE 5: ... Let c_{S,ε} and c_{X,ε} refer to the closest-gridpoint functions of c_S and c_X, respectively. We now explain the chain of inequalities shown in Table 2 needed for the proof. Note that by the sample size given in the statement of the theorem we have uniform convergence for each f ∈ F_{ε/6}.... In PAGE 5: ... Thus the sample and true costs for each gridpoint (or ε-net) clustering are close. This implies that the values (2) and (3), as well as the values (5) and (6), in Table 2 are close. Further, the (sample or true) cost of any clustering and its nearest gridpoint clustering differ by no more than ε/6, hence the values (1) and (2), the values (3) and (4), and the values (6) and (7) are close.... ..."

### Table 1 The duality of maximum entropy and maximum likelihood is an example of the more general

1996

"... In PAGE 10: ... This result provides an added justification for the maximum entropy principle: if the notion of selecting a model p* on the basis of maximum entropy isn't compelling enough, it so happens that this same p* is also the model which, from among all models of the same parametric form (10), can best account for the training sample. Table 1 summarizes the primal-dual framework we have established.... ..."

Cited by 614

### Table 1: Proof of Theorem 3.1.

"... In PAGE 4: ...onvergence (Lemma 3.1). The middle steps of the proof use properties of the type of clustering computed. The sequence of steps is shown in Table 1. The rows of the table correspond to the sample and true cost, and the columns correspond to the different clusterings.... In PAGE 4: ... We apply uniform convergence (Lemma 3.1) to the two clusterings d̂_S and d_X to obtain that the values (1) and (2), as well as the values (5) and (6), in Table 1 are close. Observe that the sample cost of d̂_S is within a factor of α of d_S, since we ran an α-approximation algorithm on the sample S, hence the inequality between (2) and (3) in Table 1.... ..."

### Table 1: Some sample proof times for areas of different sizes with square tiles. Absolute times could probably be improved by using a propositional theorem prover.

Cited by 32


### Table 1: Effective sample size m vs. the original sample size n, when θ = 0. The asymptotic ratio from the theorem is 0.25. These results are based on 10 runs. n = 50, 100, 200, 500.

"... In PAGE 9: ... To see how the effective sample size in (5) depends on n, we have done two simulations. The results from the first are in Table 1. In this case we have taken the true value θ to be 0 and let the number of samples range from 50 to 500.... ..."

2004

### Table 1: Symbols and Their Definitions. 3.3 Theorems. We postulate that the user has an "ideal" vector q in mind, and that the distance of the sample vectors x_i from this ideal vector q is a generalized ellipsoid distance. Our goal is to "guess" q and M to minimize the penalties. Obviously, important samples (i.e., samples with high goodness scores v_i) should have small distance from q. Thus, the problem is mathematically formulated as follows:

"... In PAGE 8: ... [Figure 2: Isosurfaces for Distance Functions — generalized ellipsoid, weighted Euclidean, and Euclidean] 3.2 Method. Table 1 gives a list of symbols used in the following discussion. The proposed distance function is D(x, q) = (x − q)^T M (x − q), (2) or, equivalently, D(x, q) = Σ_j Σ_k m_{jk} (x_j − q_j)(x_k − q_k), (3) where q = [q_1, ..., q_n]^T is the "ideal" point, an n-d query vector, x = [x_1, ..., x_n]^T is a feature vector that corresponds to an entry in a database, and 'T' indicates matrix transposition.... ..."
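The equivalence of forms (2) and (3) can be sketched in a few lines of NumPy; the matrix M, the query point q, and the feature vector x below are illustrative values (M chosen symmetric positive definite), not from the paper.

```python
import numpy as np

def ellipsoid_distance(x, q, M):
    """Matrix form (2): D(x, q) = (x - q)^T M (x - q)."""
    d = x - q
    return d @ M @ d

def ellipsoid_distance_sum(x, q, M):
    """Equivalent double-sum form (3): sum_j sum_k m_jk (x_j - q_j)(x_k - q_k)."""
    n = len(x)
    return sum(M[j, k] * (x[j] - q[j]) * (x[k] - q[k])
               for j in range(n) for k in range(n))

# Illustrative 2-d example (hypothetical values).
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # symmetric positive definite
q = np.array([1.0, 0.0])     # the "ideal" query point
x = np.array([2.0, 2.0])     # a feature vector from the database

d2 = ellipsoid_distance(x, q, M)
d3 = ellipsoid_distance_sum(x, q, M)
assert np.isclose(d2, d3)    # forms (2) and (3) agree
```

With M equal to the identity this reduces to squared Euclidean distance, and a diagonal M gives a weighted Euclidean distance, matching the three isosurface families in Figure 2.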
