### Table 7: Classification accuracy of predicting whether a reformulation will likely lead to a specialization or generalization.

2007

"... In PAGE 8: ...) as the change in the length of the query provides information about the type of reformulation. Table 7 gives the classification performance for these models. We found that the performance of models learned from cases generated from the URL and domain graph projections is about the same.... ..."

Cited by 1

### Table 1 gives rise to many new TCPs. These multiplications can be applied to any BCP or TCP A, B with zeros in the same positions. The multiplications in general lead to TCPs since P and Q are ternary sequences. Some of these new TCPs will have their zeros in the same positions, which means that we can apply these multiplications recursively.

1996

"... In PAGE 9: ... The first class of equivalences is shown in Example 2. For the second class let A1 = A, B1 = −B. Now P1 = ½(A1 + B1) = ½(A − B) = Q; Q1 = ½(A1 − B1) = ½(A + B) = P. Results: Table 1 shows some results obtained. Shorter multiplications (m = 3, 4) were a matter of seconds or minutes on the computer while longer multiplications (m = 7) took several CPU-days of computer time.... In PAGE 10: ... [Table 1: multiplications obtained via computer, for m = 3 through 7; the multi-column layout of the table did not survive extraction.] ..."
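The equivalence in the excerpt maps a pair A, B to the ternary sequences P = ½(A + B) and Q = ½(A − B). A minimal sketch of that mapping, using a standard length-2 binary pair as input rather than any example drawn from Table 1:

```python
# Sketch (not from the paper): forming the ternary sequences P and Q
# from a binary pair A, B via P = (A + B)/2 and Q = (A - B)/2.
# Where A and B agree the entry is +-1; where they differ it is 0,
# so P and Q are ternary, as the excerpt notes.
A = [1, 1]
B = [1, -1]

P = [(a + b) // 2 for a, b in zip(A, B)]
Q = [(a - b) // 2 for a, b in zip(A, B)]

print(P, Q)  # [1, 0] [0, 1]
```

Note that P and Q have their zeros in complementary positions here, which is the property that lets the paper's multiplications be applied recursively when zeros instead coincide.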

Cited by 3

### Table 1. Peak throughput of three network applications for leading general-purpose processors with different internal bus bandwidths. The external (e.g., PCI) bus is assumed to be 64 bits wide and operates at 133 MHz with a 1066 MBytes/sec bandwidth.

"... In PAGE 6: ... Then the optimistic upper bound on throughput for each application (in MBytes/sec) is given as: (Throughput)IP = Be/2 (11) (Throughput)HTTP = BiBe/(5Be + Bi) (12) (Throughput)RTP = BiBe/(4Be + Bi) (13) We can use these upper bounds on throughput for several leading general-purpose microprocessors to calculate peak possible data throughput. These calculations are listed in Table 1. Using these calculations, we can conclude that all of the leading microprocessor-based systems are capable of delivering more than 2 Gbytes/sec throughput for all three applications.... In PAGE 7: ...have noticed, cache miss rates for some network applications may even be higher than for typical compute-intensive applications due to spatial locality of accessing blocks of contiguous data [10]. Although the peak performance estimates presented in Table 1 indicate that using a general-purpose processor based server for high-throughput network infrastructure devices is possible, there are some technical challenges that remain to be overcome. Eliminating excessive software overhead on a general-purpose computing platform is a significant challenge.... ..."
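The three bounds quoted in the excerpt (Eqs. 11–13) can be evaluated directly. A minimal sketch, taking the external bus bandwidth B_e from the caption (1066 MBytes/sec) and an illustrative internal bus bandwidth B_i that is not a figure from Table 1:

```python
# Sketch of the throughput upper bounds in Eqs. 11-13 of the excerpt.
# Be = external (PCI) bus bandwidth from the caption; Bi below is an
# illustrative internal bus bandwidth, not a value from Table 1.
def throughput_bounds(Bi, Be=1066.0):  # all in MBytes/sec
    ip = Be / 2                        # (11) IP forwarding
    http = Bi * Be / (5 * Be + Bi)     # (12) HTTP serving
    rtp = Bi * Be / (4 * Be + Bi)      # (13) RTP streaming
    return ip, http, rtp

# e.g. an internal bus four times as fast as the external one
ip, http, rtp = throughput_bounds(Bi=4 * 1066.0)
print(f"IP {ip:.0f}  HTTP {http:.0f}  RTP {rtp:.0f} MBytes/sec")
```

As the formulas suggest, the IP bound depends only on the external bus, while the HTTP and RTP bounds saturate toward Be/5 and Be/4 per crossing as Bi grows, which is why the paper's >2 Gbytes/sec conclusion requires the much larger internal bandwidths of the processors in Table 1.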

### Table 3.1: Peak throughput of three network applications for leading general-purpose processors with different internal bus bandwidths. The external (e.g., PCI) bus is assumed to be 64 bits wide and operates at 133 MHz with a 1066 MBytes/sec bandwidth.


2003

### Table 6.1 Well-defined feedback modules involving negative, positive, or both types of regulation. In general, an increasing number of variables and more complex connectivity lead to richer dynamics.

### Table 1: Leading Indicator Lags for GDP

"... In PAGE 10: ...Table 1... In PAGE 13: ... s is the standard error of the regression. b_i represents the coefficient on the i-th regressor (not the coefficient on the i-th lag; see Table 1 for the appropriate lags). Standard errors are given inside parentheses.... In PAGE 14: ...210 .558 Notes: b1_i and b0_i correspond to regime 1 and regime 0 respectively, with i = 0 giving the constant transition probability while i = 1 is the lagged leading indicator coefficient (see Table 1 for appropriate lags). Standard errors are given inside parentheses.... In PAGE 15: ...194 .547 Notes: b1_i and b0_i correspond to regime 1 and regime 0 respectively, with i = 0 giving the constant transition probability while i = 1 is the lagged leading indicator coefficient (see Table 1 for appropriate lags). Standard errors are given inside parentheses.... ..."

### Table 4. Table K.3 for luminance DC coefficients in the JPEG Standard. In this case, the prefix templates may be chosen such that each is either zero or a string of ones. The template-matching procedure then reduces to counting the number of leading ones (or leading zeros, for zero-leading tables). This procedure is in general very efficient for the Huffman tables used in video and image standards. However, minimum-variance Huffman tables are frequently encountered in most audio standards, and for these the templates will be memory-inefficient.
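The decoding shortcut described in the caption can be sketched as follows: when every code prefix is a run of ones, the decoder locates the matching prefix template by counting leading ones in the bit buffer instead of walking the code tree bit by bit. The table below is purely illustrative; it is not JPEG Table K.3.

```python
# Sketch of leading-ones template matching for a one-leading Huffman table.
# The template set here is hypothetical, chosen only so that the count of
# leading ones uniquely selects a prefix template.
def count_leading_ones(bits: str) -> int:
    """Number of consecutive '1' bits at the start of the buffer."""
    n = 0
    for b in bits:
        if b != '1':
            break
        n += 1
    return n

# Hypothetical one-leading table: each prefix is k ones followed by a 0,
# so the leading-ones count k picks the template directly.
templates = {0: '0', 1: '10', 2: '110', 3: '1110'}

buffer = '11100101'          # next bits in the stream
k = count_leading_ones(buffer)
print(templates[k])          # '1110'
```

For a zero-leading table the same routine applies with the roles of '0' and '1' swapped, which is the symmetric case mentioned in the caption.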

### Table 5: Dependence of the scaling-law exponent, m, on the distribution of agent preferences. Overall, the general power-law character of the size distributions remains, although changes in the distribution of preferences have a significant effect. CES preferences yield power-law exponents comparable to those previously obtained, leading to the conclusion that the general character of the model does not depend sensitively on the functional forms employed.

1999

"... In PAGE 7: ... Table 5: Dependence of the scaling law exponent, m, on the distribution of agent preferences.... ..."

### Table 1. Leading terms in the computational and memory costs. These are associated with the two-term and general versions of the MFLAP and PFLAP algorithms for both computations (A) and (B). For (B) we only include the increment to go from (A) to (B). See text for more details.

"... In PAGE 6: ... Since we can overlap these two functions, only 2ℓ − 1 storage is required. Below in Table 1 we summarize these complexity and storage results. In Table 1 the "general" method assumes optimization appropriate to the general case, while the "two-term" method assumes we save work and space as mentioned above.... ..."

### Table 13: Reducing the set of critical pairs. The table shows that the additional effort spent on reducing all the unselected equations held by the system does not, in general, lead to a significant speedup. As the times tend to get smaller when the intervals grow, we believe that the small number of critical pairs that could be reduced, reweighted, or removed does not justify the effort. However, there are situations in which this approach can result in significant speedups: sometimes the system produces a rule or equation that diminishes the set of rules and equations heavily via interreduction. In these cases the rewrite relation has gained so much additional strength that many critical pairs can be reduced and reweighted, and thus a significant reordering is enforced. For example, ra3 was solved faster in the settings "r 50" resp. "r 100". Another result that can be drawn from Table 13 is that applying IRP in order to save memory by deleting critical pairs whenever they can be joined will not lead to a significantly smaller size of the proof process. Compared with the huge number of

1996

"... In PAGE 46: ... that many unselected equations might have become joinable by now and thus the number of equations that have to be held in the set can be reduced significantly, inducing less memory consumption. Have a look at Table 13, where the letters in the leftmost column correspond to Table 11, whereas the numbers state the interval: res 50 means that IRP was executed... ..."

Cited by 12