### Table 5.3: Comparison on the average flow time per order (in minutes) by a single bucket brigade and zone-picking with different bucket sizes

2005

Cited by 1

### Table 2: Some Methods for Bucket Interaction

1999

"... In PAGE 9: ...dynamic presentation templates that can exploit known semantics during presentation. Table 2 provides a glimpse of bucket interaction. These methods are all invoked on a single test bucket filled with typical NASA data, but they could be invoked on any bucket.... In PAGE 9: ...resentation. Table 2 provides a glimpse of bucket interaction. These methods are all invoked on a single test bucket filled with typical NASA data, but they could be invoked on any bucket. For example, all the methods in Table 2 could be invoked on: http://www.... ..."

Cited by 7

### Table 1: Maximum number of buckets that can be stored (dimensionality = 3). Columns: Memory size; STHoles (single-precision); STHoles (double-precision); STHoles+

2006

"... In PAGE 12: ...nd each bucket is specified by a pair of coordinates (i.e., the start point and end point of the major diagonal), the total amount of memory required to specify the location of a bucket is 2d⌈log₂ k⌉ bits. Table 1 compares STHoles and STHoles+ in terms of the number of buckets that can be stored within the same amount of memory, assuming a three-dimensional data space. The calculations are based on a quantization resolution of 256 (the default value used in our experiments), a single-precision floating-point number size of 32 bits, and a double-precision floating-point number size of 64 bits.... ..."
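The arithmetic behind this excerpt is easy to reproduce. The sketch below assumes each bucket's location costs 2d coordinates of a given bit width (2d⌈log₂ k⌉ bits when coordinates are quantized to resolution k) and ignores any per-bucket frequency counters; the 8 KB memory budget is purely illustrative, not from the paper.

```python
import math

def buckets_storable(memory_bits: int, d: int, coord_bits: int) -> int:
    """Number of bucket locations that fit in `memory_bits`.

    Each bucket is specified by the two endpoints of its major
    diagonal, i.e. 2*d coordinates of `coord_bits` bits each.
    """
    return memory_bits // (2 * d * coord_bits)

d = 3                  # dimensionality, as in the paper's Table 1
k = 256                # quantization resolution used by STHoles+
memory = 8 * 1024 * 8  # an illustrative 8 KB budget, in bits

single = buckets_storable(memory, d, 32)   # single-precision floats
double = buckets_storable(memory, d, 64)   # double-precision floats
quantized = buckets_storable(memory, d, math.ceil(math.log2(k)))

print(single, double, quantized)  # quantized coordinates fit ~4x more
```

With k = 256 each coordinate needs only 8 bits instead of 32, so the quantized representation stores four times as many buckets as single precision in the same memory.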

### Table 1: Hash Lookup Performance for selecting distinct single-column keys (20M tuples, 512 buckets). Columns: Pentium4; Itanium2

"... In PAGE 4: ... 2.5 Micro-Benchmarks Table 1 shows lookup performance on our 3GHz Pentium4 Xeon (16KB L1, 1MB L2) and 1.3GHz Itanium2 (16KB L1, 256KB L2).... ..."

### Table 2: BIG BUCKETS MODELS

1998

"... In PAGE 25: ... For bc-prod, we call one round of the specialised inequalities, and then one round as for bc-opt. Computation on Single Level BB Instances Results for the BB instances are presented in Table 2. For each of the Tables 2 - 5, the column headings are as follows: instance identifies the instance, code identifies the system used in order to solve the problem, LP is the initial LP value before adding cuts at the root node of the Branch-and-Bound algorithm; XLP is the LP value after adding cuts at the root node of the Branch-and-Bound algorithm, IP is the value of the best feasible solution found within a time limit of 2 hours for all except the tr and ches problems (if there is no proof of optimality for this solution, this is indicated with a *), Secs is the CPU time in seconds (if there is no proof of optimality for this solution, this is indicated with a *), #cuts add is the total number of cuts added at the top node in the matrix, #cuts del is the total number of cuts deleted at the top node from the matrix, Gap is the final duality gap based on the value of the best feasible solution IP and the best dual bound (DB) available.... In PAGE 25: ... For a minimization problem, gap = (IP − DB)/IP × 100%. We now briefly discuss the results in Table 2. The first four instances are easily solved.... ..."

Cited by 4

### Table 5: Elapsed time and time spent on merging in seconds to construct a segmented word-level inverted file with the multiple-bucket single-pass inversion approach for the reference collection Web X for different values of b using burst tries to organise the in-memory buckets. The first table entry with b = 1 refers to the times measured with the original single-pass inversion approach. Main memory usage is limited to 64 Mb and the maximal size of each input buffer during merging is 2.5 Mb.

2003

"... In PAGE 27: ... We do not use hashing to organise buckets since the storage requirements for b hash tables would amount to a significant portion of the available main memory degrading overall performance. Table 5 summarises our experimental results. As shown, using multiple buckets slightly de-... ..."

Cited by 22

### Table 3. Data Bucket Recovery Times per Iteration (in ms).

2003

"... In PAGE 6: ...able 2: Logarithms for GF (256) ... 27 Table 3.... In PAGE 47: ... A single iteration recovers all the missing data records in a record group. Table 3 shows the results of some of our experiments. Parameter k refers to the number of recovered data buckets.... ..."

### Table 1: The number of bonds available in each sector-by-rating bucket on May 31, 2001.

"... In PAGE 4: ...RISK MODEL AND DATA We model credit risk using a multi-factor approach, as follows. Starting with a pool of investment grade bonds denominated in a single currency, we partition the pool into buckets comprised of all bonds sharing the same rating and sector classification (see Table 1). These buckets define our factors: Financial AAA, Utility A, etc.... ..."
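The partitioning step described in this excerpt can be sketched as a simple grouping by (sector, rating) key. The bond records below are made-up placeholders, not data from the paper; the point is only that each non-empty bucket such as "Financial AAA" becomes one factor of the risk model.

```python
from collections import defaultdict

# Hypothetical bond records: (identifier, sector, rating).
bonds = [
    ("B1", "Financial", "AAA"),
    ("B2", "Utility", "A"),
    ("B3", "Financial", "AAA"),
    ("B4", "Industrial", "BBB"),
]

# Partition the pool into sector-by-rating buckets; each bucket
# defines one factor (Financial AAA, Utility A, ...).
buckets = defaultdict(list)
for ident, sector, rating in bonds:
    buckets[(sector, rating)].append(ident)

for (sector, rating), members in sorted(buckets.items()):
    print(f"{sector} {rating}: {len(members)} bond(s)")
```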

### Table 1 presents creation times for a parity bucket (PB) of 31 250 records.

2004

"... In PAGE 2: ... In any case, our experiments show that the processing overhead of Reed-Solomon is small. (P) 0001 0001 0001 0001 0001 eb9b 2284 9e44 0001 2284 9e74 d7f1 (Q) 0000 0000 0000 0000 0000 5ab5 e267 784d 0000 e267 0dce 2b66 Table 1: Our demo matrices P and Q for m = 4 and k = 3. In order to reconstruct lost records in a record group, we gather the columns of P corresponding to available records in a matrix H, invert H with the Gaussian algorithm, multiply each available record with a coefficient of H and XOR the results together to obtain a missing record.... In PAGE 3: ... The recovery of a single data bucket (DB) uses the first parity bucket and consequently the XOR decoding only. The first line of Table 1 presents this case.... ..."

Cited by 5
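The XOR-only decoding path mentioned in the excerpt above (recovery of a single data bucket from the first parity bucket, with no Galois-field multiplications) can be sketched as follows. The record contents and the group size k = 3 are invented for illustration.

```python
from functools import reduce

def xor_records(records):
    """XOR equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), records)

# Hypothetical record group: k = 3 data records plus an XOR parity record.
data = [b"\x01\x02", b"\x10\x20", b"\x0f\x0f"]
parity = xor_records(data)  # first parity bucket: plain XOR of the data

# If one data record is lost, XORing the survivors with the parity
# record reconstructs it -- no Galois-field arithmetic needed.
lost = data[1]
recovered = xor_records([data[0], data[2], parity])
print(recovered == lost)  # True
```

Recovery from any *other* parity bucket would additionally need the Reed-Solomon multiplications the excerpt describes; the XOR path above is only the first-parity special case.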