### Table 1: Documents in the NTCIR-3 Patent Retrieval Collection.

in ABSTRACT

"... In PAGE 2: ... However, those abstracts can also be used as target documents, because relevant abstracts can be identified from relevance assessments performed for applications. Table 1 shows the overview of patent documents in the NTCIR-3 Patent Retrieval Collection. ..."

### Table 5: Relationship between Transcript Accuracy and Retrieval Performance

2002

"... In PAGE 15: ... we have compiled limited ground truth. Table 5 shows the results: a consistently increasing MAP with increasing transcript % Correct, which justifies our greater investment of effort in ASR this year. However, this result also suggests that we may not have reached an error rate at which retrieval is comparable to retrieval from ground truth transcriptions. ..."

Cited by 8
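The MAP figure the snippet reports can be reproduced from the standard definition of mean average precision: for each query, average the precision at the rank of each relevant document, then average across queries. A minimal sketch (function and variable names are illustrative, not from the paper):

```python
def average_precision(ranked, relevant):
    """Non-interpolated average precision for one query:
    mean of precision@k over the ranks k where a relevant item appears."""
    hits, score = 0, 0.0
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            score += hits / k
    return score / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """MAP over a list of (ranked_list, relevant_set) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)
```

Computing MAP separately on runs bucketed by transcript % Correct gives the accuracy-vs-retrieval relationship the table describes.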

### Table III. Timings (with XSB and tabling) for retrieval with access control after normal and after aggressive specialisation.

2006

Cited by 1

### Table 12: SVDPACKC Sparse Matrix Test Suite. IR = Information Retrieval, LP = Linear Programming.

"... In PAGE 37: ... 5 SVDPACKC Workstation Benchmarks. In this section, we present sample SVDPACKC benchmarks on workstations such as the Macintosh II/fx and Sun-4/490. Model SVD problems using the sparse matrix test suite defined in Table 12 are solved. These benchmarks illustrate the typical elapsed user CPU time expended by the 8 SVDPACKC programs when computing several of the largest singular triplets of real sparse matrices arising from applications such as information retrieval. ... In PAGE 37: ... 5.1 Sparse Matrix Test Suite. The 29 matrices listed in Table 12, which arise from information retrieval and linear programming applications, were obtained from Apple Computer Inc., Cupertino. ... In PAGE 37: ... The 16 remaining sparse rectangular matrices were extracted from a set of linear programming test problems compiled at Stanford University [23]. From Table 12, we can see that all of these matrices are less than 1% dense. We note that r and c are the average number of nonzeros per row and column, respectively. ... In PAGE 37: ... The Density of each sparse matrix listed in Table 12 ... In PAGE 42: ... We also provide tabulated results for both the Macintosh II/fx and Sun-4/490 in Tables 14 through 18 in Appendix A (Section 8), along with the number of approximated singular triplets, p, having residual norms (4) no larger than 10^-6. Figures 4 through 7 (and Tables 14 through 17) reflect timings using the IR matrices from Table 12, while Figure 8 (and Table 18) show elapsed user CPU times for sis2 and las2 on the 16 LP matrices from Table 12. ... In PAGE 43: ... vdin.bench file5. As observed in [4] and [5], las2 is by far the fastest sequential method for computing several of the largest singular triplets of large sparse matrices. This, of course, assumes there is no loss of accuracy in approximating eigenpairs of the matrix A^T A, which is the case for the matrices comprising our test suite in Table 12. Among competitive Lanczos-based SVDPACKC methods for computing several of the largest singular triplets of the IR matrices, las2 is on average 5 and 9 times faster than las1 and bls2, respectively, on the Macintosh II/fx. ..."
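SVDPACKC is a C package from the early 1990s; as a rough modern analogue of what las2 does, SciPy's `svds` computes several of the largest singular triplets of a sparse matrix with an iterative (Lanczos/Arnoldi-type) solver. A sketch on a hypothetical random matrix matching the test suite's under-1%-density regime (the per-row and per-column nonzero averages mirror the caption's r and c columns; the matrix itself is made up for illustration):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

# Hypothetical stand-in for one test-suite matrix:
# sparse, rectangular, well under 1% dense.
A = sparse_random(1000, 500, density=0.005, random_state=0).tocsr()

nnz_per_row = A.nnz / A.shape[0]                 # the caption's r (avg nonzeros/row)
nnz_per_col = A.nnz / A.shape[1]                 # the caption's c (avg nonzeros/col)
density = A.nnz / (A.shape[0] * A.shape[1])      # the Density column

# Several of the largest singular triplets, the task las2 benchmarks.
U, s, Vt = svds(A, k=6, which='LM')
s = np.sort(s)[::-1]  # svds returns singular values in ascending order
```

As in the quoted passage, iterating on A^T A (or an equivalent operator) is accurate here precisely because only the largest singular values are requested.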


### Table 1: Normalized recall using 3-grams (normalized recall = 1 for perfect retrieval, > 1 otherwise).

1997

Cited by 6
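The caption's convention (exactly 1 for perfect retrieval, greater than 1 otherwise) is consistent with a rank-ratio formulation: the sum of the ranks at which the relevant items were actually retrieved, divided by the ideal sum 1 + 2 + ... + n. This formulation is an assumption made to match the stated convention; the paper's exact definition may differ.

```python
def normalized_recall(ranks, n_relevant):
    """Rank-ratio normalized recall (assumed formulation, not verified
    against the cited paper): sum of the ranks of the retrieved relevant
    items over the best-possible sum 1 + 2 + ... + n_relevant.
    Equals 1 for perfect retrieval and exceeds 1 otherwise."""
    ideal = n_relevant * (n_relevant + 1) // 2
    return sum(ranks) / ideal
```

For example, retrieving three relevant items at ranks 1, 2, and 5 instead of 1, 2, and 3 gives 8/6 ≈ 1.33.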

### TABLE 3. Precision improvements over stems-only retrieval based on TREC-5 data (short queries vs. long queries)

### Table 1: Maximal and average precision rates for the retrieval by the crude fuzzy normalized color histogram (2) with various proposed fuzzy set distances.

"... In PAGE 4: ... The images were described by their color distribution, expressed as a fuzzy histogram. Table 1 presents the maximal (at recall 1) and average precision of the retrieval by the ... ..."

### Table 6: The average non-interpolated precision of Chinese-to-English cross-lingual MEAD summary retrieval without considering context.

"... In PAGE 15: ... When different window size and passage size values are selected, performance of context-based cross-lingual retrieval always outperforms cross-lingual retrieval without context as well as mono-lingual retrieval at all the interpolated recall levels. <Table 6 should be inserted here.> <Table 7 should be inserted here.> ... In PAGE 15: ... Apart from retrieving full-length documents, we also conducted experiments on retrieval of MEAD automated summaries. Table 6 depicts the Chinese-to-English cross-lingual MEAD summary retrieval performance without considering context in the query translation process. The precision value is 0. ... ..."
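The snippet contrasts non-interpolated precision with performance "at all the interpolated recall levels." The standard 11-point scheme interpolates precision at recall 0.0, 0.1, ..., 1.0, taking at each level the maximum precision achieved at any recall at or above it. An illustrative sketch, not the paper's code:

```python
def interpolated_precision_at_recall(ranked, relevant, levels=None):
    """Precision interpolated at the standard 11 recall levels:
    at each level, the max precision at any recall >= that level."""
    if levels is None:
        levels = [i / 10 for i in range(11)]
    hits = 0
    points = []  # (recall, precision) observed at each rank
    for k, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
        points.append((hits / len(relevant), hits / k))
    return [max((p for r, p in points if r >= lvl), default=0.0)
            for lvl in levels]
```

Averaging these 11 values per query gives an interpolated summary figure, whereas the table's non-interpolated precision averages precision only at the ranks where relevant documents actually occur.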

### Table 12: The average non-interpolated precision of English-to-Chinese cross-lingual MEAD summary retrieval without considering context.

"... In PAGE 17: ... recall level is greater than 0.5. We also conducted experiments on English-to-Chinese retrieval of MEAD summaries. Table 12 depicts the cross-lingual MEAD summary retrieval performance without considering context in the query translation process. ... In PAGE 17: ... The compression ratio of the summary is 30%. <Table 12 should be inserted here.> ... ..."