Results 1 - 10 of 23,831
Table 4: Average precision of individual retrieval systems. Since the ranges of similarity values of different retrieval results are quite different, we normalize each retrieval result before combining them. Each retrieval result's similarity values are mapped to [0,1] with the following formula [Lee 97].
"... In PAGE 6: ... We randomly pick seven ranked lists for our fusion experiments. The tags and average precision are listed in Table4 . It is noticed that the average precision is similar except HIN300.... In PAGE 8: ...usion results on the same data set are listed in Table 7. The last row is the average precision of all combination pairs. Since the average precision of the seven retrieve results is 0.3086 (see Table4 ), each of these three fusion methods has improved the average precision significantly.... ..."
Table 1: Retrieval accuracy: for each rank range and each measure we indicate the percentage of test images for which the rank of the highest ranking correct match was in the given range. 2-step stands for the two-step retrieval algorithm described in Section 5.
2002
"... In PAGE 16: ...anking correct match that was retrieved for that image. 1 is the highest possible rank. A perfect similarity measure, under this definition of accuracy, would always assign rank 1 to one of the correct matches for every test image; it would never assign rank 1 to an incorrect match. Table1 shows the distribution of the highest ranking correct matches for our test set. We should note that, although the accuracy using the chamfer measure is comparable to the accuracy using the two-step retrieval algorithm, the two-step algorithm is about 100... ..."
Cited by 30
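As a rough illustration of the accuracy measure used in this table (the distribution, over rank ranges, of the highest-ranking correct match per test image), a small Python sketch is given below; the rank ranges and variable names are assumptions for illustration only.

def rank_range_accuracy(ranked_lists, correct_matches, ranges):
    # ranked_lists[i]: database views ordered best-first for test image i.
    # correct_matches[i]: the set of views that count as correct for image i.
    counts = {r: 0 for r in ranges}
    for result, correct in zip(ranked_lists, correct_matches):
        best = next((pos for pos, view in enumerate(result, start=1)
                     if view in correct), None)
        if best is None:               # no correct match retrieved at all
            continue
        for lo, hi in ranges:
            if lo <= best <= hi:
                counts[(lo, hi)] += 1
                break
    total = len(ranked_lists)
    return {r: 100.0 * c / total for r, c in counts.items()}

# Example rank ranges: exact hit, top 10, top 100.
example_ranges = [(1, 1), (2, 10), (11, 100)]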
Table 1: Retrieval accuracy: for each rank range and each measure we indicate the percentage of test images for which the rank of the highest ranking correct match was in the given range. 2-step stands for the two-step retrieval algorithm described in Section 4.
"... In PAGE 5: ... 1 is the highest possible rank. Table1 shows the distribution of the highest ranking cor- rect matches for our test set. We should note that, although the accuracy using the chamfer measure is comparable to the accuracy using the two-step retrieval algorithm, the two-step algorithm is about 100 times faster than simply applying the chamfer distance to each database view.... ..."
Table 4: Retrieval results.
2000
"... In PAGE 13: ... We assume that a video location depicting the search text is retrieved correctly if at least one frame of the frame range has been retrieved in which the query text appears. Table4 depicts the measured average values for recall and precision. They are calculated from the measured recall and precision values, using each word that occurs in the video samples as a search string.... ..."
Cited by 25
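A minimal sketch of the evaluation criterion described in this snippet, assuming a word occurrence counts as retrieved if at least one returned frame lies inside its frame range; how precision is scored per frame is an additional assumption, not taken from the paper.

def recall_precision(retrieved_frames, true_ranges):
    # retrieved_frames: frame numbers returned for the query word.
    # true_ranges: (start, end) frame ranges where the word actually appears.
    hit_ranges = [(s, e) for (s, e) in true_ranges
                  if any(s <= f <= e for f in retrieved_frames)]
    correct_frames = [f for f in retrieved_frames
                      if any(s <= f <= e for (s, e) in true_ranges)]
    recall = len(hit_ranges) / len(true_ranges) if true_ranges else 0.0
    precision = (len(correct_frames) / len(retrieved_frames)
                 if retrieved_frames else 0.0)
    return recall, precision

# Example: both occurrences are hit, and 2 of the 3 returned frames
# fall inside a true range.
print(recall_precision({12, 105, 300}, [(10, 20), (100, 120)]))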
Table 4: Short queries with w = 0.8 for TREC data. A summary of the results from Tables 2 to 5 is given as follows. 1. The method gives very good retrieval effectiveness for short queries. For the Stanford data, the percentages of the m most relevant documents retrieved range from 96% to 98%; the corresponding figures for the TREC data are from 88% to 96%. Recall that the short queries
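Assuming the effectiveness figure quoted above is read as the percentage of the m most relevant documents that appear among the retrieved documents, a minimal sketch; the function name and this reading of the measure are assumptions, not taken from the paper.

def pct_top_m_retrieved(retrieved, relevant_ranked, m):
    # retrieved: ids of documents returned by the system.
    # relevant_ranked: document ids ordered from most to least relevant.
    top_m = relevant_ranked[:m]
    hits = sum(1 for d in top_m if d in set(retrieved))
    return 100.0 * hits / m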
Table 3: Retrieved thicknesses and quadratic errors obtained by FFM and PUMA for the very thin computer-generated films (thicknesses ranging from 10 nm up to 70 nm).
2002
"... In PAGE 7: ... Figure 8 shows the generated transmittances for each of these thicknesses. Table3 shows the estimated thickness obtained by PUMA and FFM for that films. Figures 9 and 10 show the estimated values for the refractive index and the absorption coefficient obtained by both methods.... ..."
Table 10: SDR Runs: Document Expansion, Algorithm-1. Average precision values are in the range from 0.5431 to 0.5600. There is an insignificant 3% gap in average precision between doing retrieval on closed captions vs. doing retrieval on ASR transcripts, which have up to 32% WER.
2000
"... In PAGE 10: ... doing conservative collection enrichment. For example, when doing retrieval from closed caption (second row in Table10 ), doing query expansion from print news yields an average precision of 0.5742, whereas our conservative query expansion yields only 0.... In PAGE 10: ...nly 0.5390, a noticeable drop. 4. Document expansion (see Table10 ) is consistently bene cial. For example, our reference run att-r1 would have been 0.... In PAGE 12: ... We submitted a second run based on this document expansion scheme and the results are shown in Table 11. Comparing this document expansion algorithm att-s2 (Algo-2) with our previous algorithm att-s1 (Algo-1) in Table10 , we do see that this algorithm yields consistently better results than our old algorithm. It in fact yields the best results for every transcription, which are shown in column-4 of Table 11 Run Average Precision Best gt;= Median lt; Median att-r1 0.... ..."
Cited by 23
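For reference, a minimal sketch of non-interpolated average precision as commonly used in TREC-style SDR evaluation; this is the standard textbook definition, not code from the cited work.

def average_precision(ranked_docs, relevant):
    # Average the precision at each rank where a relevant document occurs,
    # divided by the total number of relevant documents.
    hits, score = 0, 0.0
    for i, doc in enumerate(ranked_docs, start=1):
        if doc in relevant:
            hits += 1
            score += hits / i          # precision at this relevant document
    return score / len(relevant) if relevant else 0.0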