Results 1 - 10 of 16,535

Table 2. Average retrieval precisions for the plain and the enhanced phrase queries, and the plain and the enhanced Boolean queries for the three indexing techniques.

in Retrieval Effectiveness Of Various Indexing Techniques On Indonesian News Articles
by Mirna Adriani, W. Bruce Croft
"... In PAGE 5: ...The results, as shown in Table2 . demonstrate that applying the #1 proximity operator to the phrase queries decreases the average retrieval precision for all of the indexing techniques.... ..."

Table 2. Evaluation results of disambiguating queries generated by the DDQ module.

in Study On Spoken Interactive Open Domain Question Answering
by Chiori Hori, Takaaki Hori, Hideki Isozaki, Eisaku Maeda, Shigeru Katagiri
"... In PAGE 4: ...5. Evaluation results Table2 shows the evaluation results in terms of the appropriate- ness of DQs and the QA-system MRR. The results indicate that 49% of the DQs generated by the DDQ module based on recog- nition results were APPROPRIATE.... ..."

Table 5: Performance of Disambiguation Module on Ambiguous Calls

in unknown title
by unknown authors
"... In PAGE 6: ... (c) Reject, if the router could not form a sensi- ble query or was unable to gather sufficient in- formation from the user after its queries and routed the call to an operator. Table5 shows the number of calls that fall into each of the 5 categories. Out of the 157 calls, the router au- tomatically routed 115 of them either with or without disambiguation (73.... In PAGE 7: ...1 for unambiguous calls leads to the overall performance of the call router in Table 6. The table shows the number of calls that will be correctly routed, incorrectly routed, and rejected, if we apply the performance of the disambiguation module ( Table5 ) to the calls that fall into each class in the evaluation of the routing module (Section 5.1).... ..."

Table 7: Entity Disambiguation Results (Jaguar, Java)

in retrieval and Natural Language Processing. Previous
by Danushka Bollegala, Mitsuru Ishizuka
"... In PAGE 9: ...ive clustering explained in section 4.7. We manually ana- lyzed the snippets for queries Java (3 senses: programming language, Island, cofiee) and Jaguar (3 senses: cat, car, op- erating system) and computed precision, recall and F-score for the clusters created by the algorithm. Our experimental results are summarized in Table7 . Pro- posed method reports the best results among all the base- lines compared in Table 7.... In PAGE 9: ... Our experimental results are summarized in Table 7. Pro- posed method reports the best results among all the base- lines compared in Table7 . However, the experiment needs to be carried out on a much larger data set of ambiguous entities in order to obtain any statistical guarantees.... ..."

Table 3. Average retrieval precision (and performance drop compared to monolingual) of the English queries translated from the German, the Spanish, and the Indonesian queries for the automatic disambiguation and the expanded automatic disambiguation methods.

in Term Similarity-Based Query Expansion for Cross-Language Information Retrieval
by Mirna Adriani, C. J. Van Rijsbergen 1999
"... In PAGE 8: ...98%) Applying the query expansion technique in combination with the sense disambiguation technique resulted in slight performance improvements. As can be seen in Table3 , the improvements were 0.... ..."
Cited by 3

Table 8: Average precision and precision at low recall for word-by-word, sense1, sense1 with post-translation expansion, parallel corpus disambiguation, parallel corpus disambiguation with post-translation expansion, co-occurrence disambiguation, and co-occurrence disambiguation with post-translation expansion.

in Resolving Ambiguity for Cross-language Retrieval
by Lisa Ballesteros 1998
"... In PAGE 6: ... Expan- sion was carried out after translation of queries via either the sense1, PLC, or CO methods. Table8 shows average precision values for seven query sets. As in the previous section, Word-by-word translation is used as a baseline.... ..."
Cited by 122

Table 8: Average precision and precision at low recall for word-by-word, sense1, sense1 with post-translation expansion, parallel corpus disambiguation, parallel corpus disambiguation with post-translation expansion, co-occurrence disambiguation, and co-occurrence disambiguation with post-translation expansion.

in Resolving Ambiguity for Cross-language Retrieval
by Lisa Ballesteros, W. Bruce Croft 1998
"... In PAGE 6: ... Expan- sion was carried out after translation of queries via either the sense1, PLC, or CO methods. Table8 shows average precision values for seven query sets. As in the previous section, Word-by-word translation is used as a baseline.... ..."
Cited by 122

Table 2.a: Average precision of the Indonesian to English translation queries for the simple translation, phrasal translation, automatic disambiguation, and combined phrasal translation & automatic disambiguation methods.

in Phrase Identification in Cross-Language Information Retrieval
by Mirna Adriani, C. J. Van Rijsbergen 2000
"... In PAGE 5: ...1 Results As can be expected, the retrieval performance of the queries produced using the simple translation method for the two languages were very poor as compared to that of the equivalent monolingual method. As can be seen in Table2 .a and 2.... In PAGE 6: ...0992 0.0954 Table2 .b: Average precision of the Spanish to English translation queries for the simple translation, phrasal translation, automatic disambiguation, and combined phrasal translation amp; automatic disambiguation methods.... ..."
Cited by 2

Table 2.b: Average precision of the Spanish to English translation queries for the simple translation, phrasal translation, automatic disambiguation, and combined phrasal translation & automatic disambiguation methods.

in Phrase Identification in Cross-Language Information Retrieval
by Mirna Adriani, C. J. Van Rijsbergen 2000
"... In PAGE 5: ...1 Results As can be expected, the retrieval performance of the queries produced using the simple translation method for the two languages were very poor as compared to that of the equivalent monolingual method. As can be seen in Table2 .a and 2.... In PAGE 6: ...1696 0.1692 Table2 .a: Average precision of the Indonesian to English translation queries for the simple translation, phrasal translation, automatic disambiguation, and combined phrasal translation amp; automatic disambiguation methods.... ..."
Cited by 2

Table 2: Evaluation results of disambiguating queries generated by the DDQ module.

in Spoken Interactive ODQA System: SPIQA
by Chiori Hori, Takaaki Hori, Hajime Tsukada, Hideki Isozaki, Yutaka Sasaki, Eisaku Maeda
"... In PAGE 4: ... 3.4 Evaluation results Table2 shows the evaluation results in terms of the appropriateness of the DQs and the QA-system MRRs. The results indicate that roughly 50% of the DQs generated by the DDQ module based on the screened results were APPROPRIATE.... ..."