Results 11 - 20 of 99,242

Table 1. The TREC-7 collection for the ad-hoc task

in unknown title
by unknown authors
"... In PAGE 13: ... Notice that both types of matching are not one-to-one. Table 1 lists the numbers of matched templates in two grammars. The last row lists the frequencies of the matched ExtG templates in the Treebank.... In PAGE 14: ...9% 40.1% 100% Table 1: Matched templates and their frequencies t-match c-match matched unmatched total subtotal subtotal XTAG 158 339 497 507 1004 ExtG 81 134 215 2675 2890 frequency 78.6% 3.... In PAGE 16: ..., 1994), a subset of the Wall Street Journal and the Brown Corpus was parsed with the grammar and then subjected to a detailed error analysis. The results of the evaluation are shown in Table 1. Based on this evaluation the grammar was updated to handle errors caused due to #1, #2, #3, #7, #12, #13 and #14.... In PAGE 16: ... Rank No. errors Category of error #1 11 Parentheticals/appositives #2 8 Time NP #3 8 Missing subcat #4 7 Multi-word construction #5 6 Ellipsis #6 6 Not sentences #7 3 Gapless Relative clause #8 2 Funny coordination #9 2 VP coordination #10 2 Inverted predication #11 2 Who knows #12 1 Missing entry #13 1 Comparative #14 1 Bare infinitive Table 1: Results of Corpus-Based (WSJ and Brown) Error Analysis In addition to the corpus-based evaluation, the sentences (and phrases) of the TSNLP (Test Suites for Natural Language Processing) English Corpus (Lehmann et al., 1996) were also parsed using the XTAG grammar (Doran et al.... In PAGE 18: ... It is important to note here that there was very little overlap in the kinds of changes identified in the two kinds of evaluation. Compare the errors found after parsing corpora in Table 1 and the errors found after parsing a test-suite in Table 2. We get a similar variation in the evaluations conducted in this paper.... In PAGE 21: ... Test corpus Training corpus ILIAD platform (subpart for free and controlled indexing) Corpus-based acquisition of new terms Corpus-based recognition of terms and their variants FASTR: Candidate terms used by FASTR as reference terms on candidate terms produced by ACABIT Free indexing based on terms from the thesaurus Controlled indexing considered as a free indexing Candidate terms ACABIT: Table 1: The terminological part of the ILIAD system by proposing, for a given corpus, a list of candidate terms ranked, from the most representative of the domain to the least, using a statistical score. Candidate terms which are extracted from the corpus belong to a special type of cooccurrences: (i) the cooccurrence is oriented and follows the linear order of the text; (ii) it is composed of two lexical units which do not belong to the class of functional words such as prepositions, articles, etc.... In PAGE 32: ...) Interoperability can be accessed from text processing compliance existing thesauri standard relationships (part/whole, effect etc.) security read/write access rights Table 1: Specification of Properties of Functions from Task The above example demonstrates how functional properties of systems can be elaborated from a mapping of tasks to quality characteristics. In a next step, properties or attributes need to be mapped onto scales in order to allow the measurement of the extent to which a property has been fulfilled by a system under evaluation.... In PAGE 42: ... The TRECs include two main information retrieval tasks: the ad-hoc task and the routing task - see (Voorhees & Harman, 1999 & 2000) for descriptions. Table 1 indicates the collection used for the TREC-7 ad-hoc task.
Note that queries must be run against all the corpora at once (the collection as a whole).... In PAGE 73: ... Concerning the topics, about 50 topics were proposed for the first phase and more than 150 topics for the second campaign. Table 1 gives an exact overview of the volume of data for the two phases according to the providers, and Table 2 gives the number of topics: Le Monde INIST LRSA ELRA Phase 1 67 Mo 132 Mo - - Phase 2 66 Mo 154 Mo 4,5 Mo ... In PAGE 74: ... Table 1. The volume of data Le Monde INIST LRSA ELRA Phase 1 26 30 - - Phase 2 52 60 21 15 x 6 lg Table 2.... ..."
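
The ILIAD/ACABIT passage in the snippet above describes candidate terms as oriented cooccurrences of two adjacent lexical units that are not functional words, ranked from most to least representative by a statistical score. The sketch below illustrates that filter-and-rank idea only; the stop-word list and the raw-frequency ranking are assumptions for illustration, not ACABIT's actual statistical measure.

```python
from collections import Counter

# Toy sketch of the candidate-term extraction described in the snippet above:
# oriented cooccurrences of two adjacent non-functional words, ranked by a
# simple score. The stop-word list and raw-frequency ranking are assumptions
# for illustration only, not ACABIT's actual statistic.

FUNCTION_WORDS = {"the", "a", "an", "of", "in", "on", "for", "to", "and", "or", "with", "by"}

def candidate_terms(tokens):
    """Return oriented word pairs (left, right) of non-functional words, most frequent first."""
    pairs = Counter()
    for left, right in zip(tokens, tokens[1:]):
        if left.lower() in FUNCTION_WORDS or right.lower() in FUNCTION_WORDS:
            continue
        pairs[(left.lower(), right.lower())] += 1
    return pairs.most_common()

if __name__ == "__main__":
    text = "controlled indexing of the test corpus and free indexing of the training corpus"
    print(candidate_terms(text.split()))
```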

Table 1: Automatic Ad Hoc Results for 50 Queries

in TREC-7 Ad-Hoc, High Precision and Filtering Experiments using PIRCS
by K. L. Kwok, Kwok Grunfeld, M. Chan, N. Dinstl, C. Cool 1998
"... In PAGE 2: ... Some of these documents are ranked high and they are employed to expand the queries during pseudo-relevance feedback. Results and Discussion Our TREC-7 results for short and medium queries are summarized in Table 1 under the columns Title and Desc. These runs are named pirc8At and pirc8Ad respectively.... In PAGE 4: ... It seems preferable to do both. Comparing the results in Table 1 and 2, it can be seen that for this set of queries and documents, long queries are uniformly preferable in average precision to short (between 6% to 18% better), and medium queries performed... In PAGE 10: ...p.4-11. Method Training data Term selection Avg. number of terms Training method Retrieval (not1) none All terms from query subject to Zipf cutoff [3,10000] 32 Self training based on statistics Full ap89 and ap90 (not2) none same same same same (pir1) All relevant Term expansion ~250 220 Pircs network same (only for queries with training docs) (pir2) same Term expansion ~60 60 same same (gt1) Half for term selection Other half for training Ga selects From 300 terms 90 Ga selects terms for pircs same (gw1) same Term expansion ~100 80 Ga adjusts weights for pircs same (gw2) same Term expansion ~100 80 same same (ga) same ~300 terms 100 doubles triples and quadruples 410 Genetic algorithm 1500 documents retrieved by pir1 (bp) same same same backprop same Table 1... ..."
Cited by 3
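
The PIRCS snippet above relies on pseudo-relevance feedback: terms taken from the top-ranked documents of an initial retrieval are added to the query before a second pass. The rough sketch below shows that expansion step in its simplest form; the cut-offs and the counting-based term selection are illustrative assumptions and much simpler than the PIRCS network itself.

```python
from collections import Counter

# Rough sketch of pseudo-relevance feedback query expansion. The cut-offs and
# frequency-based term selection are illustrative assumptions; PIRCS itself
# uses a spreading-activation network, not this simple counting scheme.

def expand_query(query_terms, ranked_docs, top_docs=5, new_terms=10):
    """ranked_docs: initial retrieval results, each document given as a list of tokens."""
    counts = Counter()
    for doc in ranked_docs[:top_docs]:
        counts.update(t.lower() for t in doc)
    existing = {t.lower() for t in query_terms}
    expansion = [t for t, _ in counts.most_common() if t not in existing][:new_terms]
    return list(query_terms) + expansion

if __name__ == "__main__":
    docs = [["foreign", "minorities", "germany"], ["minorities", "policy", "germany"]]
    print(expand_query(["foreign", "minorities"], docs))
```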

Table use Ad-Hoc

in A Methodology for the Analysis and Design of Multi-Agent Systems using JADE
by Magid Nikraz, Giovanni Caire, Parisa A. Bahri

Table 1: Topics and associated data. Ad hoc retrieval topics, number of relevant documents, and average results for all runs.

in Enhancing access to the Bibliome: the TREC 2004 Genomics Track
by William R Hersh, Ravi Teja Bhupatiraju, Laura Ross, Phoebe Roberts, Aaron M Cohen, Dale F Kraemer 2006
"... In PAGE 7: ... Results A total of 27 research groups submitted 47 different runs. Table 1 shows the pool size, number of relevant documents, mean average precision (MAP), average precision at 10 documents, and average precision at 100 documents for each topic. (Precision at 100 documents is potentially compromised due to a number of topics having many fewer than 100 relevant documents and thus being unable to score well with this measure no matter how effective they were at ranking relevant documents at the top of the list.... In PAGE 7: ... (Precision at 100 documents is potentially compromised due to a number of topics having many fewer than 100 relevant documents and thus being unable to score well with this measure no matter how effective they were at ranking relevant documents at the top of the list. However, as noted in Table 1, the mean and median number of relevant documents for all topics was over 100 and, as such, all runs would be affected by this issue.) The results of the duplicate judgments for the kappa statistic are shown in Table 2.... In PAGE 8: ... Figure 2 shows the official results graphically with annotations for the first run statistically significant from the top run as well as the OHSU "baseline." As typically occurs in TREC ad hoc runs, there was a great deal of variation within individual topics, as is seen in Table 1. Figure 3 shows the average MAP across groups for each topic.... ..."

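The Genomics track snippet above reports mean average precision (MAP) and precision at 10 and 100 documents per topic. The sketch below shows how those measures are computed for a single ranked run; the toy run and relevance judgements are invented purely to illustrate the arithmetic.

```python
# Minimal sketch of the evaluation measures reported in the snippet above:
# average precision for one ranked run, precision at a cut-off, and MAP is
# simply the mean of average precision over topics. The toy relevance data
# below is invented purely to show the arithmetic.

def average_precision(ranked_ids, relevant_ids):
    hits, precisions = 0, []
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant_ids) if relevant_ids else 0.0

def precision_at(k, ranked_ids, relevant_ids):
    top = ranked_ids[:k]
    return sum(1 for d in top if d in relevant_ids) / k

if __name__ == "__main__":
    run = ["d3", "d7", "d1", "d9", "d2"]
    qrels = {"d3", "d1", "d5"}
    print(average_precision(run, qrels))   # (1/1 + 2/3) / 3 ≈ 0.556
    print(precision_at(5, run, qrels))     # 2/5 = 0.4
```
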
Table 1. Sample Trace Data for Ad-Hoc Routing (columns include Distance, Velocity, PCR, PCH, ...)

in Intrusion Detection Techniques for Mobile Wireless Networks
by Yongguang Zhang, Wenke Lee, Yi-an Huang
"... In PAGE 9: ... of mobile networks (i.e., the number of nodes/routes is not fixed). Table 1 shows some fictional trace data for a node. During the "training" process, where a diversity of normal situations are simulated, the trace data is gathered for each node.... ..."

Table 13. Data utilisation ratios of LOP-ALL using ad hoc selection methods compared to random sampling.

in Cambridge University Press Active Learning and Logarithmic Opinion Pools for HPSG Parse Selection
by Jason Baldridge, Miles Osborne 2006
"... In PAGE 25: ... It is also important to consider sequential selection, which is a default strategy typically adopted by annotators for many domains. Table 13 shows the results of testing LOP-ALL with sequential sampling and sampling by shorter sentences and longer sentences. Sampling by lower and higher ambiguity was for the most part on par with sampling by shorter and longer sentences, respectively.... ..."

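The snippet above compares sequential selection and ad hoc heuristics such as sampling by shorter or longer sentences against random sampling. The sketch below only illustrates those selection strategies over an unlabelled pool; the batch size and the token-list sentence representation are assumptions for illustration.

```python
import random

# Sketch of simple example-selection strategies for annotation: sequential,
# by sentence length (shorter or longer first), or random as the baseline.
# Batch size and the token-list representation are illustrative assumptions.

def select_batch(pool, method="random", batch_size=10, seed=0):
    if method == "sequential":
        return pool[:batch_size]
    if method == "shorter":
        return sorted(pool, key=len)[:batch_size]
    if method == "longer":
        return sorted(pool, key=len, reverse=True)[:batch_size]
    return random.Random(seed).sample(pool, min(batch_size, len(pool)))

if __name__ == "__main__":
    pool = [["a", "short", "one"], ["a", "much", "longer", "example", "sentence"], ["tiny"]]
    print(select_batch(pool, method="longer", batch_size=2))
```
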
Table 10: Automatic ad hoc results

in Okapi at TREC-3
by S.E. Robertson, S. Walker, S. Jones, M.M. Hancock-Beaulieu, M. Gatford 1995
"... In PAGE 12: ... citya2 is the same without expansion or passage retrieval. Table 10 gives the official City ad hoc results. citya1 did better than citya2 on 35 of the topics.... ..."
Cited by 264

Table 2: Comparison between ToXgene and dbgen, using ad-hoc queries.

in Toxgene: An extensible template-based data generator for XML
by Denilson Barbosa, Alberto Mendelzon, John Keenleyside, Kelly Lyons 2002
"... In PAGE 9: ... Table 1 shows the results of some queries from the TPC-H workload, the first three queries in the table return a fixed number of tuples. Table 2 shows the ad-hoc queries we ran. All random values in the TPC-H data set are generated using uniform probability distributions.... ..."
Cited by 11

Table 1: UDP payload packets sent between ad hoc nodes with WEP encryption

in Throughput Analysis of WEP Security in Ad Hoc Sensor Networks
by Mohammad Saleh, Iyad Al Khatib
"... In PAGE 4: ... The UDP payload starts at 32 B and is incremented by 32 B until the maximum data limit that can be sent over IEEE 802.11b with WEP, which is 2304 B (see Table 1). Table 1 and Table 2 show a difference between the maximum number of bytes we can send when WEP is enabled and when WEP is disabled, because there is an 8-byte increase in the payload, as shown in Figure 3.... ..."
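
The measurement setup in the snippet above steps the UDP payload from 32 B upward in 32 B increments until the 2304 B limit reported for IEEE 802.11b with WEP, with WEP adding 8 B per frame. The few lines below simply enumerate that payload series; the constants restate the figures from the snippet, and the 4 B IV plus 4 B ICV breakdown of the 8 B overhead is an assumption based on standard WEP framing rather than something stated in the snippet.

```python
# Enumerate the UDP payload sizes described in the snippet: 32 B steps up to
# the 2304 B limit reported for 802.11b with WEP. WEP_OVERHEAD_BYTES restates
# the 8 B figure from the text (commonly the 4 B IV header plus 4 B ICV trailer).

WEP_OVERHEAD_BYTES = 8
MAX_PAYLOAD_BYTES = 2304
STEP_BYTES = 32

payload_sizes = list(range(STEP_BYTES, MAX_PAYLOAD_BYTES + 1, STEP_BYTES))
print(f"{len(payload_sizes)} payload sizes, {payload_sizes[0]} B to {payload_sizes[-1]} B")
print(f"On-air data per frame with WEP: payload + {WEP_OVERHEAD_BYTES} B overhead")
```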