Results 1 - 10 of 16,401

Table 1. Automatically collected context cues

in The Connector service — predicting availability in mobile contexts
by Maria Danninger, Tobias Kluge, Erica Robles, Leila Takayama, Qianying Wang, Rainer Stiefelhagen, Clifford Nass, Alex Waibel 2006
"... In PAGE 8: ... A large number of different context clues were collected from the participants. Automatically collected context cues included PC activity and information about location in offices based on the analysis of video-streams from cameras installed in our research labs (see Table1 . Next to that, participants were asked to manually enter availability feed-back every 20 minutes, an overview of self-reported context cues is... ..."
Cited by 2

Table 6: Automatic stemming performance compared to the majority and unique baseline collections.

in Stemming Indonesian
by Jelita Asian, Hugh E. Williams, S. M. M. Tahaghoghi
"... In PAGE 5: ... However, the results do not show any change in relative performance between the schemes and we omit the results from this paper for compactness. 4 Results Table6 shows the results of our experiments using the majority and unique collections. The nazief scheme works best: it correctly stems 93% of word oc- currences and 92% of unique words, making less than two-thirds of the errors of the second-ranked scheme, ahmad2a.... ..."

Table 4: Gain G for 4 LSU segmentation methods performed on collection II, based on automatic shot segmentation V̂.

in
by unknown authors

Table 7.1: Table of how each type of role is referenced by the role manager. Roles referenced with soft references are automatically garbage collected if no entity references them.

in Chameleon: Development & reflections on role-oriented programming
by Supervisor Kasper Østerbye, Johannes Beyer, Kasper B. Graversen
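The soft-reference behaviour described in the caption above can be sketched in a few lines of Java. This is a minimal illustration only, not the Chameleon role manager's actual code; the Role class and the string-keyed map are assumptions made for the example.

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Roles held only through SoftReference become eligible for garbage
// collection once no entity keeps a strong reference to them.
class Role {
    final String name;
    Role(String name) { this.name = name; }
}

class RoleManager {
    // Illustrative bookkeeping: the manager never keeps roles alive on its own.
    private final Map<String, SoftReference<Role>> roles = new HashMap<>();

    void register(String key, Role role) {
        roles.put(key, new SoftReference<>(role));
    }

    // Returns the role if it is still reachable, or null if the collector
    // has already reclaimed it because nothing else referenced it.
    Role lookup(String key) {
        SoftReference<Role> ref = roles.get(key);
        return (ref == null) ? null : ref.get();
    }
}
```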

Table 1: A small fraction of the automatically extracted content summary for the PubMed collection.

in Extending SDARTS: Extracting Metadata from Web Databases and Interfacing with the Open Archives Initiative
by Panagiotis G. Ipeirotis, Tom Barry, Luis Gravano
"... In PAGE 5: ... The result of this process, which is presented in detail in [12], is a content summary that accurately reflects the contents and size of the web collection. Table1 reports a fraction of the content summary that we automatically generated for the PubMed collection. We can see that high-frequency words like cancer are representa- tive of the topic coverage of PubMed, unlike low-frequency words like basketball.... In PAGE 7: ... We can see that the words thesis and study have much higher frequencies than other words, like cancer, that do not correlate well with the contents of this collec- tion. By comparing this content summary with the one that we extracted from PubMed ( Table1 ), we can see that the word distribution can be used to distinguish between the two collections, which host documents of completely differ- ent type. For example, the word cancer in PubMed has high frequency, while the frequency of the same word in the thesis repository is really low since these theses do not focus on medical issues.... ..."

Table 2: Effectiveness performance of the manual run in comparison with the similar automatic run. All results relate to the GOV2 collection.

in unknown title
by unknown authors
"... In PAGE 5: ... Consequently, manual queries are in general longer than respective automatic queries. Table2 compares the effectiveness of using manual queries with that of using automatic queries. It is interesting to notice that although manual run does not improve MAP, it signi cantly increases R.... ..."

Table 1: Effectiveness performance in the automatic ad-hoc task. All results relate to the GOV2 collection.

in unknown title
by unknown authors
"... In PAGE 4: ... That is, the nal ranking is performed by sorting the documents using impact score as the primary sort key, and proximity score as a secondary sort key. Table1 shows the effectiveness performance of these ad-hoc runs. In terms of effectiveness, all of the runs had similar performance.... ..."

Table 9: Amount of corrections for over-segmentation and under-segmentation for 4 LSU segmentation methods performed on collection I, based on automatic shot segmentation V̂.

in
by unknown authors

Table 5 presents the results when all the fields of the document collection were used: the manual keywords and manual summaries in addition to the ASR transcripts and the automatic keywords.

in University of Ottawa’s Contribution to CLEF 2005, the CL-SR Track
by Diana Inkpen, Muath Alzghool, Aminul Islam
"... In PAGE 6: ...2338 0.2251 TDN Weighting scheme: mpc/ntn, Manual fields Table5 .Results of indexing all the fields of the collections: the manual keywords and summaries, in addition to the ASR transcripts.... ..."

Table 2: Examples of collected expressions (columns: common, semi-auto, manual)

in Collecting evaluative expressions for opinion extraction
by Nozomi Kobayashi, Kentaro Inui, Yuji Matsumoto, Kenji Tateishi, Toshikazu Fukushima 2004
"... In PAGE 5: ... We investigated the overlap between the human ex- tracted and semi-automatically collected expres- sions, finding that the semi-automatic collection covered 45% of manually collected expressions in the car domain and 35% in the game domain. Table2 shows examples, where common in- dicates expressions collected commonly in both ways, and semi-auto and manual are expres- sions collected only by each method. There are several reasons why the coverage of the semi-automatic method is not very high.... ..."
Cited by 7