Table 3: Distribution of the document features in the 100-entry annotated portion of the corpus. Starred entries denote metadata fields.
2002
"... In PAGE 5: ... We randomly picked 100 of the 2000 entries to annotate using this scheme. Table3 shows the expanded, 24 docu- ment feature set used in the markup. 6.... In PAGE 5: ... 6. Corpus attributes Table3 also lists distributional features of the tagged document features in the 100 annotated entries. The first column shows the number of times that the annotated fea- ture was used to mark information in the entries.... In PAGE 5: ... Short entries tended not to have any detail sen- tences, but as we examined entries of longer length, mostly details were being added. The data validates both prescriptive guidelines and our earlier work in showing that metadata fields (marked with stars in Table3 ) are important for summaries. Audience information, recommended by four of the five prescriptive guidelines, were shown to appear 12% of the time.... ..."
Cited by 3
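The feature-distribution counts this excerpt describes (per-feature usage counts and the share of entries containing each feature, e.g. audience information in 12% of entries) reduce to a simple tally. A minimal sketch in Python, assuming a hypothetical entry format of (feature, span) tuples; the paper's actual corpus format is not shown here:

```python
from collections import Counter

# Hypothetical structure: each annotated entry is a list of
# (feature_name, text_span) markup tuples; the real corpus format
# used in the paper is not shown in this excerpt.
entries = [
    [("title*", "..."), ("audience*", "..."), ("detail", "...")],
    [("title*", "..."), ("detail", "..."), ("detail", "...")],
]

usage = Counter()           # times a feature marks information anywhere
entry_presence = Counter()  # entries containing the feature at least once

for entry in entries:
    features = [name for name, _ in entry]
    usage.update(features)
    entry_presence.update(set(features))

for name in sorted(usage):
    share = 100.0 * entry_presence[name] / len(entries)
    print(f"{name}: used {usage[name]} times, in {share:.0f}% of entries")
```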
Table 3: Settings of the annotation-expansion-based runs. An X indicates that the corresponding technique is used. ABQE stands for annotation-based query expansion and ABDE for annotation-based document expansion. The ranking position is shown; a total of 474 runs were submitted for the IAPR-TC 12 Benchmark 2007 Collection.
"... In PAGE 8: ... In these runs document and query expansion was combined with the other techniques proposed in previous sections. The descriptions of the annotation based expansion (ABE) runs submitted to ImageCLEF2007 are shown in Table3 , together with their overall ranking position. Results with ABE are mixed.... ..."
Table 4: Summary of the results of the expert evaluation, first 10 results for 10 queries. The annotator labeled a document Disputable when it contained content about the gene but less about its function, or also about (many) other genes.
Table 3 shows the number of documents viewed per query, split by popularity and annotation. Users can distinguish documents that are popular among group users (higher group traffic) by the background colors of the human-figure icons, and positively annotated documents by the colors of the thermometer icons. Users selected and viewed about 1.3 times more documents when the retrieved results included documents that other group members had viewed before them. When the retrieved results carried positive social annotations, users selected and viewed 2.4 times more documents than when they did not. To summarize, users tried more items from their retrieved set when they saw higher-traffic or positively annotated items.
"... In PAGE 9: ... Table3 . Number of documents viewed per each query split by presence of traffic and annotation social cues The data above shows that the retrieved documents with social navigational infor- mation were popular among the users.... ..."
Table 1. Number of documents, event pairs and types of temporal relations in the annotated corpus.
"... In PAGE 7: ...85. Table1 shows statistics for this corpus, including the distribution of relation types. For about 40% of the event pairs, the first event was annotated as being before the second event, for another 40%, the first event overlapped with the second event, and for the remaining 20%, the first event was after the second.... ..."
Table 8. Agreements of annotations
"... In PAGE 3: ...86% Table 7. Agreement of annotators at document level Agreements of data from news and blogs are listed in Table8 for comparison. Source NTCIR BLOG ... In PAGE 3: ...Table 8. Agreements of annotations Table8 shows that annotations of news articles have lower agreement rates than annotations of web blogs. This is because blog articles may use simpler words and are easier to understand by human annotators than news articles.... ..."
Table 2.1: Examples of HTML document annotation systems. The systems are grouped by implementation approach: server-based, proxy-based, or browser extension. For each system the table shows whether it is a research prototype (R) or a commercial system (C), whether the system is available, where annotations can be located, features to support discussion, notification mechanisms, and how the system handles changes to annotated documents. Annotations can be located at specific tags in the document (Tags), at predefined positions (Predefined), on sections (Section), or anywhere (Any). The systems handle document changes by ignoring them (Ignore), attempting to reposition the annotations (Attempt), or pushing the responsibility to the user (Explicit); for some systems the information is not available (Unknown).
2002
Cited by 1
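Of the change-handling strategies named in the caption, "Attempt" (repositioning annotations after the document changes) is the only algorithmic one. A minimal sketch of one common approach, re-anchoring by searching for the annotation's quoted span and its saved context; this is an illustration, not how any particular surveyed system works:

```python
def reposition(annotation, new_text):
    """Try to re-anchor an annotation after a document edit by
    searching the new text for the exact quoted span, falling back
    to the span plus its saved prefix context. Returns the new
    offset, or None when the anchor is lost (the 'Explicit' case,
    where the user must resolve it)."""
    quote = annotation["quote"]
    idx = new_text.find(quote)
    if idx != -1:
        return idx
    anchored = annotation["prefix"] + quote
    idx = new_text.find(anchored)
    return idx + len(annotation["prefix"]) if idx != -1 else None

# Usage: the quoted span moved but is still present verbatim.
ann = {"quote": "handles changes", "prefix": "the system "}
print(reposition(ann, "Updated intro. Later, the system handles changes well."))
```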
Table 1. Annotation Tools
2004
"... In PAGE 3: ... to documents) can be categorized according to the media types which can be annotated (text, web pages, images, audio or video, 3D) and the extent of collaboration supported. This matrix in Table1 , gives an overview of the different products, tools, systems and projects according to these categories.... ..."
Cited by 3