
Table 2: Levels of language models for document retrieval (column headings: Object, Base model, Residual, Full model)

in Parsimonious Language Models for Information Retrieval
by Djoerd Hiemstra, Stephen Robertson, Hugo Zaragoza
"... In PAGE 4: ...4 Model Summary We have identified mixture models that can be used at three stages in the retrieval process to infer parsimonious language models. Table2 shows the models used in these three stages and the relation between them [?]. In each case in this table the base model is externally defined (in- dependent of the object under consideration).... In PAGE 4: ... In initial query formulation, we fit the RM(GM) to the only information that we have about the request, namely its original text. In [?], it was assumed that the search process would involve trying out the level 3 model on every document (shown as 3a in Table2 ), against a null hypothesis which would be the level 2 model derived at indexing time. This comparison would again have to appeal to parsimony.... ..."

Table 2: Levels of language models for document retrieval (column headings: Object, Base model, Residual, Full model)

in General Terms
by Djoerd Hiemstra
"... In PAGE 4: ...4 Model Summary We have identified mixture models that can be used at three stages in the retrieval process to infer parsimonious language models. Table2 shows the models used in these three stages and the relation between them [24]. In each case in this table the base model is externally defined (in- dependent of the object under consideration).... In PAGE 4: ... In initial query formulation, we fit the RM(GM) to the only information that we have about the request, namely its original text. In [24], it was assumed that the search process would involve trying out the level 3 model on every document (shown as 3a in Table2 ), against a null hypothesis which would be the level 2 model derived at indexing time. This comparison would again have to appeal to parsimony.... ..."

Table 2. Performance comparison of the term- and document-based partitioning

in Effect of Inverted Index Partitioning Schemes on Performance of Query Processing in Parallel Text Retrieval Systems ⋆
by B. Barla Cambazoglu, Cevdet Aykanat
"... In PAGE 6: ... 6.2 Results Table2 displays the storage imbalance in terms of the number of postings and inverted lists for the two partitioning schemes with varying number of index servers, K =2, 8, 32. This table also shows the total number of disk accesses,... ..."

Table 5. Comparison of the relevance models (RM) and the LDA-based document models (LBDM). The evaluation measure is average precision. %diff indicates the percentage change of LBDM over RM.

in and Retrieval – Retrieval models.
by Xing Wei, W. Bruce Croft
"... In PAGE 7: ...3.2 Comparison and Combination with Relevance Models In Table5 we compare the retrieval results of the LBDM with the relevance model (RM), which incorporates pseudo-feedback information and is known for excellent performance (Lavrenko and Croft, 2001). On some collections, the results of the two models are quite close.... In PAGE 7: ... Although we tune parameters on the AP collection, parameter adjustment for the other collections does not improve the performance much. Compared to the relevance model results in Table5 , we conjecture that it is due to the characteristics of the documents and queries that the improvement on the AP collection is larger than on the other collections. We can also combine the relevance model and LBDM to do retrieval.... In PAGE 7: ... More importantly, unlike the Relevance Model, LDA estimation is done offline and only needs to be done once. Therefore LDA-based retrieval can potentially be used in applications where pseudo-relevance feedback would not be 3 The QL amp;RM baseline in Table 6 is slightly different with Table 5 because in the experiments of Table5 , in order to compare with the results in Liu and Croft (2004), we directly load their index into our system and then run the experiments on their ... ..."

Table 1: MAP and precision at 10 for the runs; the improvement is relative to the document-based run

in Multiple Sources of Evidence for XML Retrieval
by Börkur Sigurbjörnsson, Jaap Kamps, Maarten de Rijke
"... In PAGE 2: ...Table 1: MAP and precision at 10 for the runs, the improvement is relative to the document-based run The first three rows of Table1 show the results of three runs, each based on a different source of evidence for rele- vancy. The document-based run is quite competitive, but the element-based and the environment-based runs give sig- nificantly better results.... In PAGE 2: ... We simply rank elements by adding up the scores from each run. The result is shown in the last row of Table1 . The mixed run performs between 3.... ..."

Table 4: Our parameter setting for the Okapi and Prosit models For the Asian languages, we indexed the documents based on the overlapping bigram approach. In this case, the sequence ABCD EFGH would generate the following bigrams {AB, BC, CD, EF, FG and GH}. In our work, we generated these overlapping bigrams only when using Asian characters. In this study, spaces and other punctuation marks (collected for each language in their respective encoding) were used to stop the bigram generation. Moreover, if we found terms written with ASCII characters (usually numbers or acronyms such as WTO in

in Multilingual Information Retrieval with Asian Languages
by Jacques Savoy
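The bigram generation described in the excerpt can be sketched as follows. The isascii() test stands in for the encoding-specific character and punctuation lists the excerpt mentions, and the function name is illustrative.

    import re

    def overlapping_bigrams(text):
        # Index Asian-script text as overlapping character bigrams.
        # Bigram generation stops at spaces and punctuation, so an Asian-character
        # sequence written schematically as "ABCD EFGH" yields AB, BC, CD, EF, FG, GH.
        # Runs of ASCII characters (numbers, acronyms such as WTO) are kept whole.
        tokens = []
        # split on whitespace and punctuation so bigrams never cross a boundary
        for segment in re.split(r"\W+", text):
            if not segment:
                continue
            if segment.isascii():
                tokens.append(segment)            # keep acronyms/numbers whole
            else:
                tokens.extend(segment[i:i + 2]    # overlapping character bigrams
                              for i in range(len(segment) - 1))
        return tokens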

Table 4: Returned documents based on different numbers of LSI factors.

in Computational Methods for Intelligent Information Access
by Michael W. Berry, Susan T. Dumais, Todd A. Letsche 1995
"... In PAGE 14: ... This ability to retrieve relevant information based on context or meaning rather than literal term usage is the main motivation for using LSI. Table4 lists the LSI-ranked documents (medical topics) with dif- ferent numbers of factors (k). The documents returned in Table 4 satisfy a cosine threshold of :40, i.... In PAGE 15: ...orderingonly as Table4 clearly demonstratesthat its value associated with returned documents can significantly vary with changes in the number of factors k. Table 4: Returned documents based on different numbers of LSI factors.... ..."
Cited by 39
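The excerpt describes returning documents whose cosine similarity to the query in the k-factor LSI space exceeds 0.40, with the returned set changing as k varies. Below is a minimal sketch of that retrieval step, using a plain dense SVD and one common query-folding convention (q times U_k, scaled by the inverse singular values); neither choice is taken from the paper's implementation.

    import numpy as np

    def lsi_retrieve(term_doc_matrix, query_vector, k, threshold=0.40):
        # term_doc_matrix: terms x documents weight matrix
        # k:               number of LSI factors to keep (varied in the table)
        # threshold:       cosine cut-off; the excerpt uses 0.40
        U, s, Vt = np.linalg.svd(term_doc_matrix, full_matrices=False)
        Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T       # truncate to k factors
        q_hat = (query_vector @ Uk) / sk                # fold the query into LSI space
        sims = (Vk @ q_hat) / (np.linalg.norm(Vk, axis=1) * np.linalg.norm(q_hat) + 1e-12)
        ranked = np.argsort(-sims)                      # highest cosine first
        return [(int(d), float(sims[d])) for d in ranked if sims[d] >= threshold]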

Table 3: Document-based vs. page-based vs. section-based retrieval (precision at fixed number of documents returned)

in Efficient Retrieval of Partial Documents
by Justin Zobel, Alistair Moffat, Ross Wilkinson, Ron Sacks-davis 1995
"... In PAGE 18: ... We ran this experiment for several minimum page sizes, ranging from 250 bytes to 4,000 bytes. Results are shown in Table3 . The second column shows the number of indexed fragments; while the subsequent columns show the precision at the indicated volume of answers.... In PAGE 18: ... The second column shows the number of indexed fragments; while the subsequent columns show the precision at the indicated volume of answers. Because the number of known relevant documents for these queries is as little as one, it is impossible to achieve 100% precision even at k = 5; and the nal row of Table3 shows the maximum average precision value that could be achieved for each value of k. The entries in the table should be considered relative to this maximum.... ..."
Cited by 34

Table 1: Results for the baseline runs. The improvement of the element-based run is calculated relative to the document-based run.

in Processing Content-and-Structure Queries for XML Retrieval
by Börkur Sigurbjörnsson, Jaap Kamps, Maarten de Rijke

Table 2 shows the performance of our four runs: large document (CMUfeed) vs. small document (CMUentry) retrieval models and the Wikipedia (*W) expansion model. The large document model clearly outperformed the small document model, and Wikipedia-based expansion improved average performance of all runs. Figure 2 shows our best run (CMUfeedW) compared to the per-query best and median average precision values.

in Retrieval and feedback models for blog distillation
by Jonathan Elsas, Jaime Arguello, Jamie Callan, Jaime Carbonell
"... In PAGE 5: ... Table2 : Performance of our 4 runs Retrieval performance was superior with Wikipedia-based query expansion than with- out. Adding Wikipedia-based expansion im- proved performance in 30/45 queries under the small document model and 34/45 queries under the large document model.... ..."