CiteSeerX

Results 11 - 20 of 604

THUIR at TREC 2005: Enterprise Track

by Yupeng Fu, Wei Yu, Yize Li, Yiqun Liu, Min Zhang, Shaoping Ma, 2005
"... IR group of Tsinghua University participated in the expert finding task of TREC2005 enterprise track this year. We developed a novel method which is called document reorganization to solve the problem of locating expert for certain query topics. This method collects and combines related information ..."
Abstract

IBM in TREC2006 Enterprise Track

by Jennifer Chu-Carroll, Guillermo Averboch, Pablo Duboue, David Gondek, J. William Murdock, John Prager, Paul Hoffmann, Janyce Wiebe
"... In 2006, IBM participated for the first time in the Enterprise Track, submitting runs for both the discussion and expert tasks. The Enterprise Track is intended to address information seeking tasks common in corporate settings using information that is readily available on corporate intranets. Becau ..."
Abstract

The TREC-5 Filtering Track

by David D. Lewis - The Fifth Text REtrieval Conference (TREC-5), 1997
"... The TREC-5 filtering track, an evaluation of binary text classification systems, was a repeat of the filtering evaluation run in a trial version for TREC-4, with only the data set and participants changing. Seven sites took part, submitting a total of ten runs. We review the nature of the task, the ..."
Abstract - Cited by 41 (0 self)

UMass at TREC 2006: Enterprise Track

by Desislava Petkova, W. Bruce Croft
"... This paper gives an overview of the work done at the ..."
Abstract - Cited by 2 (0 self)

TREC Genomics Track Overview

by William Hersh, Ravi Teja Bhupatiraju , 2003
"... The first year of TREC Genomics Track featured two tasks: ad hoc retrieval and information extraction. Both tasks centered around the Gene Reference into Function (GeneRIF) resource of the National Library of Medicine, which was used as both pseudorelevance judgments for ad hoc document retrieval as ..."
Abstract - Cited by 43 (1 self)
"... with the growth of new information needs (e.g., question-answering, cross-lingual), data types (e.g., video) and platforms (e.g., the Web). This paper describes the events leading up to the first year of the TREC Genomics Track, the first year’s results, and future directions for subsequent years. ..."

Overview of the TREC 2003 Novelty Track

by Ian Soboroff, Donna Harman - TEXT RETRIEVAL CONFERENCE , 2003
"... The novelty track was first introduced in TREC 2002.Given a TREC topic and an ordered list of documents, systems must find the relevant and novelsentences that should be returned to the user from this set. This task integrates aspects of passage re-trieval and information filtering. This year, rathe ..."
Abstract - Cited by 57 (0 self)
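The task as described reduces to two sentence-level filters: a relevance test against the topic and a redundancy test against previously selected sentences. Below is a minimal baseline sketch of that pipeline, assuming a plain term-count cosine model; the tokeniser and the rel_thresh / nov_thresh values are illustrative stand-ins, not any participant's tuned system:

    import math
    from collections import Counter

    def cosine(a, b):
        """Cosine similarity between two token lists, using raw term counts."""
        ca, cb = Counter(a), Counter(b)
        dot = sum(ca[t] * cb[t] for t in ca)
        na = math.sqrt(sum(v * v for v in ca.values()))
        nb = math.sqrt(sum(v * v for v in cb.values()))
        return dot / (na * nb) if na and nb else 0.0

    def relevant_and_novel(topic, sentences, rel_thresh=0.15, nov_thresh=0.8):
        """Keep sentences similar to the topic but dissimilar to earlier picks."""
        topic_toks = topic.lower().split()
        picked, picked_toks = [], []
        for s in sentences:                  # sentences in document order
            toks = s.lower().split()
            if cosine(toks, topic_toks) < rel_thresh:
                continue                     # fails the relevance filter
            if any(cosine(toks, p) >= nov_thresh for p in picked_toks):
                continue                     # redundant with earlier output
            picked.append(s)
            picked_toks.append(toks)
        return picked

Processing sentences in their original order matters: novelty is judged against what the user has already been shown, so an early sentence can make a later near-duplicate redundant, never the reverse.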

Microsoft Cambridge at TREC-14: Enterprise Track

by Nick Craswell, Hugo Zaragoza, Stephen Robertson - In Voorhees and Buckland [9]
"... mbination and 3) The tuning and ranking framework we used this year. To calculate BM25F [4] we first calculate a normalised term frequency for each field: x d,f,t := x d,f,t (1 +B f ( l d,f l f - 1)) # {SUBJECT, BODY, QUOTED} indicates the field type, x d,f,t is the term frequency of te ..."
Abstract - Cited by 18 (1 self) - Add to MetaCart
mbination and 3) The tuning and ranking framework we used this year. To calculate BM25F [4] we first calculate a normalised term frequency for each field: x d,f,t := x d,f,t (1 +B f ( l d,f l f - 1)) # {SUBJECT, BODY, QUOTED} indicates the field type, x d,f,t is the term frequency of term t in the field type f of document d, l d,f is the length of that field, and l f is the average field length for that field type. B f is a field-dependant parameter similar to the B parameter in BM25. In particular, if B f = 0 there is no normalisation and if B f = 1 the frequency is completely normalised w.r.t. the average field length. These term frequencies can then be combined in a linearly weighted sum to obtain the final term pseudo-frequency: x d,t = W f x d,f,t (2) with weight parameters W f . This is then used in the usual BM25 saturating function. This leads the following ranking function, which we refer to as BM25F: BM25F (d) := t#q#d x d,t K 1 + x d,t t (3)
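The three equations above translate directly into code. Here is a minimal sketch, assuming illustrative lowercase field names and hand-picked B_f, W_f, and K_1 values, and substituting a plain IDF lookup for the term weight w_t^{(1)}; the paper's tuned parameters are not reproduced:

    from collections import Counter

    FIELDS = ("subject", "body", "quoted")

    # Illustrative parameters only; the paper tunes these per field.
    B = {"subject": 0.5, "body": 0.75, "quoted": 0.75}   # length normalisation
    W = {"subject": 3.0, "body": 1.0, "quoted": 0.5}     # field weights

    def bm25f(query_terms, doc_fields, avg_len, idf, k1=1.2):
        """Score one document. doc_fields maps field -> token list, avg_len
        maps field -> collection-average field length, and idf stands in
        for the w_t^(1) term weight of equation (3)."""
        # Equation (1): per-field length-normalised term frequencies.
        norm_tf = {}
        for f in FIELDS:
            toks = doc_fields.get(f, [])
            denom = 1.0 + B[f] * (len(toks) / avg_len[f] - 1.0)
            norm_tf[f] = {t: x / denom for t, x in Counter(toks).items()}
        score = 0.0
        for t in set(query_terms):
            # Equation (2): weighted pseudo-frequency across fields.
            x = sum(W[f] * norm_tf[f].get(t, 0.0) for f in FIELDS)
            # Equation (3): BM25 saturating function times the term weight.
            if x > 0.0:
                score += x / (k1 + x) * idf.get(t, 0.0)
        return score

One design point visible in the formulas: length normalisation happens per field (equation 1) before the weighted combination (equation 2), so term frequencies are merged at the pseudo-frequency level rather than by summing per-field BM25 scores.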

The TREC-2002 Video Track Report

by Alan F. Smeaton, Paul Over , 2002
"... This paper is an introduction to the track framework -- the tasks, data, and measures -- and the approaches taken. An overview of results will be presented in the plenary session. For detailed information about approaches and results see the various site reports and the back-of-the-notebook results ..."
Abstract - Cited by 45 (5 self)

Pitt at TREC 2005: HARD and Enterprise

by Daqing He, Jae-Wook Ahn
"... The University of Pittsburgh team participated in two tracks for TREC 2005: the High Accuracy Retrieval from Documents (HARD) track and the Enterprise Retrieval track. ..."
Abstract

TREC 2005 genomics track overview

by William Hersh, Aaron Cohen, Jianji Yang, Ravi Teja Bhupatiraju, Phoebe Roberts, Marti Hearst - In TREC 2005 notebook, 2005
"... The TREC 2005 Genomics Track featured two tasks, an ad hoc retrieval task and four subtasks in text categorization. The ad hoc retrieval task utilized a 10-year, 4.5-million document subset of the MEDLINE bibliographic database, with 50 topics conforming to five generic topic types. The categorizati ..."
Abstract - Cited by 41 (0 self)