Citations
3729 | WordNet: An Electronic Lexical Database
- Fellbaum
- 1998
Citation Context ...pproach capitalizes on our successful models proposed in [31, 32, 29, 37], but we also explore deeper linguistic structures such as dependency trees. Secondly, we use YAGO [16], DBpedia [4] and WordNet [12] to match constituents from QA pairs and use their generalizations in our semantic structures. Following our previous work in [37], we employ word sense disambiguation to match the right entities in Y...
1310 | Optimizing search engines using clickthrough data
- Joachims
- 2002
Citation Context ...rbag, we also use a ranking score of our search engine assigned to AP. Learning Models. We used SVM-Light-TK21 to train our models. The toolkit enables the use of structural kernels [24] in SVM-Light [18]. We used default PTK parameters as described in [31] and the polynomial kernel of degree 3 on standard features. Pipeline. We built the entire processing pipeline on top of the UIMA framework. We incl...
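As an illustrative aside to this context: the "polynomial kernel of degree 3" it mentions is the standard kernel K(x, y) = (x · y + c)^3 over feature vectors. A minimal sketch; taking c = 1 (SVM-Light's default additive constant) is our assumption, not stated in the excerpt:

```python
# Degree-3 polynomial kernel over plain feature vectors.
# c=1 is assumed (SVM-Light default); the excerpt does not state it.
def poly3_kernel(x, y, c=1.0):
    dot = sum(a * b for a, b in zip(x, y))  # inner product x . y
    return (dot + c) ** 3


# Example: K([1,2], [3,4]) = (3 + 8 + 1)^3 = 12^3
print(poly3_kernel([1.0, 2.0], [3.0, 4.0]))
```

In an SVM, this kernel implicitly expands the standard features into all products of up to three of them, without materializing that feature space.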
1247 | Kernel methods for pattern analysis
- Shawe-Taylor, Cristianini
- 2004
Citation Context ...ngineering approaches based on kernel methods, e.g., [31], have been developed, where syntactic and semantic tree representations of the Q/AP pairs are used in kernel-based L2R algorithms, e.g., SVMs [33]. The role of kernels was to implicitly generate syntactic patterns (i.e., tree fragments) to be used as features in SVMs. However, this approach would not be able to solve the example above since, to...
264 | Wikify!: linking documents to encyclopedic knowledge
- Mihalcea, Csomai
- 2007
Citation Context ... on entity level by construction; and (ii) there are several so-called wikification algorithms, which find references to Wikipedia pages in plain text, and disambiguate them to the correct Wikipedia pages [9, 23]. Thus, we wikify text to both detect anchors in Tent and extract the URLs of the Wikipedia pages they refer to. In order to have a richer disambiguation context, we concatenate Tent with Tgen befor...
240 | Query chains: Learning to rank from implicit feedback
- Radlinski, Joachims
- 2005
Citation Context ...mplex. Recent studies on passage reranking, exploiting structural information, were carried out in [19], whereas other methods explored soft matching (i.e., lexical similarity) based on NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the mode...
215 | Learning question classifiers.
- Li, Roth
- 2002
Citation Context ... specific examples. As in [31], we use statistical classifiers to derive focus and categories of the question and of the NEs in the AP. We consider HUM, LOC, ENTY, NUM, ABBR and DESC question classes [20]. Question focus and AP chunks, which contain NEs of type compatible with the question class6, are marked by prepending the above tags to their label. Figure 2 shows an example of such label in the do...
173 | Overview of the TREC 2003 question answering track
- Voorhees
- 2003
Citation Context ...arison with the TREC challenge. An approximate (as we used five-fold cross-validation) comparison can be attempted with the results from TREC 2003 for the “best passages tasks” described in the TREC overview [39]. Thanks to LD our system achieves an accuracy (Precision@1) of 36.59, which would allow it to be ranked 3rd in the official evaluation, i.e., higher than the MultiText system (accuracy=35.1) and below the...
169 | MaltParser: A language-independent system for data-driven dependency parsing
- Nivre, Hall, et al.
Citation Context ...ontonotes model) and Malt [25] (v1.7.2, linear model) dependency parsers. Moreover, we used annotators for building new sentence representations starting from tools’ annotations. For example, we generated annotations with shallow ...
158 | YAGO2: A spatially and temporally enhanced knowledge base from wikipedia
- Hoffart, Suchanek, et al.
Citation Context ...n the answer passages). This approach capitalizes on our successful models proposed in [31, 32, 29, 37], but we also explore deeper linguistic structures such as dependency trees. Secondly, we use YAGO [16], DBpedia [4] and WordNet [12] to match constituents from QA pairs and use their generalizations in our semantic structures. Following our previous work in [37], we employ word sense disambiguation to...
140 | The Berlin SPARQL benchmark
- Bizer, Schultz
Citation Context ... results reported, for example, in [5]. We store YAGO2 and WordNet+DBpedia data in separate triple stores. In our case, extracting all the generalizations of the URI, i.e. sending a set of SPARQL queries to a triple store, took 18.22, 36...
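The context above describes extracting all generalizations of a URI by sending SPARQL queries to a triple store. A minimal sketch of the idea follows; the URIs, the toy `subclass_of` data, and the function names are hypothetical stand-ins (the real system queries YAGO2 and WordNet+DBpedia stores):

```python
# Illustrative sketch, not the authors' code: the kind of SPARQL 1.1
# property-path query used to pull all transitive super-classes of a URI...
GENERALIZATION_QUERY = """
SELECT ?super WHERE {
  <%s> rdfs:subClassOf+ ?super .
}
"""

# ...and a tiny in-memory stand-in for the triple store (toy edges).
subclass_of = {
    "yago:Ferry": ["yago:Vessel"],
    "yago:Vessel": ["yago:Craft"],
    "yago:Craft": ["yago:Artifact"],
}

def generalizations(uri):
    """All transitive super-classes of `uri` (what the query above returns)."""
    seen, stack = set(), list(subclass_of.get(uri, []))
    while stack:
        sup = stack.pop()
        if sup not in seen:
            seen.add(sup)
            stack.extend(subclass_of.get(sup, []))
    return seen
```

The returned class URIs are what get attached as generalization labels in the semantic structures.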
136 | Building Watson: An Overview of the DeepQA Project
- Ferrucci, Brown, et al.
- 2010
Citation Context ...th own the same property, i.e., to be “the eighth wonder of the world”. Without this link many incorrect entities may be selected, as many entities enjoy the property above2. The solution (provided in [13]) for solving this case is the use of semantic resources: the lexical answer type (LAT) of the ques... 1This example from the TREC QA corpus will be our running example for the rest of the paper. 2For examp...
136 | Overview of the TREC 2001 question answering track
- Voorhees
- 2001
Citation Context ...ODUCTION Previous work has shown that advanced natural language processing can positively impact the accuracy of Question Answering (QA) systems. As shown by the experience in the TREC QA task, e.g., [38], the selection of the right passage expressing the answer requires considering the relation between the question and the passage text. In other words, it is not enough to measure the similarity betwee...
124 | The Stanford CoreNLP natural language processing toolkit.
- Manning, Surdeanu, et al.
- 2014
Citation Context ...eline. We built the entire processing pipeline on top of the UIMA framework. We included many off-the-shelf NLP tools, wrapping them as UIMA annotators. We use the Apache OpenNLP22 and Stanford CoreNLP [21] tools for sentence detection, tokenization, POS-tagging and NE recognition; the Illinois chunker [27], the Stanford CoreNLP Lemmatizer, and question class and focus classifiers trained as in [31]. For depend...
111 | Finding similar questions in large question and answer archives
- Jeon, Croft, et al.
- 2005
Citation Context ...mplex. Recent studies on passage reranking, exploiting structural information, were carried out in [19], whereas other methods explored soft matching (i.e., lexical similarity) based on NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the mode...
111 | The use of classifiers in sequential inference.
- Punyakanok, Roth
- 2001
Citation Context ...he-shelf NLP tools wrapping them as UIMA annotators. We use the Apache OpenNLP22 and Stanford CoreNLP [21] tools for sentence detection, tokenization, POS-tagging and NE recognition; Illinois chunker [27], Stanford CoreNLP Lemmatizer, and question class and focus classifiers trained as in [31]. For dependency parsing we used the Stanford dependency parser (version 2.0.3) and UIMA wrappers provided by the ...
89 | Efficient convolution kernels for dependency and constituent syntactic trees. In:
- Moschitti
- 2006
Citation Context ... Thirdly, we apply structural kernels to the above structures by exploiting SVMs for automatically learning classification and ranking functions. In particular, we apply the Partial Tree Kernel (PTK) [24], which can generate the richest space of tree fragments. Next, we experiment with three different corpora: (i) the standard TREC QA corpus for passage reranking, (ii) a QA benchmark built for testing...
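The Partial Tree Kernel cited here implicitly counts shared tree fragments between two trees. As a much-simplified illustration of that idea (a Collins–Duffy-style subtree kernel, not the actual PTK of [24], which also matches partial productions), with trees encoded as nested tuples:

```python
# Trees are tuples: (label, child, child, ...); leaves are plain strings.
def nodes(t):
    out = [t]
    for c in t[1:]:
        if isinstance(c, tuple):
            out.extend(nodes(c))
    return out

def production(t):
    # A node's production: its label plus the labels of its children.
    return (t[0], tuple(c[0] if isinstance(c, tuple) else c for c in t[1:]))

def common_fragments(n1, n2):
    # Number of shared tree fragments rooted at n1 and n2.
    if production(n1) != production(n2):
        return 0
    kids1 = [c for c in n1[1:] if isinstance(c, tuple)]
    kids2 = [c for c in n2[1:] if isinstance(c, tuple)]
    if not kids1:  # matching pre-terminal: exactly one shared fragment
        return 1
    prod = 1
    for c1, c2 in zip(kids1, kids2):
        prod *= 1 + common_fragments(c1, c2)
    return prod

def tree_kernel(t1, t2):
    # Sum over all node pairs: total count of shared fragments.
    return sum(common_fragments(a, b) for a in nodes(t1) for b in nodes(t2))
```

For example, `tree_kernel` of `("NP", ("DT", "the"), ("NN", "ferry"))` with itself counts the four fragments rooted at NP plus the two lexicalized pre-terminals; this is the feature space the SVM uses implicitly, without ever enumerating fragments as explicit features.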
72 | Using semantic roles to improve question answering.
- Shen, Lapata
- 2007
Citation Context ...n NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the models developed in [2, 3] demonstrate that linguistic structures improve QA but the proposed approaches again are based on handcrafted features and rules. In contrast, our method is based...
66 | Selectively using relations to improve precision in question answering. Presented at the Natural Language Processing for Question Answering
- Katz, Lin
- 2003
Citation Context ... with obscure, fine-grained manual tuning, has made adaptation or even replication of such systems rather complex. Recent studies on passage reranking, exploiting structural information, were carried out in [19], whereas other methods explored soft matching (i.e., lexical similarity) based on NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approa...
62 | Learning to rank answers on large online QA collections. In Proc. of the Association for Computational Linguistics
- Surdeanu, Ciaramita, et al.
- 2008
Citation Context ...n NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the models developed in [2, 3] demonstrate that linguistic structures improve QA but the proposed approaches again are based on handcrafted features and rules. In contrast, our method is based...
50 | Top accuracy and fast dependency parsing is not a contradiction
- Bohnet
- 2010
Citation Context ...nd question class and focus classifiers trained as in [31]. For dependency parsing we used the Stanford dependency parser (version 2.0.3) and UIMA wrappers provided by the DKPro toolset [10] for the Mate [6] (v3.5), ClearNLP [7] (v2.0.2, 14http://trec.nist.gov/data/qamain.html 15https://catalog.ldc.upenn.edu/LDC2002T31 16We downloaded the distribution made available by [44] in https://code.google.com/p/jacana/. 17In order to...
49 | An open-source toolkit for mining Wikipedia
- Milne, Witten
Citation Context ... on entity level by construction; and (ii) there are several so-called wikification algorithms, which find references to Wikipedia pages in plain text, and disambiguate them to the correct Wikipedia pages [9, 23]. Thus, we wikify text to both detect anchors in Tent and extract the URLs of the Wikipedia pages they refer to. In order to have a richer disambiguation context, we concatenate Tent with Tgen befor...
47 | What is the Jeopardy model? A quasi-synchronous grammar for QA.
- Wang, Smith, et al.
- 2007
Citation Context ... the richest space of tree fragments. Next, we experiment with three different corpora, (i) the standard TREC QA corpus for passage reranking, (ii) a QA benchmark built for testing sentence reranking [42], and (iii) a community QA dataset based on Answerbag4. We tested several models combining (i) traditional feature vectors, (ii) automatic semantic labels derived by statistical classifiers, e.g., que...
43 | DBpedia: A crystallization point for the Web of Data. Web Semantics: Science, Services and Agents on the World Wide Web
- Bizer, Lehmann, et al.
- 2009
Citation Context ...assages). This approach capitalizes on our successful models proposed in [31, 32, 29, 37], but we also explore deeper linguistic structures such as dependency trees. Secondly, we use YAGO [16], DBpedia [4] and WordNet [12] to match constituents from QA pairs and use their generalizations in our semantic structures. Following our previous work in [37], we employ word sense disambiguation to match the ri...
41 | High-Performance, Open-Domain Question Answering from Large Text Collections
- Pasca
- 2001
Citation Context ...s comparably to DT1+V+FREL and is outperformed by the other dependency structures. Our intuition is that this new outcome may be due to the performance of the dependency parser employed for preprocessing [26]. This intuition motivated us to evaluate the impact of four different parsers for building DT3Q+DT2A. The results in Table 2 show that the ClearNLP parser is outperformed by Mate, Malt and Stanford, wher...
40 | Tree edit models for recognizing textual entailments, paraphrases, and answers to questions.
- Heilman, Smith
- 2010
Citation Context ...spired by machine translation, which allows modeling Q/AP relations by means of syntactic transformations. [41] designed a probabilistic model to learn tree-edit operations on dependency parse trees. [14] employed a computationally expensive tree kernel-based heuristic to identify tree-edit sequences which could serve as good features for a logistic regression model. [43] further improved the [14] app...
26 | Structured lexical similarity via convolution kernels on dependency trees. In:
- Croce, Moschitti, et al.
- 2011
Citation Context ...j) or the “possession modifier” (poss) relations are grouped under the same chunk node. Figure 4 provides an example of a DT2 structure. Lexical-centered dependency tree (DT3) [8]. Finally, we engineer DT3, in which dependency relations rel(head,child) are represented by a parent and a child node labeled head::pos and child::pos, respectively (lemmas are specialized with the fi...
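As an aside, the DT3 labeling scheme described in this context can be sketched as follows. The function names are ours, and the assumption that a lemma is specialized with the first character of its POS tag is inferred from the truncated excerpt ("specialized with the fi..."), so treat both as hypothetical:

```python
# Hedged sketch of the DT3 (lexical-centered dependency tree) labeling:
# each relation rel(head, child) yields parent/child nodes "lemma::p",
# where "p" is assumed to be the first character of the POS tag.
def dt3_label(lemma, pos):
    return "%s::%s" % (lemma.lower(), pos[0].lower())

def build_dt3(tokens, relations, root):
    """tokens: id -> (lemma, POS); relations: iterable of (rel, head_id, child_id)."""
    children = {}
    for rel, h, c in relations:
        children.setdefault(h, []).append((rel, c))

    def subtree(i):
        lemma, pos = tokens[i]
        kids = [(rel, subtree(c)) for rel, c in children.get(i, [])]
        return (dt3_label(lemma, pos), kids)

    return subtree(root)
```

For example, the single relation det(book, the) becomes a "book::n" node dominating a "det" relation over a "the::d" node, keeping the lexical head at the top of each local subtree.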
22 | Getting the Most out of Transition-Based Dependency Parsing
- Choi, Palmer
- 2012
Citation Context ... focus classifiers trained as in [31]. For dependency parsing we used the Stanford dependency parser (version 2.0.3) and UIMA wrappers provided by the DKPro toolset [10] for the Mate [6] (v3.5), ClearNLP [7] (v2.0.2, 14http://trec.nist.gov/data/qamain.html 15https://catalog.ldc.upenn.edu/LDC2002T31 16We downloaded the distribution made available by [44] in https://code.google.com/p/jacana/. 17In order to...
19 | Probabilistic treeedit models with structured latent variables for textual entailment and question answering.
- Wang, Manning
- 2010
Citation Context ...arding answer sentence rerankers, [42] proposed a probabilistic quasi-synchronous grammar, inspired by machine translation, which allows modeling Q/AP relations by means of syntactic transformations. [41] designed a probabilistic model to learn tree-edit operations on dependency parse trees. [14] employed a computationally expensive tree kernel-based heuristic to identify tree edit sequences which cou...
17 | Question answering with LCC Chaucer at TREC 2006
- Hickl, Williams, et al.
- 2006
Citation Context ... structural model. In contrast, we show that our LD approach can effectively encode knowledge, improving on passage reranking. More traditional work in QA using semantics and syntax can be observed in [15, 35]. However, the complexity of the method in [15], along with obscure, fine-grained manual tuning, has made adaptation or even replication of such systems rather complex. Recent studies on passage reranking, ex...
16 | HITIQA: Towards Analytical Question Answering, in:
- Small, Strzalkowski, et al.
- 2004
Citation Context ... structural model. In contrast, we show that our LD approach can effectively encode knowledge, improving on passage reranking. More traditional work in QA using semantics and syntax can be observed in [15, 35]. However, the complexity of the method in [15], along with obscure, fine-grained manual tuning, has made adaptation or even replication of such systems rather complex. Recent studies on passage reranking, ex...
14 | Answer extraction as sequence tagging with tree edit distance.
- Yao, Durme, et al.
- 2013
Citation Context ...DKPro toolset [10] for the Mate [6] (v3.5), ClearNLP [7] (v2.0.2, 14http://trec.nist.gov/data/qamain.html 15https://catalog.ldc.upenn.edu/LDC2002T31 16We downloaded the distribution made available by [44] in https://code.google.com/p/jacana/. 17In order to obtain results comparable to those in previous works experimenting on TREC13 [45, 40, 42], we used the same evaluation setting as described in f...
13 | Rank learning for factoid question answering with linguistic and semantic constraints.
- Bilotti, Elsas, et al.
- 2010
Citation Context ...n and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the models developed in [2, 3] demonstrate that linguistic structures improve QA but the proposed approaches again are based on handcrafted features and rules. In contrast, our method is based on automatic feature engineering, res...
9 | Learning adaptable patterns for passage reranking.
- Severyn, Nicosia, et al.
- 2013
Citation Context ...NTY tags in it. 5. SEMANTIC MATCH USING LD Encoding relational information between Q and AP, i.e., links between words or constituents, is essential for improving passage reranking. Our previous work [32, 31, 29] has only defined two... 6Compatibility is checked by means of a predefined compatibility table. We use the following mappings: Person, Organization → HUM, ENTY; Misc → ENTY; Location → LOC; Date, Time, Mo...
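For illustration only, the quoted compatibility table can be encoded directly. The excerpt is truncated after "Date, Time, Mo...", so only the fully visible mappings are included here; the dictionary name and helper are ours:

```python
# Sketch of the predefined NE-type / question-class compatibility table
# from the quoted footnote; truncated mappings are deliberately omitted.
COMPATIBLE = {
    "Person": {"HUM", "ENTY"},
    "Organization": {"HUM", "ENTY"},
    "Misc": {"ENTY"},
    "Location": {"LOC"},
}

def compatible(ne_type, question_class):
    # True when an AP chunk containing this NE type should be marked
    # with the given question class tag.
    return question_class in COMPATIBLE.get(ne_type, set())
```

This is the check that decides which AP chunks get the question-class tag prepended to their node labels.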
8 | Passage reranking for question answering using syntactic structures and answer types
- Aktolga, Allan, et al.
- 2011
Citation Context ...er complex. Recent studies on passage reranking, exploiting structural information, were carried out in [19], whereas other methods explored soft matching (i.e., lexical similarity) based on NE types [1]. [28, 17] applied question and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next...
8 | Automatic feature engineering for answer selection and extraction.
- Severyn, Moschitti
- 2013
Citation Context ...hallow syntactic trees connected with relational nodes (i.e., those matching the same words in the question and in the answer passages). This approach capitalizes on our successful models proposed in [31, 32, 29, 37], but we also explore deeper linguistic structures such as dependency trees. Secondly, we use YAGO [16], DBpedia [4] and WordNet [12] to match constituents from QA pairs and use their generalizations in...
8 | Question answering using enhanced lexical semantic models.
- Yih, Chang, et al.
- 2013
Citation Context ...logistic regression model. [43] further improved the [14] approach by proposing a faster dynamic-programming-based algorithm for feature extraction and extending the feature set with WordNet features. [45] proposed a model which, in addition to syntax, incorporates features based on rich lexical semantic knowledge, including synonymy, antonymy, hypernymy and semantic similarity, obtained from a number ...
6 | PRISMATIC: Inducing knowledge from a large scale lexicalized relation resource.
- Fan, Ferrucci, et al.
- 2010
Citation Context ... (Figure 1: Kernel-based pair classification/reranking framework) ... structure (PAS) builder [22]. Additionally, it uses several semantic resources, e.g., Wikipedia and PRISMATIC [11], combined in a machine learning-based reranker. The Watson system is very accurate and effective, but it requires hand-crafting rules, which is typically very costly. Our approach instead can automatic...
5 | A broad-coverage collection of portable NLP components for building shareable analysis pipelines
- Castilho, Gurevych
- 2014
Citation Context ...eNLP Lemmatizer, and question class and focus classifiers trained as in [31]. For dependency parsing we used the Stanford dependency parser (version 2.0.3) and UIMA wrappers provided by the DKPro toolset [10] for the Mate [6] (v3.5), ClearNLP [7] (v2.0.2, 14http://trec.nist.gov/data/qamain.html 15https://catalog.ldc.upenn.edu/LDC2002T31 16We downloaded the distribution made available by [44] in https://co...
5 | Deep parsing in Watson
- McCord, Murdock, et al.
- 2012
Citation Context ... (Figure 1: Kernel-based pair classification/reranking framework) ... structure (PAS) builder [22]. Additionally, it uses several semantic resources, e.g., Wikipedia and PRISMATIC [11] combined in a machine learning-based reranker. The Watson system is very accurate and effective but it requires t...
4 | Improving text retrieval precision and answer accuracy in question answering systems
- Bilotti, Nyberg
- 2008
Citation Context ...n and answer classifiers for passage reranking. In this context, several approaches focused on reranking the answers to definition/description questions, e.g., [34, 36]. Next, the models developed in [2, 3] demonstrate that linguistic structures improve QA but the proposed approaches again are based on handcrafted features and rules. In contrast, our method is based on automatic feature engineering, res...
4 | Building structures from classifiers for passage reranking.
- Severyn, Nicosia, et al.
- 2013
Citation Context .... Since manually selecting properties, i.e., generating rules for any pair of Q and AP, is rather difficult or even impossible, automatic feature engineering approaches based on kernel methods, e.g., [31], have been developed, where syntactic and semantic tree representations of the Q/AP pairs are used in kernel-based L2R algorithms, e.g., SVMs [33]. The role of kernels was to implicitly generate synt...
3 | A long short-term memory model for answer sentence selection in question answering
- Wang, Nyberg
- 2015
Citation Context ... running example patterns. More recently, the answer selection task was tackled with deep learning, e.g., convolutional deep neural networks [30] and a stacked bidirectional Long Short-Term Memory model [40]. 3. PASSAGE RERANKING FRAMEWORK Our framework uses the relations between a question (Q) and its answer passage (AP) to rerank passages. The basic schema is displayed in Figure 1: given a Q, we submit ...
2 | Learning to rank short text pairs with convolutional deep neural networks.
- Severyn, Moschitti
- 2015
Citation Context ...ure 2: Shallow chunk-based tree (CH) for the Q and AP of the running example patterns. More recently, the answer selection task was tackled with deep learning, e.g., convolutional deep neural networks [30] and a stacked bidirectional Long Short-Term Memory model [40]. 3. PASSAGE RERANKING FRAMEWORK Our framework uses the relations between a question (Q) and its answer passage (AP) to rerank passages. The...
2 | Encoding semantic resources in syntactic structures for passage reranking
- Tymoshenko, Moschitti, et al.
- 2014
Citation Context ...hallow syntactic trees connected with relational nodes (i.e., those matching the same words in the question and in the answer passages). This approach capitalizes on our successful models proposed in [31, 32, 29, 37], but we also explore deeper linguistic structures such as dependency trees. Secondly, we use YAGO [16], DBpedia [4] and WordNet [12] to match constituents from QA pairs and use their generalizations in...