CLaC Labs Processing Modality and Negation Working Notes for QA4MRE Pilot Task
Abstract
Abstract. For the QA4MRE 2012 Pilot Task on Negation and Modality, CLaC Labs implemented a general, lightweight negation and modality module based on linguistic rules. The strong results confirm the suitability of linguistic heuristics for low-level semantic features and showcase their robustness across the different subgenres of the QA4MRE corpora.
unknown title
Abstract
CLaC Labs participated in two shared tasks for SemEval2015, Task 10 (subtasks B and E) and Task 11. The underlying system configuration is nearly identical and consists of two major components: a large Twitter lexicon compiled from tweets that carry certain selected hashtags (assumed to guarantee a sentiment polarity), inducing that same polarity for the words that occur in those tweets. We also use standard sentiment lexica and combine the results. The lexical sentiment features are further differentiated according to some linguistic contexts in which their triggers occur, including bigrams, negation, modality, and dependency triples. We studied feature combinations comprehensively for their interoperability and effectiveness on different datasets using the exhaustive feature combination technique of Shareghi and Bergler (2013a; 2013b). For Subtask 10B we used an SVM, and a decision tree regressor for Task 11. The resulting systems ranked ninth for Subtask 10B, fourth for Subtask 10E, and first for Task 11.
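The hashtag-based lexicon induction the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the CLaC system: the seed hashtags, their assumed polarities, and the majority-vote rule are all hypothetical stand-ins for whatever the actual pipeline used.

```python
from collections import Counter, defaultdict

# Hypothetical seed hashtags assumed to guarantee a tweet's polarity.
SEED_HASHTAGS = {"#happy": "positive", "#sad": "negative"}

def induce_lexicon(tweets):
    """Induce word polarities from tweets carrying seed hashtags.

    Each word inherits the polarity of the tweets it occurs in;
    the majority polarity wins. A sketch under the assumptions above.
    """
    counts = defaultdict(Counter)
    for text in tweets:
        tokens = text.lower().split()
        seeds = {SEED_HASHTAGS[t] for t in tokens if t in SEED_HASHTAGS}
        if len(seeds) != 1:
            continue  # skip tweets with no seed or with conflicting seeds
        polarity = seeds.pop()
        for token in tokens:
            if not token.startswith("#"):
                counts[token][polarity] += 1
    # Assign each word its most frequently observed polarity.
    return {word: c.most_common(1)[0][0] for word, c in counts.items()}

tweets = [
    "great day at the beach #happy",
    "stuck in traffic again #sad",
    "great food great friends #happy",
]
lexicon = induce_lexicon(tweets)  # e.g. "great" -> "positive"
```

In the shared-task systems such induced entries would then be combined with standard sentiment lexica and conditioned on contexts like negation and modality before being fed to the classifier or regressor.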