Stance detection with bidirectional conditional encoding

Isabelle Augenstein, Tim Rocktäschel, Andreas Vlachos, Kalina Bontcheva

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

272 Scopus citations

Abstract

Stance detection is the task of classifying the attitude expressed in a text towards a given target. Previous work has assumed that either the target is mentioned in the text or that training data for every target is given. This paper considers the more challenging version of this task, where targets are not always mentioned and no training data is available for the test targets. We experiment with conditional LSTM encoding, which builds a representation of the tweet that is dependent on the target, and demonstrate that it outperforms encoding the tweet and the target independently. Performance is improved further when the conditional model is augmented with bidirectional encoding. We evaluate our approach on the SemEval 2016 Task 6 Twitter Stance Detection corpus, achieving performance second best only to a system trained on semi-automatically labelled tweets for the test target. When such weak supervision is added, our approach achieves state-of-the-art results.
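To make the conditional encoding scheme concrete, the sketch below shows a minimal PyTorch-style version (not the authors' implementation; the class name, dimensions, and the three-way classifier are illustrative assumptions). The target sequence is encoded first, its final LSTM state initialises the tweet LSTM so the tweet representation depends on the target, and the bidirectional variant repeats this over the reversed sequences and concatenates the two tweet representations.

```python
import torch
import torch.nn as nn


class BiConditionalEncoder(nn.Module):
    """Sketch of bidirectional conditional LSTM encoding for stance detection.

    The target is encoded first; its final (hidden, cell) state initialises
    the tweet LSTM, so the tweet representation is conditioned on the target.
    The same scheme runs over the reversed sequences, and the two tweet
    representations are concatenated before classification.
    """

    def __init__(self, vocab_size, embed_dim=100, hidden_dim=64, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Separate target/tweet LSTMs for each reading direction.
        self.target_fwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.tweet_fwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.target_bwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.tweet_bwd = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_classes)

    def _conditional_encode(self, target_lstm, tweet_lstm, target, tweet):
        _, target_state = target_lstm(self.embed(target))          # encode the target
        _, (h_n, _) = tweet_lstm(self.embed(tweet), target_state)  # condition tweet on it
        return h_n[-1]                                             # final hidden state

    def forward(self, target_ids, tweet_ids):
        # Left-to-right pass over target and tweet.
        h_fwd = self._conditional_encode(self.target_fwd, self.tweet_fwd,
                                         target_ids, tweet_ids)
        # Right-to-left pass over the reversed sequences.
        h_bwd = self._conditional_encode(self.target_bwd, self.tweet_bwd,
                                         target_ids.flip(1), tweet_ids.flip(1))
        return self.classifier(torch.cat([h_fwd, h_bwd], dim=-1))


# Toy usage: batch of 2 examples, target length 4, tweet length 12, vocab 1000.
model = BiConditionalEncoder(vocab_size=1000)
target_ids = torch.randint(0, 1000, (2, 4))
tweet_ids = torch.randint(0, 1000, (2, 12))
logits = model(target_ids, tweet_ids)  # shape: (2, 3) stance scores
```

Passing the target encoder's final state as the tweet LSTM's initial state is what distinguishes conditional encoding from encoding the two sequences independently and concatenating them afterwards.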

Original language: American English
Title of host publication: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings
Publisher: Association for Computational Linguistics (ACL)
Pages: 876-885
Number of pages: 10
ISBN (Electronic): 9781945626258
DOIs
State: Published - 2016
Event: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016 - Austin, United States
Duration: Nov 1 2016 - Nov 5 2016

Publication series

Name: EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings

Conference

Conference: 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016
Country/Territory: United States
City: Austin
Period: 11/1/16 - 11/5/16
