Transformer Based Approaches to Named Entity Recognition (NER) and Relationship Extraction (RE)

Research output: Contribution to conference › Paper › peer-review

Abstract

Named Entity Recognition (NER) and Relationship Extraction (RE) are foundational for many downstream NLP tasks such as Information Retrieval and Knowledge Base construction. While pre-trained models exist for both NER and RE tasks, they are usually specialized for some narrow application domain. If your application domain is different, the most practical option is to train your own models. However, the costs associated with training, specifically generating training data, can be a significant deterrent to doing so. Fortunately, the Language Models learned by pre-trained Transformers capture a great deal about the language of the domain they are trained and fine-tuned on, so NER and RE models built on these Language Models require fewer training examples to deliver the same level of performance. In this workshop, participants will learn about, train, and evaluate Transformer based neural models for NER and RE.
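To make the NER task concrete, here is a minimal, illustrative sketch of one step common to Transformer based NER systems: decoding per-token BIO tags (the usual output of a token-classification head) into typed entity spans. This is a generic convention, not code from the workshop itself, and the function name is hypothetical.

```python
# Decode a sequence of BIO tags (e.g. from a Transformer token-classification
# model) into (entity_type, start_token, end_token_exclusive) spans.
def bio_to_spans(tags):
    spans = []
    start, etype = None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            # A new entity begins; close any span still open.
            if start is not None:
                spans.append((etype, start, i))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and etype == tag[2:]:
            # Continuation of the current entity; keep the span open.
            continue
        else:
            # "O" tag or a type mismatch ends the current span.
            if start is not None:
                spans.append((etype, start, i))
            start, etype = None, None
    if start is not None:
        spans.append((etype, start, len(tags)))
    return spans

tags = ["B-PER", "I-PER", "O", "B-ORG", "I-ORG", "I-ORG", "O"]
print(bio_to_spans(tags))  # [('PER', 0, 2), ('ORG', 3, 6)]
```

A relation-extraction model would then typically classify pairs of such spans; fine-tuning a pre-trained Transformer for either task reuses the same tagging and span conventions.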
Original language: American English
State: Published - Apr 19 2021
