Do Instruction-tuned Large Language Models Help with Relation Extraction?

Xue Li, Fina Polat, Paul Groth

Research output: Contribution to journal › Conference article › peer-review

Abstract

Information extraction, and specifically relation extraction, are key tasks in knowledge base construction. With in-context learning, Large Language Models (LLMs) often demonstrate impressive generalization to unseen information extraction tasks, even with limited examples. However, when using in-context learning for relation extraction, LLMs are not competitive with fully supervised baselines that employ smaller language models. To address this, we explore the potential of instruction-tuning as a mechanism to improve relation extraction performance while preserving in-context capabilities. Our preliminary results demonstrate that instruction-tuned LLMs have the potential to achieve performance comparable to fully supervised smaller LMs. We instruction-tuned a Dolly-v2-3B model using the parameter-efficient approach LoRA on a challenging silver-standard relation extraction dataset comprising 1,079 relations. Results show that the instruction-tuned model achieves a micro-F1 of 28.5 and a macro-F1 of 27.3 under a strict matching evaluation strategy. Additionally, manual evaluation with two evaluators shows an average accuracy of 66.5% with an inter-annotator agreement of 0.760. The code and dataset are available at https://github.com/INDElab/KGC-LLM.git.
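To illustrate the parameter-efficient instruction-tuning setup described above, the following is a minimal sketch of applying LoRA adapters to Dolly-v2-3B with the Hugging Face transformers and peft libraries. The rank, scaling factor, and other hyperparameters shown are illustrative assumptions, not the authors' reported configuration.

```python
# Hypothetical sketch: LoRA instruction-tuning setup for Dolly-v2-3B.
# Hyperparameters below are assumed for illustration, not taken from the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

model_name = "databricks/dolly-v2-3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Parameter-efficient fine-tuning: only small low-rank adapter matrices are
# trained, while the base model weights stay frozen.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                 # rank of the low-rank update (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["query_key_value"],  # attention projection in GPT-NeoX-style models
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # typically well under 1% of all parameters
```

The wrapped model can then be trained on instruction-formatted relation extraction examples with a standard causal language modeling objective, which keeps memory and compute requirements modest compared to full fine-tuning.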

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3577
State: Published - 2023
Externally published: Yes
Event: 1st Workshop on Knowledge Base Construction from Pre-Trained Language Models and the 2nd Challenge on Language Models for Knowledge Base Construction, KBC-LM + LM-KBC 2023 - Athens, Greece
Duration: Nov 6 2023 → …
