Knowledge-centric Prompt Composition for Knowledge Base Construction from Pre-trained Language Models

Xue Li, Anthony Hughes, Majlinda Llugiqi, Fina Polat, Paul Groth, Fajar J. Ekaputra

Research output: Contribution to journal › Conference article › peer-review

Abstract

Pretrained language models (PLMs), exemplified by the GPT family of models, have exhibited remarkable proficiency across a spectrum of natural language processing tasks and have displayed potential for extracting knowledge from within the model itself. While numerous endeavors have explored this capability through probing or prompting methodologies, the potential for constructing comprehensive knowledge bases from PLMs remains relatively uncharted. The Knowledge Base Construction from Pre-trained Language Model Challenge (LM-KBC) [1] aims to bridge this gap. This paper presents team thames' system submission to Track 2 of LM-KBC. Our methodology achieves a 67% F1 score on the test set provided by the organisers, outperforming the baseline by over 40 points and ranking 2nd in Track 2. It does so through the use of additional prompt context derived from both the training data and the constraints and descriptions of the relations.
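The following is a minimal, hypothetical sketch of how such knowledge-centric prompt composition might look, combining a relation description, its object constraints, and a few training examples into a single prompt. The relation name, constraint wording, and example triples are illustrative assumptions, not the authors' actual implementation.

```python
# Illustrative sketch (not the authors' code): compose a prompt for one relation
# by combining its description, object constraints, and few-shot examples drawn
# from the training data, then ask the PLM for the objects of a new subject.

def compose_prompt(relation, description, constraints, train_examples, subject):
    """Build a prompt string for querying a PLM about (subject, relation, ?)."""
    lines = [
        f"Relation: {relation}",
        f"Description: {description}",
        f"Constraints: {constraints}",
        "",
        "Examples:",
    ]
    # Few-shot demonstrations taken from the training split.
    for subj, objs in train_examples:
        lines.append(f"{subj} -> {', '.join(objs) if objs else 'NONE'}")
    lines.append("")
    lines.append(f"{subject} ->")
    return "\n".join(lines)


if __name__ == "__main__":
    prompt = compose_prompt(
        relation="CountryHasOfficialLanguage",
        description="The official language(s) of a country.",
        constraints="Objects must be languages; a country may have several or none.",
        train_examples=[
            ("Switzerland", ["German", "French", "Italian", "Romansh"]),
            ("Japan", ["Japanese"]),
        ],
        subject="Belgium",
    )
    print(prompt)  # This text would then be sent to a PLM such as GPT.
```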

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3577
State: Published - 2023
Externally published: Yes
Event: 1st Workshop on Knowledge Base Construction from Pre-Trained Language Models and the 2nd Challenge on Language Models for Knowledge Base Construction, KBC-LM + LM-KBC 2023 - Athens, Greece
Duration: Nov 6 2023 → …
