TY - JOUR
T1 - Knowledge-centric Prompt Composition for Knowledge Base Construction from Pre-trained Language Models
AU - Li, Xue
AU - Hughes, Anthony
AU - Llugiqi, Majlinda
AU - Polat, Fina
AU - Groth, Paul
AU - Ekaputra, Fajar J.
N1 - Publisher Copyright:
© 2023 CEUR-WS. All rights reserved.
PY - 2023
Y1 - 2023
N2 - Pretrained language models (PLMs), exemplified by the GPT family of models, have exhibited remarkable proficiency across a spectrum of natural language processing tasks and have displayed potential for extracting knowledge from within the model itself. While numerous endeavors have delved into this capability through probing or prompting methodologies, the potential for constructing comprehensive knowledge bases from PLMs remains relatively uncharted. The Knowledge Base Construction from Pre-trained Language Model Challenge (LM-KBC) [1] looks to bridge this gap. This paper presents the system implementation from team thames for Track 2 of LM-KBC. Our methodology achieves a 67% F1 score on the test set provided by the organisers, outperforming the baseline by over 40 points and ranking 2nd in Track 2. It does so through the use of additional prompt context derived from both the training data and the constraints and descriptions of the relations.
AB - Pretrained language models (PLMs), exemplified by the GPT family of models, have exhibited remarkable proficiency across a spectrum of natural language processing tasks and have displayed potential for extracting knowledge from within the model itself. While numerous endeavors have delved into this capability through probing or prompting methodologies, the potential for constructing comprehensive knowledge bases from PLMs remains relatively uncharted. The Knowledge Base Construction from Pre-trained Language Model Challenge (LM-KBC) [1] looks to bridge this gap. This paper presents the system implementation from team thames for Track 2 of LM-KBC. Our methodology achieves a 67% F1 score on the test set provided by the organisers, outperforming the baseline by over 40 points and ranking 2nd in Track 2. It does so through the use of additional prompt context derived from both the training data and the constraints and descriptions of the relations.
UR - http://www.scopus.com/inward/record.url?scp=85179551710&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85179551710
SN - 1613-0073
VL - 3577
JO - CEUR Workshop Proceedings
JF - CEUR Workshop Proceedings
T2 - 1st Workshop on Knowledge Base Construction from Pre-Trained Language Models and the 2nd Challenge on Language Models for Knowledge Base Construction, KBC-LM + LM-KBC 2023
Y2 - 6 November 2023
ER -