A Benchmark for the Detection of Metalinguistic Disagreements between LLMs and Knowledge Graphs

Bradley P. Allen, Paul T. Groth

Research output: Contribution to journal › Conference article › peer-review

Abstract

Evaluating large language models (LLMs) for tasks like fact extraction in support of knowledge graph construction frequently involves computing accuracy metrics using a ground truth benchmark based on a knowledge graph (KG). These evaluations assume that errors represent factual disagreements. However, human discourse frequently features metalinguistic disagreement, where agents differ not on facts but on the meaning of the language used to express them. Given the complexity of natural language processing and generation using LLMs, we ask: do metalinguistic disagreements occur between LLMs and KGs? Based on an investigation using the T-REx knowledge alignment dataset, we hypothesize that metalinguistic disagreement does in fact occur between LLMs and KGs, with potential relevance for the practice of knowledge graph engineering. We propose a benchmark for evaluating the detection of factual and metalinguistic disagreements between LLMs and KGs. An initial proof of concept of such a benchmark is available on GitHub.
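To make the abstract's distinction concrete, the following is a minimal Python sketch of how a benchmark entry might be scored, not the paper's implementation. The Triple class, the classify_verdict function, and both boolean signals are illustrative assumptions; in practice the signals would come from prompting the LLM to verify the triple and to state how it interprets the predicate, then comparing that interpretation with the KG's property definition.

    from dataclasses import dataclass
    from enum import Enum


    class Verdict(Enum):
        """Possible outcomes when an LLM is asked to verify a KG triple."""
        AGREEMENT = "agreement"
        FACTUAL_DISAGREEMENT = "factual disagreement"
        METALINGUISTIC_DISAGREEMENT = "metalinguistic disagreement"


    @dataclass
    class Triple:
        """A KG statement, e.g. drawn from a T-REx-style alignment dataset."""
        subject: str
        predicate: str
        obj: str


    def classify_verdict(triple: Triple,
                         llm_asserts_triple: bool,
                         llm_shares_predicate_sense: bool) -> Verdict:
        """Separate factual from metalinguistic disagreement.

        Both boolean inputs are hypothetical signals assumed to have been
        elicited from the LLM beforehand.
        """
        if llm_asserts_triple:
            return Verdict.AGREEMENT
        if llm_shares_predicate_sense:
            # The LLM reads the predicate the same way the KG does but
            # rejects the statement: a genuine disagreement about facts.
            return Verdict.FACTUAL_DISAGREEMENT
        # The LLM rejects the statement only under its own, different
        # reading of the predicate: a disagreement about meaning.
        return Verdict.METALINGUISTIC_DISAGREEMENT


    if __name__ == "__main__":
        t = Triple("Douglas Adams", "educated at", "St John's College")
        print(classify_verdict(t, llm_asserts_triple=False,
                               llm_shares_predicate_sense=False))

Under this reading, a naive accuracy metric would count both disagreement types as errors, whereas the benchmark proposed in the paper aims to tell them apart.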

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 3953
State: Published - 2025
Externally published: Yes
Event: 2024 Harmonising Generative AI and Semantic Web Technologies, HGAIS 2024 - Baltimore, United States
Duration: Nov 13 2024 → …

Keywords

  • fact checking
  • knowledge graphs
  • large language models
  • metalinguistic disagreement
