Data-driven classification of the certainty of scholarly assertions

Mario Prieto, Helena Deus, Anita de Waard, Erik Schultes, Beatriz García-Jiménez, Mark D Wilkinson

Research output: Contribution to journal › Article › peer-review

Abstract

The grammatical structures scholars use to express their assertions are intended to convey various degrees of certainty or speculation. Prior studies have suggested a variety of categorization systems for scholarly certainty; however, these have not been objectively tested for their validity, particularly with respect to representing the interpretation by the reader rather than the intention of the author. In this study, we use a series of questionnaires to determine how researchers classify various scholarly assertions, using three distinct certainty classification systems. We find that there are three distinct categories of certainty along a spectrum from high to low. We show that these categories can be detected in an automated manner, using a machine learning model, with a cross-validation accuracy of 89.2% relative to an author-annotated corpus, and 82.2% accuracy against a publicly annotated corpus. This finding provides an opportunity for contextual metadata related to certainty to be captured as part of text-mining pipelines, which currently miss these subtle linguistic cues. We provide an exemplar machine-accessible representation, a Nanopublication, in which the certainty category is embedded as metadata in a formal, ontology-based manner within text-mined scholarly assertions.
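The certainty-detection approach summarized above can be sketched as a standard supervised text-classification task with cross-validation. The snippet below is a minimal illustration only, not the authors' published pipeline: the toy assertions, labels, and the TF-IDF plus logistic-regression model are all assumptions chosen to show the shape of such an experiment.

```python
# Hedged sketch: classifying assertions into three certainty categories.
# All example sentences and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Toy corpus: assertions spanning a high-to-low certainty spectrum.
assertions = [
    "X binds Y.",                           # high certainty
    "X is known to regulate Y.",            # high certainty
    "These results suggest X may bind Y.",  # medium certainty
    "X appears to influence Y.",            # medium certainty
    "X might possibly interact with Y.",    # low certainty
    "It is unclear whether X affects Y.",   # low certainty
]
labels = ["high", "high", "medium", "medium", "low", "low"]

# Word and bigram features feeding a linear classifier.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Stratified 2-fold cross-validation yields one accuracy score per fold.
scores = cross_val_score(model, assertions, labels, cv=2)
print(len(scores))
```

With a realistically sized annotated corpus, the per-fold scores would be averaged to report a single cross-validation accuracy, analogous to the figures quoted in the abstract.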

Original language: English
Article number: e8871
Pages (from-to): e8871
Journal: PeerJ
Volume: 8
Issue number: 4
DOIs
State: Published - 2020

Keywords

  • Certainty
  • FAIR Data
  • Machine learning
  • Scholarly communication
  • Text mining
