Evaluation of unsupervised static topic models’ emergence detection ability

Xue Li, Ciro D. Esposito, Paul Groth, Jonathan Sitruk, Balazs Szatmari, Nachoem Wijnberg

Research output: Contribution to journal › Article › peer-review

Abstract

Detecting emerging topics is crucial for understanding research trends, technological advancements, and shifts in public discourse. While unsupervised topic modeling techniques such as Latent Dirichlet Allocation (LDA), BERTopic, and CoWords clustering are widely used for topic extraction, their ability to retrospectively detect emerging topics without relying on ground-truth labels has not been systematically compared. This gap largely stems from the lack of a dedicated evaluation metric for measuring emergence detection. In this study, we introduce a quantitative evaluation metric to assess the effectiveness of topic models in detecting emerging topics. We evaluate three topic modeling approaches using both qualitative analysis and our proposed emergence detection metric. Our results indicate that, qualitatively, CoWords identifies emerging topics earlier than LDA and BERTopic. Quantitatively, our evaluation metric demonstrates that LDA achieves an average F1 score of 80.6% in emergence detection, outperforming BERTopic by 24.0%. These findings highlight the strengths and limitations of different topic models for emergence detection, while our proposed metric provides a robust framework for future benchmarking in this area.
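
The abstract does not spell out how the metric is constructed, but the reported average F1 score suggests emergence detection is scored as a binary classification task: each topic (or topic-period pair) is either flagged as emerging or not, and the flags are compared against retrospective labels. The Python sketch below illustrates that framing only; the label arrays, the per-period layout, and the averaging scheme are illustrative assumptions, not the paper's actual protocol.

    # Minimal sketch: scoring emergence detection as binary classification.
    # All labels here are fabricated for illustration; the paper's labeling
    # protocol and averaging scheme may differ.
    from sklearn.metrics import f1_score

    # 1 = topic flagged as emerging in a given period, 0 = not emerging.
    true_emerging = [     # retrospective ground-truth labels (illustrative)
        [0, 1, 1, 0, 0],  # period 1
        [1, 1, 0, 0, 1],  # period 2
    ]
    pred_emerging = [     # hypothetical flags from one topic model
        [0, 1, 0, 0, 0],
        [1, 1, 0, 1, 1],
    ]

    # Average the per-period F1 scores, mirroring the reported "average F1".
    scores = [f1_score(t, p) for t, p in zip(true_emerging, pred_emerging)]
    print(f"average F1: {sum(scores) / len(scores):.3f}")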

Original language: English
Article number: e2875
Journal: PeerJ Computer Science
Volume: 11
DOI
State: Published - 2025

Keywords

  • Static topic modeling
  • Topic emergence detection
  • Unsupervised topic modeling
