TY - JOUR
T1 - Evaluation of unsupervised static topic models’ emergence detection ability
AU - Li, Xue
AU - Esposito, Ciro D.
AU - Groth, Paul
AU - Sitruk, Jonathan
AU - Szatmari, Balazs
AU - Wijnberg, Nachoem
N1 - Publisher Copyright:
© 2025, PeerJ Inc. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Detecting emerging topics is crucial for understanding research trends, technological advancements, and shifts in public discourse. While unsupervised topic modeling techniques such as latent Dirichlet allocation (LDA), BERTopic, and CoWords clustering are widely used for topic extraction, their ability to retrospectively detect emerging topics without relying on ground truth labels has not been systematically compared. This gap largely stems from the lack of a dedicated evaluation metric for measuring emergence detection. In this study, we introduce a quantitative evaluation metric to assess the effectiveness of topic models in detecting emerging topics. We evaluate three topic modeling approaches using both qualitative analysis and our proposed emergence detection metric. Our results indicate that, qualitatively, CoWords identifies emerging topics earlier than LDA and BERTopic. Quantitatively, our evaluation metric demonstrates that LDA achieves an average F1 score of 80.6% in emergence detection, outperforming BERTopic by 24.0%. These findings highlight the strengths and limitations of different topic models for emergence detection, while our proposed metric provides a robust framework for future benchmarking in this area.
KW - Static topic modeling
KW - Topic emergence detection
KW - Unsupervised topic modeling
UR - http://www.scopus.com/inward/record.url?scp=105007305640&partnerID=8YFLogxK
U2 - 10.7717/peerj-cs.2875
DO - 10.7717/peerj-cs.2875
M3 - Article
AN - SCOPUS:105007305640
SN - 2376-5992
VL - 11
JO - PeerJ Computer Science
JF - PeerJ Computer Science
M1 - e2875
ER -