Loss functions in knowledge graph embedding models

Sameh K. Mohamed, Vít Nováček, Pierre Yves Vandenbussche, Emir Muñoz

Research output: Contribution to journal › Conference article › peer-review


Abstract

Knowledge graph embedding (KGE) models have become popular for enabling efficient and scalable discovery in knowledge graphs. These models learn low-rank vector representations of knowledge graph entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new interaction functions (i.e., scoring functions) between embeddings. However, we argue that the choice of training loss function can have a substantial impact on a model’s efficiency, a factor largely neglected by the state of the art so far. In this paper, we provide a thorough analysis of different loss functions that can aid embedding learning by reducing evaluation-metric-based error. We experiment with the most common loss functions for KGE models and also propose a new loss for representing training error in KGE models. Our results show that a loss based on training error can enhance the performance of current models on multiple datasets.
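To make the abstract's terminology concrete, the following is a minimal sketch (not the authors' code) of two loss functions commonly used to train KGE models, written in PyTorch. The tensors pos_scores and neg_scores, the function names, and the higher-is-better scoring convention are all illustrative assumptions.

```python
# Illustrative sketch of two common KGE training losses, assuming a
# scoring function f(h, r, t) where HIGHER scores mean more plausible
# triples. pos_scores / neg_scores hold f(.) for true and corrupted
# triples, respectively. Names are hypothetical, not from the paper.
import torch
import torch.nn.functional as F

def margin_ranking_loss(pos_scores: torch.Tensor,
                        neg_scores: torch.Tensor,
                        margin: float = 1.0) -> torch.Tensor:
    # Pairwise hinge loss: require each positive triple to outscore
    # its corrupted counterpart by at least `margin`.
    return F.relu(margin + neg_scores - pos_scores).mean()

def logistic_loss(pos_scores: torch.Tensor,
                  neg_scores: torch.Tensor) -> torch.Tensor:
    # Pointwise softplus (logistic) loss with implicit labels +1 for
    # true triples and -1 for corrupted ones.
    return 0.5 * (F.softplus(-pos_scores).mean()
                  + F.softplus(neg_scores).mean())
```

Under this convention, swapping one loss for the other leaves the scoring function untouched, which is the kind of controlled comparison the paper's analysis relies on.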

Original language: English
Pages (from-to): 1-10
Number of pages: 10
Journal: CEUR Workshop Proceedings
Volume: 2377
State: Published - 2019
Externally published: Yes
Event: 2019 Workshop on Deep Learning for Knowledge Graphs, DL4KG 2019 - Portoroz, Slovenia
Duration: Jun 2 2019 → …
