Abstract
Knowledge graph embedding (KGE) models have become popular for enabling efficient and scalable discovery in knowledge graphs. These models learn low-rank vector representations of a knowledge graph's entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new ways of representing the interaction functions between embeddings (i.e., scoring functions). We argue, however, that the choice of training loss function can have a substantial impact on a model's performance, an aspect that has so far been largely neglected. In this paper, we provide a thorough analysis of loss functions that can guide embedding learning towards reducing evaluation-metric-based error. We experiment with the most common loss functions for KGE models and also propose a new loss that represents training error in KGE models. Our results show that a loss based on training error can enhance the performance of current models on multiple datasets.
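To make the comparison concrete, the minimal sketch below contrasts two losses commonly used to train KGE models, a pairwise margin ranking loss and a pointwise logistic loss, on top of a TransE-style scoring function. The scoring function and all names here are illustrative assumptions for exposition; the paper's proposed training-error-based loss is not reproduced.

```python
# Minimal sketch of two common KGE training losses (PyTorch).
# Assumes a TransE-style scoring function; illustrative only.
import torch
import torch.nn.functional as F

def transe_score(h, r, t):
    # Higher score = more plausible triple (negative L2 distance).
    return -torch.norm(h + r - t, p=2, dim=-1)

def margin_ranking_loss(pos_scores, neg_scores, margin=1.0):
    # Pairwise loss: push positive triples above negatives by `margin`.
    return F.relu(margin - pos_scores + neg_scores).mean()

def logistic_loss(pos_scores, neg_scores):
    # Pointwise loss: binary cross-entropy on triple plausibility.
    scores = torch.cat([pos_scores, neg_scores])
    labels = torch.cat([torch.ones_like(pos_scores),
                        torch.zeros_like(neg_scores)])
    return F.binary_cross_entropy_with_logits(scores, labels)
```

The pairwise loss only constrains the relative ordering of positive and negative triples, while the pointwise loss treats each triple as an independent classification target; which behaves better depends on the scoring function and dataset.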
| Original language | English |
|---|---|
| Pages (from-to) | 1-10 |
| Number of pages | 10 |
| Journal | CEUR Workshop Proceedings |
| Volume | 2377 |
| State | Published - 2019 |
| Externally published | Yes |
| Event | 2019 Workshop on Deep Learning for Knowledge Graphs, DL4KG 2019 - Portoroz, Slovenia. Duration: Jun 2 2019 → … |