Complex Query Answering with Neural Link Predictors

Daniel Daza, Michael Cochez, Erik Arakelyan, Pasquale Minervini

Research output: Contribution to journal › Article

Abstract

Neural link predictors are immensely useful for identifying missing edges in large-scale Knowledge Graphs. However, it is still not clear how to use these models
for answering more complex queries that arise in a number of domains, such as
queries using logical conjunctions (∧), disjunctions (∨) and existential quantifiers
(∃), while accounting for missing edges. In this work, we propose a framework
for efficiently answering complex queries on incomplete Knowledge Graphs. We
translate each query into an end-to-end differentiable objective, where the truth
value of each atom is computed by a pre-trained neural link predictor. We then
analyse two solutions to the optimisation problem, including gradient-based and
combinatorial search. In our experiments, the proposed approach produces more
accurate results than state-of-the-art methods — black-box neural models trained
on millions of generated queries — without the need for training on a large and
diverse set of complex queries. Using orders of magnitude less training data, we
obtain relative improvements ranging from 8% up to 40% in Hits@3 across different knowledge graphs containing factual information. Finally, we demonstrate
that it is possible to explain the outcome of our model in terms of the intermediate
solutions identified for each of the complex query atoms.
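The framework described above can be illustrated with a small sketch. The names and the toy score table below are hypothetical (in the paper the atom scores would come from a pre-trained neural link predictor such as ComplEx); the sketch only shows how atom truth values might be combined with a product t-norm for conjunction and answered by beam search over intermediate entities, which is one of the two optimisation strategies the abstract mentions.

```python
import numpy as np

# Hypothetical stand-in for a pre-trained neural link predictor:
# a random table of scores in [0, 1] over (head, relation, tail) triples.
rng = np.random.default_rng(0)
n_entities, n_relations = 5, 2
scores = rng.uniform(size=(n_entities, n_relations, n_entities))

def t_norm(a, b):
    # Product t-norm: a continuous relaxation of logical conjunction.
    return a * b

def answer_path_query(anchor, r1, r2, k=3):
    """Score the 2-hop query  ?Y : ∃X . r1(anchor, X) ∧ r2(X, Y)
    via beam search (the combinatorial variant): keep the top-k
    intermediate entities X, combine atom scores with the t-norm,
    and return the best path score for each candidate answer Y."""
    first_hop = scores[anchor, r1]        # score of r1(anchor, X) for each X
    beam = np.argsort(-first_hop)[:k]     # top-k candidates for X
    per_x = [t_norm(first_hop[x], scores[x, r2]) for x in beam]
    return np.max(per_x, axis=0)          # best score per candidate Y

answers = answer_path_query(anchor=0, r1=0, r2=1)
best_answer = int(np.argmax(answers))
```

Because every atom score comes from the same link predictor, the intermediate entities selected in the beam can also be inspected directly, which is what makes the model's answers explainable in terms of intermediate solutions.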
Original language: American English
Journal: arXiv
State: Published - 2021

