Distributed similarity joins on big textual data: Toward a robust cost-based framework

Fabian Fier, Johan-Christoph Freytag

Research output: Contribution to journal › Article › peer-review

Abstract

Motivated by increasing dataset sizes, various MapReduce-based similarity join algorithms have emerged. In our past work (to appear), we compared nine of the most prominent algorithms experimentally. Surprisingly, we found that their runtimes become prohibitively long even for moderately large datasets. There are two main reasons. First, data grouping and replication between Map and Reduce depend on input data characteristics such as word distribution. A skewed distribution, as is common for textual data, leads to data groups with very unequal computation costs, which in turn causes Straggling Reducer issues. Second, each Reduce instance has only limited main memory, and data spilling likewise leads to Straggling Reducers. To leverage parallelization, all approaches we investigated rely on high replication and hit this memory limit even with relatively small input data. In this work, we propose an initial approach toward a join framework that overcomes both issues. It includes a cost-based grouping and replication strategy that is robust against large data sizes and various data characteristics such as skew. Furthermore, we propose an extension to the MapReduce programming paradigm that unblocks the Reduce execution by running Reducers on partial intermediate data, allowing for arbitrarily large datasets between Map and Reduce.
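The paper itself contains no code; the following minimal Python sketch only illustrates the kind of skew problem and cost-based group splitting the abstract alludes to. All names (map_phase, split_heavy_groups, COST_BUDGET) are illustrative assumptions, not the authors' framework, and the per-group record budget stands in for a real per-Reducer cost model.

```python
# Sketch: skew-aware grouping for a set-similarity join, MapReduce-style.
# Hypothetical helpers; not the paper's actual algorithm or API.
from collections import defaultdict
from itertools import combinations

COST_BUDGET = 4  # illustrative cap on records per group (proxy for Reducer cost)

def map_phase(records):
    """Naive signature scheme: key each record by its tokens. A skewed
    token distribution yields a few huge groups -> Straggling Reducers."""
    groups = defaultdict(list)
    for rid, tokens in records:
        for tok in tokens:
            groups[tok].append((rid, tokens))
    return groups

def split_heavy_groups(groups, budget=COST_BUDGET):
    """Cost-based rebalancing: split any oversized group into chunks and
    replicate chunk pairs so every candidate pair still co-occurs in at
    least one group (duplicates are removed when results are merged)."""
    balanced = {}
    for key, recs in groups.items():
        if len(recs) <= budget:
            balanced[key] = recs
            continue
        chunks = [recs[i:i + budget] for i in range(0, len(recs), budget)]
        for i, ci in enumerate(chunks):
            for j in range(i, len(chunks)):
                balanced[(key, i, j)] = ci if i == j else ci + chunks[j]
    return balanced

def reduce_phase(group, threshold=0.5):
    """Verify candidate pairs within one group via Jaccard similarity."""
    out = set()
    for (id1, t1), (id2, t2) in combinations(group, 2):
        if id1 == id2:
            continue
        sim = len(set(t1) & set(t2)) / len(set(t1) | set(t2))
        if sim >= threshold:
            out.add(tuple(sorted((id1, id2))))
    return out

# Usage: union per-group results; the set deduplicates replicated pairs.
records = [(1, ["a", "b"]), (2, ["a", "c"]), (3, ["a", "b", "c"]), (4, ["d"])]
results = set()
for group in split_heavy_groups(map_phase(records)).values():
    results |= reduce_phase(group)
print(sorted(results))
```

Note that this sketch still materializes all groups in memory; the abstract's second contribution, running Reducers on partial intermediate data, would be what removes that limit in an actual MapReduce deployment.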
Original language: American English
Journal: CEUR Workshop Proceedings
State: Published - Jun 1, 2017
