A parallel and distributed stochastic gradient descent implementation using commodity clusters

Robert Kennedy

Research output: Contribution to journal › Article › peer-review



Deep Learning is an increasingly important subdomain of artificial intelligence, which benefits from training on Big Data. The size and complexity of the model, combined with the size of the training dataset, make the training process very computationally expensive and time-consuming. Accelerating the training of Deep Learning models on cluster computers faces many challenges, ranging from distributed optimizers to the large communication overhead specific to systems built from off-the-shelf networking components. In this paper, we present a novel distributed and parallel implementation of stochastic gradient descent (SGD) on a distributed cluster of commodity computers. We use high-performance computing cluster (HPCC) systems as the underlying cluster environment for the implementation. We overview how the HPCC systems platform provides the environment for distributed and parallel Deep Learning, how it provides a facility to work with third-party open-source libraries such as TensorFlow, and detail our use of third-party libraries and HPCC functionality in the implementation. We provide experimental results that validate our work and show that our implementation can scale with respect to both dataset size and the number of compute nodes in the cluster.
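The abstract describes a data-parallel SGD scheme in which gradients are computed on distributed shards of the training data and combined into a single global update. The sketch below is not the paper's actual HPCC/TensorFlow implementation; it is a minimal, serially simulated illustration of synchronous parameter-averaging SGD, assuming equal-sized data shards and a simple least-squares objective (all function names here are illustrative, not from the paper).

```python
import numpy as np

def local_gradient(w, X_shard, y_shard):
    """Gradient of the mean squared error on one worker's data shard."""
    n = len(y_shard)
    return (2.0 / n) * X_shard.T @ (X_shard @ w - y_shard)

def parallel_sgd(X, y, num_workers=4, lr=0.1, epochs=200):
    """Synchronous data-parallel SGD, simulated serially.

    Each "worker" holds one shard of the data and computes a local
    gradient; the shard gradients are then averaged (the all-reduce
    step a real cluster would perform over the network) before a
    single global parameter update is applied.
    """
    shards = list(zip(np.array_split(X, num_workers),
                      np.array_split(y, num_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
        w -= lr * np.mean(grads, axis=0)  # average, then step
    return w
```

With equal shard sizes, averaging the shard gradients reproduces the full-batch gradient exactly; in a real deployment the averaging is the communication-heavy step whose overhead the paper's commodity-cluster setting must contend with.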
Original language: American English
Article number: 16
Journal: Journal of Big Data
Issue number: 1
State: Published - Dec 1 2019


  • Big data
  • Cluster computer
  • Deep learning
  • HPCC systems
  • Neural network
  • Parallel and distributed processing
  • Parallel stochastic gradient descent


