Deep Learning is an increasingly important subdomain of artificial intelligence, one that benefits from training on Big Data. The size and complexity of the model, combined with the size of the training dataset, make the training process very computationally expensive and time-consuming. Accelerating the training of Deep Learning on cluster computers faces many challenges, ranging from distributed optimizers to the large communication overhead specific to systems built from off-the-shelf networking components. In this paper, we present a novel distributed and parallel implementation of stochastic gradient descent (SGD) on a cluster of commodity computers. We use high-performance computing cluster (HPCC) systems as the underlying cluster environment for the implementation. We describe how the HPCC Systems platform provides the environment for distributed and parallel Deep Learning, how it provides a facility for working with third-party open-source libraries such as TensorFlow, and detail our use of third-party libraries and HPCC Systems functionality in the implementation. We provide experimental results that validate our work and show that our implementation scales with respect to both dataset size and the number of compute nodes in the cluster.
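The abstract describes a data-parallel SGD scheme in which the training set is partitioned across the nodes of a commodity cluster. The sketch below is a minimal, hypothetical illustration of that idea in pure Python; it is not the paper's actual HPCC Systems / TensorFlow implementation. Each simulated "worker" holds one shard of the data and computes a local gradient for a 1-D least-squares model, and a coordinator averages the gradients before applying the update.

```python
import random

def local_gradient(w, shard):
    """Gradient of the mean squared error 0.5*(w*x - y)^2 over one data shard."""
    return sum((w * x - y) * x for x, y in shard) / len(shard)

def parallel_sgd_step(w, shards, lr=0.1):
    """One synchronous step: average the per-shard gradients, then update w."""
    avg_grad = sum(local_gradient(w, s) for s in shards) / len(shards)
    return w - lr * avg_grad

# Synthetic dataset with true weight 2.0, split across 4 simulated workers
# (data parallelism: each worker sees only its own shard).
random.seed(0)
data = [(x, 2.0 * x) for x in (random.uniform(-1, 1) for _ in range(200))]
shards = [data[i::4] for i in range(4)]

w = 0.0
for _ in range(300):
    w = parallel_sgd_step(w, shards)
# w converges toward the true weight 2.0
```

In a real cluster, the per-shard gradient computation runs on separate nodes and the averaging step becomes a communication round (e.g. an all-reduce), which is the source of the communication overhead the abstract discusses.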
|Original language||American English|
|Journal||Journal of Big Data|
|State||Published - Dec 1 2019|
- Big data
- Cluster computer
- Deep learning
- HPCC systems
- Neural network
- Parallel and distributed processing
- Parallel stochastic gradient descent
Fingerprint
Dive into the research topics of 'A parallel and distributed stochastic gradient descent implementation using commodity clusters'.
Center for Advanced Knowledge Enablement membership with Florida Atlantic University (Finished)
Kennedy, R. & Villanustre, F.
07/6/18 → 07/6/19