
A Study of Pipeline Parallelism in Deep Neural Networks

Research output: Contribution to journal › Article › peer-review

Abstract

The application of artificial intelligence to solve complex problems is growing in popularity. The emergence of chatbots based on artificial intelligence and natural language processing has driven the creation of increasingly large and sophisticated neural network models, which are the basis of current developments in artificial intelligence. These neural networks can comprise billions of parameters, and training them is not feasible without parallelism. This paper studies pipeline parallelism, one of the most important types of parallelism used to train neural network models in deep learning. We review the key concepts related to the topic and present a detailed analysis of three pipeline parallelism libraries: Torchgpipe, FairScale, and DeepSpeed. We examine important aspects of these libraries, such as their implementation and features. In addition, we evaluate them experimentally, carrying out parallel training runs and taking into account aspects such as the number of stages in the training pipeline and the type of balance used to partition the model across stages.
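To make the notions of pipeline stages and balance concrete, the following is a minimal sketch using Torchgpipe's GPipe wrapper, one of the three libraries analyzed. The model architecture, layer sizes, balance split, and chunk count are illustrative assumptions rather than values from the paper, and two CUDA devices are assumed to be available.

```python
# Minimal pipeline-parallelism sketch with torchgpipe.
# Assumptions (not from the paper): a toy MLP, a 2-stage pipeline
# over two CUDA devices, and 8 micro-batches per mini-batch.
import torch
import torch.nn as nn
from torchgpipe import GPipe

# torchgpipe requires an nn.Sequential so it can cut the model
# into consecutive pipeline stages.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.ReLU(),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 10),
)

# 'balance' assigns layers to stages: 4 modules on the first device,
# 1 on the second (a 2-stage pipeline). 'chunks' splits each
# mini-batch into micro-batches that flow through the stages.
model = GPipe(model, balance=[4, 1], chunks=8)

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Inputs must live on the first stage's device; the output (and
# therefore the targets) on the last stage's device.
x = torch.randn(64, 1024, device=model.devices[0])
y = torch.randint(0, 10, (64,), device=model.devices[-1])

for _ in range(3):
    optimizer.zero_grad()
    out = model(x)           # micro-batches traverse the pipeline
    loss = loss_fn(out, y)
    loss.backward()
    optimizer.step()
```

An uneven split such as balance=[4, 1] is exactly the kind of balancing choice the paper's experiments vary: how modules are distributed across stages determines how evenly work is spread and, with it, pipeline utilization.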

Original language: English
Pages (from-to): 48-59
Number of pages: 12
Journal: Revista Colombiana de Computacion
Volume: 25
Issue number: 1
DOIs
State: Published - 30 Jan 2024

Keywords

  • Deep learning
  • Artificial neural networks
  • Distributed training
  • Parallelism

