# 1D Tensor Parallelism
Author: Zhengda Bian, Yongbin Li
**Example Code**

**Related Paper**
- [Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism](https://arxiv.org/abs/1909.08053)
## Introduction
Tensor parallelism partitions model weights across multiple devices in order to reduce memory load. An efficient 1D tensor parallelism implementation was introduced by Megatron-LM.
Let's take a linear layer as an example, which consists of a GEMM $Y = XA$. Given 2 processors, we split the columns of $A$ into $[A_1 ~ A_2]$, and calculate $Y_i = XA_i$ on each processor, which then forms $[Y_1 ~ Y_2] = [XA_1 ~ XA_2]$. This is called a column-parallel fashion.
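The split can be checked numerically. Below is a minimal single-process sketch in plain PyTorch (not ColossalAI's implementation) that emulates the 2 processors and verifies that concatenating the partial outputs $[Y_1 ~ Y_2]$ reproduces the full GEMM:

```python
import torch

torch.manual_seed(0)
X = torch.randn(4, 8)       # input activations
A = torch.randn(8, 6)       # full weight of the first linear layer

# Column parallelism: each of the 2 "processors" holds half of A's columns.
A1, A2 = A.chunk(2, dim=1)  # A = [A1  A2]

Y1 = X @ A1                 # computed on processor 1
Y2 = X @ A2                 # computed on processor 2

# Concatenating the partial outputs reproduces Y = XA.
Y = torch.cat([Y1, Y2], dim=1)
assert torch.allclose(Y, X @ A, atol=1e-5)
```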
When a second linear layer $Z = YB$ follows the column-parallel one, we split $B$ into
$$
\left[\begin{matrix} B_1 \\ B_2 \end{matrix}\right],
$$
which is called a row-parallel fashion. To calculate
$$
Z = [Y_1 ~ Y_2] \left[\begin{matrix} B_1 \\ B_2 \end{matrix}\right],
$$
we first calculate $Y_i B_i$ on each processor, then use an all-reduce to aggregate the results as $Z = Y_1 B_1 + Y_2 B_2$.
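The same kind of single-process sketch shows why the all-reduce is simply a sum of the per-processor partial products (variable names are illustrative):

```python
import torch

torch.manual_seed(0)
Y = torch.randn(4, 6)       # output of the column-parallel layer, Y = [Y1  Y2]
B = torch.randn(6, 5)       # full weight of the second linear layer

Y1, Y2 = Y.chunk(2, dim=1)  # column shards already held by the 2 processors
B1, B2 = B.chunk(2, dim=0)  # row shards of B

Z1 = Y1 @ B1                # partial result on processor 1
Z2 = Y2 @ B2                # partial result on processor 2

# In a real multi-GPU run this sum is performed by an all-reduce, e.g.
# torch.distributed.all_reduce(Z_i); here we emulate it with +.
Z = Z1 + Z2
assert torch.allclose(Z, Y @ B, atol=1e-5)
```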
We also need to note that in the backward pass, the column-parallel linear layer needs to aggregate the gradients of the input tensor $X$, because on each processor $i$ we only have $\dot{X_i} = \dot{Y_i} A_i^T$. Thus, we apply an all-reduce across the processors to get $\dot{X} = \dot{Y} A^T = \dot{Y_1} A_1^T + \dot{Y_2} A_2^T$.
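One common way to express this is an operator that is the identity in the forward pass and an all-reduce in the backward pass, placed in front of the column-parallel layer. The sketch below uses a custom PyTorch autograd function; the class name is illustrative, and it assumes an already-initialized `torch.distributed` process group:

```python
import torch
import torch.distributed as dist

class CopyToTensorParallelRegion(torch.autograd.Function):
    """Identity in the forward pass; all-reduces the gradient in the
    backward pass, so every processor obtains the full input gradient
    dX = dY1 A1^T + dY2 A2^T."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Sum the partial input gradients held by the processors.
        dist.all_reduce(grad_output, op=dist.ReduceOp.SUM)
        return grad_output
```

The row-parallel layer uses the mirrored operator: an all-reduce in the forward pass and the identity in the backward pass.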
## Efficiency
Given $P$ processors, we present the theoretical computation and memory cost, as well as the communication cost based on the ring algorithm, in both the forward and backward pass of 1D tensor parallelism.
| Computation | Memory (parameters) | Memory (activations) | Communication (bandwidth) | Communication (latency) |
| :-: | :-: | :-: | :-: | :-: |
| $O(1/P)$ | $O(1/P)$ | $O(1)$ | $O(2(P-1)/P)$ | $O(2(P-1))$ |
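As a worked example, with $P = 4$ processors, each ring all-reduce transfers $2(P-1)/P = 1.5\times$ the message size per device and takes $2(P-1) = 6$ communication steps; one all-reduce is needed in the forward pass (for the row-parallel layer) and one in the backward pass (for the column-parallel layer).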
## Usage
1D tensor parallelism is implemented by the `Shardformer` feature in the newest version of ColossalAI.
For more details about the ideas and usage of `Shardformer`, please refer to the Shardformer Doc.
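A rough sketch of what sharding a Hugging Face model with `Shardformer` can look like is shown below. The exact API surface (the `ShardConfig` fields and the return value of `optimize`) is an assumption here and may differ between versions, so treat the Shardformer Doc as authoritative:

```python
import torch.distributed as dist
from transformers import LlamaForCausalLM

import colossalai
from colossalai.shardformer import ShardConfig, ShardFormer

# Launch one process per GPU (assumes torchrun-style environment variables).
colossalai.launch_from_torch()

model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Shard linear layers in the column-/row-parallel fashion described above,
# across all ranks. Field names here are assumptions, not verified API.
shard_config = ShardConfig(
    tensor_parallel_process_group=dist.group.WORLD,
    enable_tensor_parallelism=True,
)
sharded_model, shared_params = ShardFormer(shard_config=shard_config).optimize(model)
```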