Automatically shard your large model across multiple GPUs; works without torch.distributed
tensor_parallel
Run large PyTorch models on multiple GPUs in one line of code.
import transformers
import tensor_parallel as tp
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-13b")
model = transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-13b") # use opt-125m for testing
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"]) # <- each GPU has half the weights
inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"].to("cuda:0")
outputs = model.generate(inputs, num_beams=5)
print(tokenizer.decode(outputs[0])) # A cat sat on my lap for a few minutes ...
model(input_ids=inputs, labels=inputs).loss.backward() # training works as usual
Installation
Latest stable version (recommended):
pip install tensor_parallel
Bleeding edge version:
pip install https://github.com/BlackSamorez/tensor_parallel/archive/main.zip
Usage
Simply wrap your PyTorch model with tp.tensor_parallel and use it normally. For best memory efficiency, call tp.tensor_parallel while the model is still on CPU.
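For instance, a minimal sketch of the CPU-first pattern with a plain torch.nn module (illustrative only; as in the example above, the model is built on CPU and its shards are moved to the GPUs when wrapped):

import torch.nn as nn
import tensor_parallel as tp

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1024))  # built on CPU
model = tp.tensor_parallel(model, ["cuda:0", "cuda:1"])  # shards are created here and placed on the GPUs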
Here are a few use cases:
- examples/training_flan-t5-xl.ipynb - fine-tune the full FLAN-T5 model on text summarization
- TBA - running inference on a large language model with LLM.8bit + tensor_parallel
- TBA - defining a custom parallelism strategy
Advanced parameters to tensor_parallel (see the sketch after this list):
- device_ids: List[device] - which devices to use; defaults to all available GPUs
- output_device: device - model outputs will have this device
- config: tp.Config - use a custom parallelism strategy, see slicing_configs.py
- distributed: bool - if True, use the torch.distributed backend instead of threading (requires torchrun)
- sharded: bool - if True, find all trainable parameters that weren't split by tensor parallelism and split them using the ZeRO-3 algorithm
  - weights will be split between GPUs and re-assembled before each forward pass
  - TL;DR use this when training to avoid duplicate parameters (enabled by default!)
- sharded_param_names: List[str] - parameter names that should be sharded this way; default = found automatically
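A minimal sketch of how these keyword arguments might be combined (the argument names are the ones listed above; the exact signature may differ between versions, so treat this as an illustration rather than a reference):

import torch.nn as nn
import tensor_parallel as tp

model = nn.Linear(1024, 1024)  # stand-in for a large model, still on CPU
model = tp.tensor_parallel(
    model,
    device_ids=["cuda:0", "cuda:1"],  # shard across these two GPUs
    output_device="cuda:0",           # gather outputs on this device
    distributed=False,                # threading backend; no torchrun needed
    sharded=True,                     # ZeRO-3-style sharding of parameters not split by tensor parallelism
)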
FAQ
- Q: I don't have a multi-GPU server. Can I use tensor_parallel in Google Colab?
- A: Colab has a single GPU, so there's no point in tensor parallelism. However, Kaggle offers two T4s for free to all phone-verified accounts.
- Q: What is tensor parallelism?
- A: You split each layer's weights into parts, multiply each part on a separate GPU, then gather the results (see the sketch after this FAQ). Read more here
- Q: Should I use TensorParallel or DataParallel?
- A: TensorParallel for large models, DataParallel for smaller ones
- Q: How does it compare against FullyShardedDataParallel and ZeRO?
- A: ZeRO is better if you can fit a large batch, TensorParallel is better for small batches
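To make the idea concrete, here is a minimal hand-rolled sketch of column-wise tensor parallelism for a single linear layer. This is plain PyTorch written for illustration, not the library's internal implementation:

import torch

# A "large" linear layer: y = x @ W.T, with W of shape (out_features, in_features)
W = torch.randn(1024, 512)
x = torch.randn(8, 512)

# Split the output dimension across two devices (swap in "cuda:0"/"cuda:1" on a real multi-GPU box)
devices = ["cpu", "cpu"]
W_shards = [w.to(d) for w, d in zip(torch.chunk(W, chunks=2, dim=0), devices)]

# Each device multiplies the input by its own weight shard independently...
partials = [x.to(d) @ w.T for w, d in zip(W_shards, devices)]

# ...and the partial outputs are gathered (concatenated) to form the full result
y = torch.cat([p.to(devices[0]) for p in partials], dim=-1)

assert torch.allclose(y, x @ W.T)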
Why use tensor_parallel ...
- v.s. DeepSpeed and FairScale
- DeepSpeed has many parallelization strategies, but requires careful configuration
- tensor_parallel has one strategy that works with 1 line of code
- tensor_parallel works in a jupyter notebook
- v.s. MegatronLM?
- MegatronLM has great tensor parallelism for one model architecture
- tensor_parallel has good parallelism for any architecture
- tensor_parallel is way easier to install
- v.s. parallelformers?
- parallelformers implements a fixed list of architectures
- tensor_parallel works for any architecture automatically
- parallelformers is inference-only, tensor_parallel supports training
- v.s. alpa
- alpa is a powerful tool for automatic distributed training / inference in JAX
- tensor_parallel works with PyTorch
- v.s. Model.parallelize()?
- both are easy to use, both fit large models
- in parallelize, one GPU works at a time
- in tensor_parallel, GPUs work in parallel
In short, use tensor_parallel
for quick prototyping on a single machine.
Use DeepSpeed+Megatron or alpa for million-dollar training runs.
Troubleshooting
If you experience NCCL errors or random hangs, you may have some code errors that are not displayed properly.
To debug these errors, we recommend restarting with export TENSOR_PARALLEL_USE_NATIVE=1 or running on a single device.
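If restarting the shell is inconvenient (e.g. in a notebook), one way to set this flag from Python is sketched below; this assumes the variable is read when tensor_parallel is imported/used, so set it first in a fresh kernel:

import os
os.environ["TENSOR_PARALLEL_USE_NATIVE"] = "1"  # assumption: must be set before tensor_parallel reads it
import tensor_parallel as tp  # import after setting the flag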
If you found a bug or encountered a problem, please report it to our issue tracker.
We will do our best to help, but it may take some time before we get to it.
Please create issues only if your problem is specifically with tensor_parallel. For example, if you need help installing transformers or optimizing your code, please seek it elsewhere.
Code style
We use black and isort for all pull requests.
Before committing your code, simply run black . && isort . and you will be fine.