Easy way to efficiently run 100B+ language models without high-end GPUs
Run 100B+ language models at home, BitTorrent-style.
Fine-tuning and inference up to 10x faster than offloading
Generate text using distributed 176B-parameter BLOOM or BLOOMZ and fine-tune them for your own tasks:
import torch
from torch.nn.functional import cross_entropy
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals", tuning_mode="ptune", pre_seq_len=16)
# Embeddings & prompts are on your device, BLOOM blocks are distributed across the Internet

inputs = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))  # A cat sat on a mat...

# Fine-tuning (updates only prompts or adapters hosted locally)
optimizer = torch.optim.AdamW(model.parameters())
# `data_loader` is your own DataLoader yielding (input_ids, labels) batches
for input_ids, labels in data_loader:
    outputs = model.forward(input_ids)
    loss = cross_entropy(outputs.logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
🔏 Your data will be processed by other people in the public swarm. Learn more about privacy here. For sensitive data, you can set up a private swarm among people you trust.
Connect your GPU and increase Petals capacity
Run our Docker image (works on Linux, macOS, and Windows with WSL2):
sudo docker run -p 31330:31330 --ipc host --gpus all --volume petals-cache:/cache --rm \
learningathome/petals:main python -m petals.cli.run_server bigscience/bloom-petals --port 31330
Or run these commands in an Anaconda env (requires Linux and Python 3.7+):
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U petals
python -m petals.cli.run_server bigscience/bloom-petals
📚 See FAQ to learn how to configure the server to use multiple GPUs, address common issues, etc.
You can also host BLOOMZ, a version of BLOOM fine-tuned to follow human instructions in the zero-shot regime — just replace bloom-petals with bloomz-petals.
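For example, the server command from above pointed at the BLOOMZ checkpoint:

python -m petals.cli.run_server bigscience/bloomz-petals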
🔒 Hosting a server does not allow others to run custom code on your computer. Learn more about security here.
💬 If you have any issues or feedback, let us know on our Discord server!
Check out tutorials, examples, and more
Basic tutorials:
- Getting started: tutorial
- Prompt-tune BLOOM to create a personified chatbot: tutorial
- Prompt-tune BLOOM for text semantic classification: tutorial
Useful tools and advanced guides:
- Chatbot web app (connects to Petals via an HTTP endpoint): source code
- Monitor for the public swarm: source code
- Launch your own swarm: guide
- Run a custom foundation model: guide
📋 If you build an app running BLOOM with Petals, make sure it complies with BLOOM's terms of use.
How does it work?
- Petals runs large language models like BLOOM-176B collaboratively — you load a small part of the model, then team up with people serving the other parts to run inference or fine-tuning.
- Single-batch inference runs at ≈ 1 sec per step (token) — up to 10x faster than offloading, enough for chatbots and other interactive apps. Parallel inference reaches hundreds of tokens/sec.
- Beyond classic language model APIs — you can employ any fine-tuning and sampling methods, execute custom paths through the model, or inspect its hidden states (see the sketch below). You get the comforts of an API with the flexibility of PyTorch.
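To make the last point concrete, here is a minimal sketch of a custom path through the model that inspects hidden states. It assumes the distributed model mirrors the Hugging Face BLOOM module layout (model.transformer.word_embeddings, model.transformer.word_embeddings_layernorm, and a sliceable block container model.transformer.h); treat these attribute names as assumptions, not a documented Petals API:

import torch
from transformers import BloomTokenizerFast
from petals import DistributedBloomForCausalLM

tokenizer = BloomTokenizerFast.from_pretrained("bigscience/bloom-petals")
model = DistributedBloomForCausalLM.from_pretrained("bigscience/bloom-petals")

input_ids = tokenizer("A cat sat", return_tensors="pt")["input_ids"]
with torch.no_grad():
    # Embeddings run locally on your device
    hidden = model.transformer.word_embeddings(input_ids)
    hidden = model.transformer.word_embeddings_layernorm(hidden)
    # Custom path: run only the first 5 BLOOM blocks (served remotely)
    # and inspect the resulting hidden states (assumed sliceable API)
    hidden = model.transformer.h[:5](hidden)
print(hidden.shape)  # (batch, sequence length, hidden size)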
📚 See FAQ 📜 Read paper
Installation
Here's how to install Petals with Anaconda on Linux:
conda install pytorch pytorch-cuda=11.7 -c pytorch -c nvidia
pip install -U petals
If you don't use Anaconda, you can install PyTorch in any other way. If you want to run models with 8-bit weights, please install PyTorch with CUDA 11.x or newer for compatibility with bitsandbytes.
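If you are unsure which CUDA build of PyTorch you ended up with, here is a quick check (plain PyTorch, nothing Petals-specific):

import torch
print(torch.version.cuda)         # e.g. "11.7"; None for CPU-only builds
print(torch.cuda.is_available())  # True if a usable GPU is visible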
See the instructions for macOS and Windows, the full requirements, and troubleshooting advice in our FAQ.
Benchmarks
The benchmarks below are for BLOOM-176B:
| Bandwidth | Round-trip latency | Inference, seq. len. 128 (steps/s) | Inference, seq. len. 2048 (steps/s) | Parallel forward, batch 1 (tokens/s) | Parallel forward, batch 64 (tokens/s) |
|---|---|---|---|---|---|
| **Offloading, max. possible speed on 1x A100** ¹ | | | | | |
| 256 Gbit/s | | 0.18 | 0.18 | 2.7 | 170.3 |
| 128 Gbit/s | | 0.09 | 0.09 | 2.4 | 152.8 |
| **Petals on 14 heterogeneous servers across Europe and North America** ² | | | | | |
| Real world | | 0.83 | 0.79 | 32.6 | 179.4 |
| **Petals on 3 servers, with one A100 each** ³ | | | | | |
| 1 Gbit/s | < 5 ms | 1.71 | 1.54 | 70.0 | 253.6 |
| 100 Mbit/s | < 5 ms | 1.66 | 1.49 | 56.4 | 182.0 |
| 100 Mbit/s | 100 ms | 1.23 | 1.11 | 19.7 | 112.2 |
¹ An upper bound for offloading performance, based on the best possible hardware setup for offloading: CPU RAM offloading via PCIe 4.0 with 16 PCIe lanes per GPU and PCIe switches for pairs of GPUs. We assume zero latency for this upper-bound estimate. In 8-bit, the model uses 1 GB of memory per billion parameters. PCIe 4.0 with 16 lanes has a throughput of 256 Gbit/s, so offloading 176B parameters takes 5.5 seconds. The throughput is halved (128 Gbit/s) if two GPUs sit behind the same PCIe switch.
² A real-world distributed setting with 14 servers holding 2× RTX 3060, 4× 2080Ti, 2× 3090, 2× A4000, and 4× A5000 GPUs. These are personal servers and servers from university labs, spread across Europe and North America and connected to the Internet at speeds of 100–1000 Mbit/s. Four of the servers operate from behind firewalls.
³ An optimistic setup that requires the least communication. The client nodes have 8 CPU cores and no GPU.
We provide more evaluations and discuss these results in more detail in Section 3.3 of our paper.
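As a back-of-the-envelope check of the arithmetic in footnote 1 (a sketch of the estimate, not part of any benchmark code):

# Time to stream all 176B 8-bit parameters over PCIe 4.0 x16
model_size_gb = 176 * 1        # 1 GB per billion parameters in 8-bit
pcie_gb_per_s = 256 / 8        # 256 Gbit/s -> 32 GB/s
print(model_size_gb / pcie_gb_per_s)  # 5.5 seconds per full pass over the weights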
🛠️ Contributing
Please see our FAQ on contributing.
📜 Citation
Alexander Borzunov, Dmitry Baranchuk, Tim Dettmers, Max Ryabinin, Younes Belkada, Artem Chumachenko, Pavel Samygin, and Colin Raffel. Petals: Collaborative Inference and Fine-tuning of Large Models. arXiv preprint arXiv:2209.01188, 2022.
@article{borzunov2022petals,
title = {Petals: Collaborative Inference and Fine-tuning of Large Models},
author = {Borzunov, Alexander and Baranchuk, Dmitry and Dettmers, Tim and Ryabinin, Max and Belkada, Younes and Chumachenko, Artem and Samygin, Pavel and Raffel, Colin},
journal = {arXiv preprint arXiv:2209.01188},
year = {2022},
url = {https://arxiv.org/abs/2209.01188}
}
This project is a part of the BigScience research workshop.