Project description
Red Coast (redco) is a lightweight and user-friendly tool that automates distributed training and inference for large models and simplifies ML pipeline development, without requiring any MLSys expertise from users.
Check out our Tech Report for details! There is also a Quick Tutorial to get you up to speed with distributed training in Redco in just a few minutes!
- Redco makes it simple to implement distributed training and inference, without extra coding effort or complex configuration, while remaining as efficient as the most advanced model-parallel tools.
- Redco enables customization of arbitrary ML pipelines within three functions, eliminating repetitive and boilerplate coding such as multi-host related processing. We demonstrate that this mechanism is widely applicable to various ML algorithms (see the sketch after this list).
- The backend of Redco is based on JAX, but users don't need to be JAX experts. Knowing numpy is good enough!
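As a rough illustration of this three-function design, below is a minimal sketch of a pipeline on MNIST-like data, loosely following the Quick Tutorial. The function roles (collate_fn, loss_fn, pred_fn) come from Redco's design, but the exact argument lists, the Deployer/Trainer keyword arguments, and the toy MLP are assumptions made for this sketch, not the authoritative API.

```python
import numpy as np
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax
from redco import Deployer, Trainer


class MLP(nn.Module):
    """Any Flax model works here; Redco only needs its apply function."""
    @nn.compact
    def __call__(self, x):
        return nn.Dense(10)(nn.relu(nn.Dense(128)(x)))


def collate_fn(examples):
    # Raw examples -> one batch of numpy arrays.
    return {
        'images': np.stack([np.asarray(e['image']).reshape(-1) for e in examples]),
        'labels': np.array([e['label'] for e in examples]),
    }


def loss_fn(train_rng, state, params, batch, is_training):
    # Batch -> scalar loss.
    logits = state.apply_fn({'params': params}, batch['images'])
    return optax.softmax_cross_entropy_with_integer_labels(
        logits=logits, labels=batch['labels']).mean()


def pred_fn(pred_rng, params, batch, model):
    # Batch -> per-example predictions (used for inference/evaluation).
    return model.apply({'params': params}, batch['images']).argmax(axis=-1)


# Dummy MNIST-like data, just to make the sketch self-contained.
train_examples = [
    {'image': np.random.rand(28, 28), 'label': np.random.randint(10)}
    for _ in range(1024)]

model = MLP()
params = model.init(jax.random.PRNGKey(0), jnp.zeros((1, 784)))['params']

# Deployer/Trainer arguments below follow the Quick Tutorial as we recall it;
# treat them as assumptions and check the docs before use.
deployer = Deployer(jax_seed=0)
trainer = Trainer(
    deployer=deployer,
    collate_fn=collate_fn,
    apply_fn=model.apply,
    loss_fn=loss_fn,
    params=params,
    optimizer=optax.adamw(learning_rate=1e-3))
trainer.fit(
    train_examples=train_examples, per_device_batch_size=64, n_epochs=2)
```

Everything distributed (data/model parallelism, multi-host coordination) is handled by the Deployer and Trainer behind these three user-defined functions.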
Installation
Install Redco
pip install redco
Adjust Jax & Flax versions
The command above automatically installs the CPU version of jax, so the Jax version needs to be adjusted based on your device. For example,
pip install --upgrade flax==0.7.0
pip install --upgrade jax[cuda11_pip]==0.4.13 -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
The Jax version (==0.4.13) and Flax version (==0.7.0) can be flexible, as long as they match your CUDA/cuDNN/NCCL version.
Note that the Flax model implementations in HuggingFace sometimes don't support the most recent Jax & Flax versions.
If you are using TPU/CPU/AMD/Apple, see here for corresponding installation commands.
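Once the right jax build is installed, a quick sanity check that your accelerators are actually visible uses plain JAX calls (no Redco-specific API involved):

```python
import jax

# Should list GPU/TPU devices rather than only CPU. On a multi-host TPU pod,
# jax.device_count() counts devices across all hosts, while
# jax.local_device_count() counts only those attached to the current host.
print(jax.devices())
print(jax.device_count(), jax.local_device_count())
```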
Examples
Examples across a set of paradigms can be found in examples/, including
- Classification/regression (GLUE & MNIST)
- Federated learning (FedAvg)
- Image to text (Image captioning)
- Language modeling (Instruction Tuning of LLMs)
- Meta learning (MAML)
- Reinforcement learning (PPO & DDPG)
- Text to image (StableDiffusion)
- Text to text (Seq2seq)
Exemplar large model settings
The table below shows LLM finetuning settings that are runnable on different kinds of servers. Numbers inside the brackets are the maximum sequence length during training. All settings use full precision (fp32) and the Adam optimizer.
| 2 $\times$ 1080Ti (2 $\times$ 10G) | 4 $\times$ A100 (4 $\times$ 40G) | 2 $\times$ TPU-v4 (2 hosts $\times$ 4 chips $\times$ 32G) | 16 $\times$ TPU-v4 (16 hosts $\times$ 4 chips $\times$ 32G) |
|---|---|---|---|
| BART-Large (1024) | LLaMA-7B (1024) | T5-XL-11B (512) | OPT-66B (512) |
| GPT2-Large (512) | GPT-J-6B (1024) | OPT-13B (1024) | |
Go to examples/language_modeling and examples/text_to_text to try them out!
Reference
We now have a paper you can cite for the Red Coast library:
Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs
Bowen Tan, Yun Zhu, Lijuan Liu, Hongyi Wang, Yonghao Zhuang, Jindong Chen, Eric Xing, Zhiting Hu
MLSys Workshop @ NeurIPS 2023
@article{tan2023redco,
  title={Redco: A Lightweight Tool to Automate Distributed Training of LLMs on Any GPU/TPUs},
  author={Tan, Bowen and Zhu, Yun and Liu, Lijuan and Wang, Hongyi and Zhuang, Yonghao and Chen, Jindong and Xing, Eric and Hu, Zhiting},
  journal={arXiv preprint arXiv:2310.16355},
  year={2023}
}
Acknowledgement
The name of this package, Redco, is inspired by Red Coast Base, a key location in the story of The Three-Body Problem, from which humanity broadcasts its first message into the vast universe. We thank Cixin Liu for such a masterpiece!