Scalable Training for Foundation Models with Named Tensors and JAX
Project description
Levanter
You could not prevent a thunderstorm, but you could use the electricity; you could not direct the wind, but you could trim your sail so as to propel your vessel as you pleased, no matter which way the wind blew.
— Cora L. V. Hatch
Levanter is a framework for training large language models (LLMs) and other foundation models that strives for legibility, scalability, and reproducibility:
- Legible: Levanter uses our named tensor library Haliax to write easy-to-follow, composable deep learning code, while still being high performance. (See the short example after this list.)
- Scalable: Levanter scales to large models and trains on a variety of hardware, including GPUs and TPUs.
- Reproducible: Levanter is bitwise deterministic, meaning that the same configuration will always produce the same results, even in the face of preemption and resumption.
We built Levanter with JAX, Equinox, and Haliax.
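To give a flavor of what Haliax's named tensors look like, here is a minimal sketch of attention-style scoring, written against the Haliax docs (exact signatures may vary between Haliax versions):

import jax.random as jrandom
import haliax as hax

# Axes are named, not positional; aliasing an axis gives two distinct sequence axes.
Pos = hax.Axis("position", 1024)
KeyPos = Pos.alias("key_position")
Embed = hax.Axis("embed", 768)

k_q, k_k = jrandom.split(jrandom.PRNGKey(0))
q = hax.random.normal(k_q, (Pos, Embed))
k = hax.random.normal(k_k, (KeyPos, Embed))

# Contract over the named "embed" axis: no transposes or axis-index bookkeeping.
scores = hax.dot(q, k, axis=Embed)
weights = hax.nn.softmax(scores, axis=KeyPos)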
Features
- Distributed Training: We support distributed training on TPUs (and soon, GPUs), including FSDP and tensor parallelism.
- Compatibility: Levanter supports importing and exporting models to/from the Hugging Face ecosystem, including tokenizers, datasets, and models via SafeTensors. (See the sketch after this list.)
- Performance: Levanter's performance rivals commercially-backed frameworks like MosaicML's Composer or Google's MaxText.
- Reproducibility: Levanter is bitwise deterministic, meaning that the same configuration will always produce the same results, even in the face of preemption and resumption.
- Cached On-Demand Data Preprocessing: We preprocess corpora online, but we cache the results so that resumed and subsequent runs are much faster. Training starts as soon as the first part of the cache is complete.
- Logging: Logging is done with WandB, complete with a fancy online visualization of the validation set during training.
- Distributed Checkpointing: Distributed checkpointing is supported via Google's TensorStore library. Training can even be resumed on a different number of hosts, though this breaks reproducibility for now.
- Optimization: Levanter uses Optax for optimization. Our new optimizer, Sophia, is coming to Levanter soon!
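As a concrete illustration of the Hugging Face compatibility above: once Levanter has exported a checkpoint in HF format, it can be loaded with the standard transformers API. A minimal sketch, where "path/to/hf_export" is a hypothetical local path:

# Minimal sketch: loading a Levanter-exported checkpoint with the standard
# Hugging Face transformers API; "path/to/hf_export" is a hypothetical path.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/hf_export")
model = AutoModelForCausalLM.from_pretrained("path/to/hf_export")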
Levanter was created by the research engineering team at Stanford's Center for Research on Foundation Models (CRFM). (We're hiring!) You can also find us in the #levanter channel on the unofficial JAX LLM Discord.
Getting Started
Here is a small set of examples to get you started. For more information about the various configuration options, please see the Getting Started guide or the In-Depth Configuration Guide. You can also pass --help, or poke around the other configs, to see all of the options available to you.
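For example, to print every configuration option for the LM training entry point:

python -m levanter.main.train_lm --help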
Documentation
Levanter's documentation is available at levanter.readthedocs.io. Haliax's documentation is available at haliax.readthedocs.io.
Installing Levanter
After installing JAX with the appropriate configuration for your platform, you can install Levanter with:
pip install levanter
or using the latest version from GitHub:
git clone https://github.com/stanford-crfm/levanter.git
cd levanter
pip install -e .
wandb login # optional, we use wandb for logging
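To sanity-check that the install can see your accelerators, you can ask JAX directly (a generic JAX check, not a Levanter-specific command):

python -c "import jax; print(jax.devices())"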
If you're developing Haliax and Levanter at the same time, you can do something like this:
git clone https://github.com/stanford-crfm/levanter.git
cd levanter
pip install -e .
cd ..
git clone https://github.com/stanford-crfm/haliax.git
cd haliax
pip install -e .
cd ../levanter
Please refer to the Installation Guide for more information on how to install Levanter.
If you're using a TPU, more complete documentation for setting that up is available here. GPU support is still in progress; documentation is available here.
Training a GPT2-nano
As a kind of hello world, here's how you can train a GPT-2 "nano"-sized model on a small dataset.
python -m levanter.main.train_lm --config_path config/gpt2_nano.yaml
# alternatively, if you didn't use -e and are in a different directory
python -m levanter.main.train_lm --config_path gpt2_nano
This will train a GPT2-nano model on the WikiText-103 dataset.
Training a GPT2-small on your own data
You can also change the dataset by editing the dataset field in the config file. If your dataset is a Hugging Face dataset, you can use the data.id field to specify it:
python -m levanter.main.train_lm --config_path config/gpt2_small.yaml --data.id openwebtext
# optionally, you may specify a tokenizer and/or a cache directory, which may be local or on gcs
python -m levanter.main.train_lm --config_path config/gpt2_small.yaml --data.id openwebtext --data.tokenizer "EleutherAI/gpt-neox-20b" --data.cache_dir "gs://path/to/cache/dir"
If instead your data is a list of URLs, you can use the data.train_urls and data.validation_urls fields to specify them. Data URLs can be local files, GCS paths, http(s) URLs, or anything else that fsspec supports. Levanter (really, fsspec) will automatically decompress .gz and .zstd files, and likely other formats as well.
python -m levanter.main.train_lm --config_path config/gpt2_small.yaml --data.train_urls ["https://path/to/train/data_*.jsonl.gz"] --data.validation_urls ["https://path/to/val/data_*.jsonl.gz"]
Customizing a Config File
You can modify the config file to change the model, the dataset, the training parameters, and more. Here's the gpt2_small.yaml file:
data:
  train_urls:
    - "gs://pubmed-mosaic/openwebtext-sharded/openwebtext_train.{1..128}-of-128.jsonl.gz"
  validation_urls:
    - "gs://pubmed-mosaic/openwebtext-sharded/openwebtext_val.{1..8}-of-8.jsonl.gz"
  cache_dir: "gs://pubmed-mosaic/tokenized/openwebtext/"
model:
  gpt2:
    hidden_dim: 768
    num_heads: 12
    num_layers: 12
    seq_len: 1024
    gradient_checkpointing: true
    scale_attn_by_inverse_layer_idx: true
trainer:
  wandb:
    project: "levanter"
    tags: ["openwebtext", "gpt2"]
  mp: p=f32,c=bfloat16
  model_axis_size: 1
  per_device_parallelism: 4
  train_batch_size: 512
optimizer:
  learning_rate: 6E-4
  weight_decay: 0.1
  min_lr_ratio: 0.1
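These YAML fields map onto command-line flags with the same dotted syntax as the --data.id example above, so you can override any of them at launch without editing the file. The values below are arbitrary illustrations:

python -m levanter.main.train_lm --config_path config/gpt2_small.yaml --trainer.train_batch_size 256 --optimizer.learning_rate 1e-4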
Other Architectures
Currently, we support the following architectures:
- GPT-2
- Llama 1 or 2
- Backpacks
- MosaicML's MPT
We plan to add more in the future.
Continued Pretraining with Llama 1 or Llama 2
Here's an example of how to continue pretraining a Llama 1 or Llama 2 model on the OpenWebText dataset:
python -m levanter.main.train_lm --config_path config/llama2_7b_continued.yaml
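The interesting parts of that config are the fields that pull the starting weights from the Hugging Face hub. The sketch below is illustrative only: the initialize_from_hf field name and the nesting of the model section are assumptions, so check the shipped config/llama2_7b_continued.yaml for the exact keys.

# Sketch only -- field names here are assumptions, not the shipped config;
# consult config/llama2_7b_continued.yaml for the exact keys.
data:
  id: openwebtext
model:
  llama: {}                                      # assumed: selects the Llama architecture, as "gpt2:" does above
initialize_from_hf: "meta-llama/Llama-2-7b-hf"   # assumed: HF checkpoint to initialize from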
Distributed and Cloud Training
Training on a TPU Cloud VM
Please see the TPU Getting Started guide for more information on how to set up a TPU Cloud VM and run Levanter there.
Training with CUDA
Please see the CUDA Getting Started guide for more information on how to set up a CUDA environment and run Levanter there.
Contributing
We welcome contributions! Please see CONTRIBUTING.md for more information.
License
Levanter is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.
Project details
Download files
Download the file for your platform.
Source Distribution: levanter-1.1.tar.gz
Built Distribution: levanter-1.1-py3-none-any.whl
File details
Details for the file levanter-1.1.tar.gz.
File metadata
- Download URL: levanter-1.1.tar.gz
- Upload date:
- Size: 158.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6eb8d0183c677183874daa314993d34be9733f6f3f824895bc85e91c0a4e4000
MD5 | af88f117f807d4804ce50b3e6fc56bdd
BLAKE2b-256 | 4e9356c7579e18b7279de12dc2413b615f0b5f88c3c52dc207fdd0384e1f4672
File details
Details for the file levanter-1.1-py3-none-any.whl.
File metadata
- Download URL: levanter-1.1-py3-none-any.whl
- Upload date:
- Size: 157.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.8
File hashes
Algorithm | Hash digest
---|---
SHA256 | 7a24b719a3dbc428743fc2f80df5334d36d1d02f7359e523f4664c2eddcabf76
MD5 | ee725ef385e354b8c352e0b6e8500ea6
BLAKE2b-256 | 082901d2682c3518cc727442340cb2505218de1bd0c398f6b745eed7994d1732