
Project description

torchtitan

A PyTorch native platform for training generative AI models


torchtitan is under extensive development. To use the latest features of torchtitan, we recommend using the most recent PyTorch nightly.

Latest News

  • [2025/11] AMD released an optimized fork of torchtitan for AMD GPUs.
  • [2025/10] We released torchtitan v0.2.0.
  • [2025/10] SkyPilot now supports torchtitan! See the tutorial here.
  • [2025/07] We published instructions on how to add a model to torchtitan.
  • [2025/04] Our paper was accepted by ICLR 2025.
  • [2024/12] GPU MODE lecture on torchtitan.
  • [2024/07] Presentation at PyTorch Conference 2024.

Overview

torchtitan is a PyTorch native platform designed for rapid experimentation and large-scale training of generative AI models. As a minimal clean-room implementation of PyTorch native scaling techniques, torchtitan provides a flexible foundation for developers to build upon. With torchtitan extension points, one can easily create custom extensions tailored to specific needs.

Our mission is to accelerate innovation in the field of generative AI by empowering researchers and developers to explore new modeling architectures and infrastructure techniques.

Our guiding principles when building torchtitan

  • Designed to be easy to understand, use and extend for different training purposes.
  • Minimal changes to the model code when applying multi-dimensional parallelism.
  • Bias towards a clean, minimal codebase while providing basic reusable / swappable components.

torchtitan showcases PyTorch's latest distributed training features by supporting pretraining of Llama 3.1 LLMs at various sizes.

Contributing

We look forward to your contributions!

  • To accelerate contributions to and innovations around torchtitan, we host an experiments folder. New ideas should start there. To contribute, follow the experiments guidelines.
  • For fixes and contributions to core, follow these guidelines.

Llama 3.1 training

Key features available

  1. Multi-dimensional composable parallelisms
  2. Meta device initialization
  3. Selective (layer or operator) and full activation checkpointing
  4. Distributed checkpointing (including async checkpointing)
  5. torch.compile support
  6. Float8 support (how-to)
  7. MXFP8 training for dense and MoE models on Blackwell GPUs
  8. DDP and HSDP
  9. TorchFT integration
  10. Checkpointable data-loading, with the C4 dataset pre-configured (144M entries) and support for custom datasets
  11. Gradient accumulation, enabled by setting an additional --training.global_batch_size argument in the configuration (see the sketch after this list)
  12. Flexible learning rate scheduler (warmup-stable-decay)
  13. Loss, GPU memory, throughput (tokens/sec), TFLOPs, and MFU displayed and logged via Tensorboard or Weights & Biases
  14. Debugging tools including CPU/GPU profiling, memory profiling, Flight Recorder, etc.
  15. All options easily configured via toml files
  16. Helper scripts to
    • download tokenizers from Hugging Face
    • convert original Llama 3 checkpoints into the expected DCP format
    • estimate FSDP/HSDP memory usage without materializing the model
    • run distributed inference with Tensor Parallel
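
As a quick sketch of how these options fit together, the command below launches the Llama 3 8B config and turns on gradient accumulation by overriding --training.global_batch_size on the command line. The batch-size value is illustrative, and the sketch assumes run_train.sh forwards trailing arguments to the trainer as overrides on top of the toml file; check the script in your checkout if in doubt.

# Sketch: toml config plus a command-line override for gradient accumulation.
# The value 64 is illustrative; a global batch size larger than what the
# data-parallel ranks process in one step implies gradient accumulation.
CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" \
./run_train.sh --training.global_batch_size 64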

We report performance on up to 512 GPUs and verify the loss-convergence correctness of the various techniques.

Dive into the code

You may want to see how the model is defined or how parallelism techniques are applied. For a guided tour, start with the model definition and the parallelism application code in the repository.

Installation

One can run the source code directly, install torchtitan from a nightly build, or install a stable release.

From source

This method requires the nightly build of PyTorch, or the latest PyTorch built from source.

git clone https://github.com/pytorch/torchtitan
cd torchtitan
pip install -r requirements.txt

Nightly builds

This method requires the nightly build of PyTorch. You can replace cu126 with another CUDA version (e.g. cu128) or with a ROCm build for AMD GPUs (e.g. rocm6.3).

pip3 install --pre torch --index-url https://download.pytorch.org/whl/nightly/cu126 --force-reinstall
pip install --pre torchtitan --index-url https://download.pytorch.org/whl/nightly/cu126

Stable releases

One can install the latest stable release of torchtitan via pip or conda.

pip install torchtitan
conda install conda-forge::torchtitan

Note that each stable release pins the nightly versions of torch and torchao. Please see release.md for more details.

Downloading a tokenizer

torchtitan currently supports training Llama 3.1 (8B, 70B, 405B) out of the box. To get started training these models, we need to download the tokenizer. Follow the instructions on the official meta-llama repository to ensure you have access to the Llama model weights.

Once you have confirmed access, you can run the following command to download the Llama 3.1 tokenizer to your local machine.

# Get your HF token from https://huggingface.co/settings/tokens

# Llama 3.1 tokenizer
python scripts/download_hf_assets.py --repo_id meta-llama/Llama-3.1-8B --assets tokenizer --hf_token=...

Start a training run

Llama 3 8B model locally on 8 GPUs

CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh
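
If your machine has fewer than 8 GPUs, a common adjustment is to lower the local GPU count. The sketch below assumes run_train.sh reads an NGPU environment variable for the per-node GPU count, as the upstream script does at the time of writing; verify against your checkout.

# Sketch: same config on 4 GPUs; NGPU is assumed to control the local GPU count.
NGPU=4 CONFIG_FILE="./torchtitan/models/llama3/train_configs/llama3_8b.toml" ./run_train.sh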

Multi-Node Training

For training on ParallelCluster/Slurm type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job.

To get started, adjust the number of nodes and GPUs:

#SBATCH --ntasks=2
#SBATCH --nodes=2

Then start a run where nnodes is your total node count, matching the sbatch node count above.

srun torchrun --nnodes 2

If your GPU count per node is not 8, adjust --nproc_per_node in the torchrun command and #SBATCH --gpus-per-task in the SBATCH command section.
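
For reference, a fuller launch command might look like the sketch below. The --nnodes, --nproc_per_node, and rendezvous flags are standard torchrun options; the head-node address, port, and training entry point are placeholders to adapt from the provided multinode_trainer.slurm.

# Sketch of a multi-node launch; adapt addresses, ports, and the entry point
# to match multinode_trainer.slurm and your cluster.
srun torchrun --nnodes 2 --nproc_per_node 8 \
    --rdzv_backend c10d --rdzv_endpoint "${HEAD_NODE_ADDR}:29500" \
    <training script and config arguments, as in multinode_trainer.slurm>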

Citation

We provide a detailed look into the parallelisms and optimizations available in torchtitan, along with summary advice on when to use various techniques.

TorchTitan: One-stop PyTorch native solution for production ready LLM pre-training

@inproceedings{
   liang2025torchtitan,
   title={TorchTitan: One-stop PyTorch native solution for production ready {LLM} pretraining},
   author={Wanchao Liang and Tianyu Liu and Less Wright and Will Constable and Andrew Gu and Chien-Chin Huang and Iris Zhang and Wei Feng and Howard Huang and Junjie Wang and Sanket Purandare and Gokul Nadathur and Stratos Idreos},
   booktitle={The Thirteenth International Conference on Learning Representations},
   year={2025},
   url={https://openreview.net/forum?id=SFN6Wm7YBI}
}

License

Source code is made available under a BSD 3-Clause license; however, you may have other legal obligations that govern your use of other content linked in this repository, such as the license or terms of service for third-party data and models.

Download files

Download the file for your platform.

Source Distribution

torchtitan-0.2.1.tar.gz (326.7 kB)

Built Distribution


torchtitan-0.2.1-py3-none-any.whl (429.0 kB)

File details

Details for the file torchtitan-0.2.1.tar.gz.

File metadata

  • Download URL: torchtitan-0.2.1.tar.gz
  • Size: 326.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for torchtitan-0.2.1.tar.gz:

  • SHA256: 0b288bd9eb3d775f20ba00d9d71449023c8b224e356fa46744b94aa863217ca8
  • MD5: 4f6682dcc1744efe827213ce5d0ac811
  • BLAKE2b-256: f89743465aca4e1c1a0a430d432dcefeae01f72ffb77317d63eb676da70124fb


Provenance

The following attestation bundles were made for torchtitan-0.2.1.tar.gz:

Publisher: release.yml on pytorch/torchtitan


File details

Details for the file torchtitan-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: torchtitan-0.2.1-py3-none-any.whl
  • Size: 429.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for torchtitan-0.2.1-py3-none-any.whl:

  • SHA256: d20fa8f8eb56ab21b59e3e1026c21c698dd1f4e53b971bd14db95a68f9c21320
  • MD5: f2d581553c47c3438c9e54b9eb3f0b3f
  • BLAKE2b-256: a4b95b2783f0630ab4c5ed971291649ffc3d8299d7287ab05639445ab2ba3934


Provenance

The following attestation bundles were made for torchtitan-0.2.1-py3-none-any.whl:

Publisher: release.yml on pytorch/torchtitan

