torchft

Easy Per Step Fault Tolerance for PyTorch

| Documentation | Poster | Design Doc |

This repository implements techniques for per-step fault tolerance so you can keep training when errors occur, without restarting the entire training job.

This is based on the large scale training techniques presented at PyTorch Conference 2024.

Overview

torchft is designed to provide the primitives required to implement fault tolerance in any application/train script as well as the primitives needed to implement custom fault tolerance strategies.

Out of the box, torchft provides the following algorithms:

  • Fault Tolerant DDP
  • Fault Tolerant HSDP: fault tolerance across the replicated dimension with any mix of FSDP/TP/etc across the other dimensions.
  • LocalSGD
  • DiLoCo

To implement these, torchft provides some key reusable components:

  1. Coordination primitives that can determine which workers are healthy via heartbeating on a per-step basis
  2. Fault tolerant ProcessGroup implementations that report errors sanely and can be reinitialized gracefully.
  3. Checkpoint transports that can be used to do live recovery from a healthy peer when doing scale up operations.
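
The heartbeat-based health tracking in (1) can be illustrated with a toy sketch. This is only an illustration of the idea, not torchft's actual implementation; the names and timeout value are made up:

```python
# Toy illustration of heartbeat-based health tracking (not torchft's
# actual implementation; names and timeout are hypothetical). Workers
# heard from within the timeout are healthy and form the step's quorum.
HEARTBEAT_TIMEOUT_S = 5.0  # hypothetical timeout

def compute_quorum(last_heartbeat: dict, now: float) -> list:
    """Return the sorted list of workers heard from within the timeout."""
    return sorted(
        worker
        for worker, ts in last_heartbeat.items()
        if now - ts <= HEARTBEAT_TIMEOUT_S
    )

# replica_2 last heartbeated 11s ago, so it is excluded from the quorum
heartbeats = {"replica_0": 100.0, "replica_1": 99.5, "replica_2": 90.0}
quorum = compute_quorum(heartbeats, now=101.0)
```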

The following component diagram shows the high level components and how they relate to each other:

Component Diagram

See torchft's documentation for more details.

Examples

torchtitan (Fault Tolerant HSDP)

torchtitan provides an out of the box fault tolerant HSDP training loop built on top of torchft that can be used to train models such as Llama 3 70B.

It also serves as a good example of how you can integrate torchft into your own training script for use with HSDP.

See torchtitan's documentation for end to end usage.

Fault Tolerant DDP

We have a minimal DDP train loop that highlights all of the key components in torchft.

See train_ddp.py for more info.

DiLoCo

LocalSGD and DiLoCo are currently experimental.

See the diloco_train_loop/local_sgd_train_loop tests for an example on how to integrate these algorithms into your training loop.

Design

torchft is designed to allow for fault tolerance when training with replicated weights, such as in DDP or HSDP (FSDP with DDP).
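
One way to see why replication makes this possible (a toy numeric sketch with made-up values, not torchft code): the gradient averaged over whichever replicas survive a step is still a valid average, just over a smaller effective batch.

```python
# Toy sketch: averaging gradients over surviving replicas only.
# Gradient values are made up for illustration.
def averaged_gradient(replica_grads):
    """Average the per-replica gradients that actually arrived."""
    return sum(replica_grads) / len(replica_grads)

full = averaged_gradient([0.2, 0.4, 0.6])   # all three replicas healthy
partial = averaged_gradient([0.2, 0.6])     # one replica lost this step
```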

See the design doc for the most detailed explanation.

Lighthouse

torchft implements a lighthouse server that coordinates across the different replica groups, as well as a per-replica-group manager and fault tolerance library that can be used in a standard PyTorch training loop.

This allows for membership changes at training step granularity, which can greatly improve efficiency by avoiding stop-the-world recovery on errors.
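
A rough sketch of what step-granularity membership means (hypothetical code, not torchft's API): the set of participating replicas is re-evaluated at every step boundary, so a failure only shrinks the world for subsequent steps rather than aborting the job.

```python
# Hypothetical sketch of step-granularity membership (not torchft code):
# recompute the participant set each step instead of restarting the job.
def run_steps(num_steps, replicas, fails_at_step):
    """Record which replicas participate in each step."""
    history = []
    for step in range(num_steps):
        alive = [r for r in replicas if fails_at_step.get(r, num_steps) > step]
        history.append((step, alive))  # training continues with survivors
    return history

# replica_1 fails after step 0; steps 1 and 2 proceed with replica_0 alone
history = run_steps(3, ["replica_0", "replica_1"], {"replica_1": 1})
```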

Lighthouse Diagram

Fault Tolerant HSDP Algorithm

torchft provides an implementation of a fault tolerant HSDP/DDP algorithm. The following diagram shows the high level operations that need to happen in the train loop to ensure everything stays consistent during a healing operation.

HSDP Diagram

See the design doc linked above for more details.

Installing from PyPI

We have nightly builds available at https://pypi.org/project/torchft-nightly/

To install torchft with minimal dependencies you can run:

pip install torchft-nightly

If you want all development dependencies you can install:

pip install torchft-nightly[dev]

Installing from Source

Prerequisites

Before proceeding, ensure you have the following installed:

  • Rust (with necessary dependencies)
  • protobuf-compiler and the corresponding development package for Protobuf.
  • PyTorch 2.7 RC+ or Nightly

Note that the Rust versions available in many conda environments may be outdated. To install the latest version of Rust, we recommend downloading it directly from the official website with the command below:

curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh

To install the required packages on a Debian-based system (such as Ubuntu) using apt, run:

sudo apt install protobuf-compiler libprotobuf-dev

or for a Red Hat-based system, run:

sudo dnf install protobuf-compiler protobuf-devel

Installation

pip install .

This uses pyo3+maturin to build the package; you'll need maturin installed.

If the installation command fails to invoke cargo update due to an inability to fetch the manifest, it may be caused by the proxy, proxySSLCert, and proxySSLKey settings in your .gitconfig file affecting the cargo command. To resolve this issue, try temporarily removing these fields from your .gitconfig before running the installation command.

To install in editable mode with the Rust extensions and development dependencies, you can use the normal pip install command:

pip install -e '.[dev]'

Usage

Lighthouse

The lighthouse is used for fault tolerance across replicated workers (DDP/FSDP) when using synchronous training.

You can start a lighthouse server by running:

RUST_BACKTRACE=1 torchft_lighthouse --min_replicas 1 --quorum_tick_ms 100 --join_timeout_ms 10000

Example Training Loop (DDP)

See train_ddp.py for the full example.

Invoke with:

TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port 29501 --nnodes 1 --nproc_per_node 1 train_ddp.py

train_ddp.py:

import torch
from torch import nn, optim

from torchft import Manager, DistributedDataParallel, Optimizer, ProcessGroupGloo

device = "cuda" if torch.cuda.is_available() else "cpu"

manager = Manager(
    pg=ProcessGroupGloo(),
    load_state_dict=...,
    state_dict=...,
)

m = nn.Linear(2, 3).to(device)
m = DistributedDataParallel(manager, m)
optimizer = Optimizer(manager, optim.AdamW(m.parameters()))

for i in range(1000):
    batch = torch.rand(2, 2, device=device)

    optimizer.zero_grad()

    out = m(batch)
    loss = out.sum()

    loss.backward()

    optimizer.step()
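
The `load_state_dict=...` and `state_dict=...` arguments above are left elided; conceptually they are a pair of save/restore callbacks the Manager uses so a healthy replica can serve its state and a recovering replica can restore it (see torchft's documentation for the exact signatures). A self-contained sketch of the pattern, using a stand-in object rather than a real model:

```python
# Sketch of the save/restore callback pattern (stand-in object, not the
# actual torchft callback signatures; consult the docs for those).
class TrainState:
    """Stand-in for the model + optimizer state in the example above."""
    def __init__(self):
        self.weights = {"w": 0.0}

    def state_dict(self):
        # a healthy replica serves this to a peer that is recovering
        return {"weights": dict(self.weights)}

    def load_state_dict(self, sd):
        # a recovering replica restores the healthy peer's state
        self.weights = dict(sd["weights"])

healthy, recovering = TrainState(), TrainState()
healthy.weights["w"] = 3.0
recovering.load_state_dict(healthy.state_dict())  # live recovery from a peer
```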

Running DDP

After starting the lighthouse server by running:

RUST_BACKTRACE=1 torchft_lighthouse --min_replicas 1 --quorum_tick_ms 100 --join_timeout_ms 10000

A test DDP script can be launched with torchX with:

torchx run

Or DiLoCo with:

USE_STREAMING=True torchx run ./torchft/torchx.py:hsdp --script='train_diloco.py'

See .torchxconfig, torchx.py and the torchX documentation to understand how DDP is being run.

torchx.py can also launch HSDP jobs when workers_per_replica is set greater than 1, if the training script supports it. For an example HSDP training implementation with torchft enabled, see torchtitan.

Alternatively, to test on a node with two GPUs, you can launch two replica groups running train_ddp.py by:

On shell 1 (one replica group starts initial training):

export REPLICA_GROUP_ID=0
export NUM_REPLICA_GROUPS=2

CUDA_VISIBLE_DEVICES=0 TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port=29600 --nnodes=1 --nproc_per_node=1 -- train_ddp.py

On shell 2 (a second replica group joins):

export REPLICA_GROUP_ID=1
export NUM_REPLICA_GROUPS=2

CUDA_VISIBLE_DEVICES=1 TORCHFT_LIGHTHOUSE=http://localhost:29510 torchrun --master_port=29601 --nnodes=1 --nproc_per_node=1 -- train_ddp.py

In the outputs from both shells, you should observe process group reconfiguration and live checkpoint recovery.

Example Parameter Server

torchft has a fault tolerant parameter server implementation built on its reconfigurable ProcessGroups. This does not require/use a Lighthouse server.

See parameter_server_test.py for an example.

Contributing

We welcome PRs! See the CONTRIBUTING file.

License

torchft is BSD 3-Clause licensed. See LICENSE for more details.
