Monarch: Single controller library

Monarch 🦋

Monarch is a distributed programming framework for PyTorch based on scalable actor messaging. It provides:

  1. Remote actors with scalable messaging: Actors are grouped into collections called meshes, and messages can be broadcast to all members.
  2. Fault tolerance through supervision trees: Actors and processes form a tree, and failures propagate up it, providing good default error behavior and enabling fine-grained fault recovery.
  3. Point-to-point RDMA transfers: Cheap registration of any GPU or CPU memory in a process, with one-sided transfers built on libibverbs.
  4. Distributed tensors: Actors can work with tensor objects sharded across processes.

Monarch code imperatively describes how to create processes and actors using a simple Python API:

from monarch.actor import Actor, endpoint, this_host

# spawn 8 trainer processes, one for each GPU
training_procs = this_host().spawn_procs({"gpus": 8})


# define the actor to run on each process
class Trainer(Actor):
    @endpoint
    def train(self, step: int): ...


# create the trainers
trainers = training_procs.spawn("trainers", Trainer)

# tell all the trainers to take a step
fut = trainers.train.call(step=0)

# wait for all trainers to complete
fut.get()
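
Endpoints can return values as well. The following is a minimal sketch built from the same API surface shown above; the assumption (consistent with the example) is that call() messages every actor in the mesh and get() gathers one reply per actor:

from monarch.actor import Actor, endpoint, this_host

# spawn 2 worker processes
procs = this_host().spawn_procs({"gpus": 2})


class Counter(Actor):
    def __init__(self) -> None:
        self.value = 0

    @endpoint
    def incr(self) -> int:
        self.value += 1
        return self.value


# create one Counter per process
counters = procs.spawn("counters", Counter)

# broadcast incr() to the mesh and gather the replies
results = counters.incr.call().get()
print(results)  # one value per actor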

The introduction to Monarch concepts walks through using these features in more detail.

⚠️ Early Development Warning Monarch is currently in an experimental stage. You should expect bugs, incomplete features, and APIs that may change in future versions. The project welcomes bugfixes, but to keep work well coordinated, you should discuss any significant change before starting it. It's recommended that you signal your intention to contribute in the issue tracker, either by filing a new issue or by claiming an existing one.

📖 Documentation

View Monarch's hosted documentation at this link.

Installation

Installing from Pre-built Wheels

Monarch provides pre-built wheels that work regardless of what version of PyTorch you have installed:

Stable

pip install torchmonarch

Nightly

pip install --pre torchmonarch

Or install a specific nightly version:

pip install torchmonarch==0.3.0.dev20260106

Build and Install from Source

Note: Building from source requires additional system dependencies. These are needed at build time only, not at runtime.

Monarch uses uv for fast, reliable Python package management. If you don't have uv installed:

# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh
# Or on macOS
brew install uv

Configuring PyTorch Index: By default, Monarch builds with PyTorch from the pytorch-cu128 index (CUDA 12.8). To use a different CUDA version:

  • Edit [tool.uv.sources] in pyproject.toml to point to a different index (e.g., pytorch-cu126, pytorch-cu130, or pytorch-cpu), as sketched after this list
  • Or use --extra-index-url when running uv:
    uv sync --extra-index-url https://download.pytorch.org/whl/cu126
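
For example, a [tool.uv.sources] setup pinning torch to the cu126 index might look like this (a sketch using uv's documented index syntax; the actual table names and entries in Monarch's pyproject.toml may differ):

[[tool.uv.index]]
name = "pytorch-cu126"
url = "https://download.pytorch.org/whl/cu126"
explicit = true

[tool.uv.sources]
torch = { index = "pytorch-cu126" }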
    

Understanding Tensor Engine

Monarch includes distributed tensor and RDMA APIs. The tensor engine builds on any platform, including CPU-only hosts and macOS; GPU-specific pieces (NCCL, RDMA, rdmaxcel) layer on top and only build on Linux with a CUDA or ROCm toolchain installed. If you want a lighter-weight version of Monarch (actors only, no torch dependency), set USE_TENSOR_ENGINE=0.

By default, Monarch builds with tensor_engine enabled. To build without it:

USE_TENSOR_ENGINE=0 uv sync

Note: Building without tensor_engine means you won't have access to the distributed tensor or RDMA APIs. Torch is required to use tensor_engine, and the latest stable torch is ABI-compatible with the latest versioned torchmonarch.

Selecting a GPU platform: The MONARCH_GPU_PLATFORM environment variable controls which GPU libraries the build links against. It accepts:

  • cuda — build against CUDA (NCCL + RDMA).
  • rocm — build against ROCm.
  • none — force a CPU-only tensor engine even on a host where CUDA or ROCm is installed.

Leaving it unset auto-detects whichever toolchain is present. Setting it explicitly is required when both CUDA and ROCm are installed, and none is the explicit opt-out when you want the CPU tensor engine on a GPU-capable host.

# Force a CPU-only tensor engine (no CUDA/ROCm/RDMA libraries required)
MONARCH_GPU_PLATFORM=none uv sync

# Force CUDA on a host that also has ROCm
MONARCH_GPU_PLATFORM=cuda uv sync

Build Dependencies by Platform

On Fedora distributions

# Install nightly rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup toolchain install nightly
rustup default nightly

# Install non-python dependencies
sudo dnf install -y cmake ninja-build protobuf-compiler libunwind

# Install the correct CUDA and cuda-toolkit versions for your machine
sudo dnf install cuda-toolkit-12-8 cuda-12-8

# Install clang-devel, libnccl-devel, and libstdc++-static
sudo dnf install clang-devel libnccl-devel libstdc++-static

# Install RDMA libraries (needed for tensor_engine builds)
sudo dnf install -y libibverbs rdma-core libmlx5 libibverbs-devel rdma-core-devel

# Clone and sync dependencies
git clone https://github.com/meta-pytorch/monarch.git
cd monarch

# Install in development mode with all dependencies
uv sync

# Or install without tensor_engine
USE_TENSOR_ENGINE=0 uv sync

# Verify installation
uv run python -c "from monarch import actor; print('Monarch installed successfully')"

# Rebuild (e.g., after changing Rust code)
USE_TENSOR_ENGINE=0 uv pip install -e .

On Ubuntu distributions

# Install nightly rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
rustup toolchain install nightly
rustup default nightly

# Install Ubuntu-specific system dependencies
sudo apt install -y cmake ninja-build protobuf-compiler libunwind-dev clang

# Set clang as the default C/C++ compiler
export CC=clang
export CXX=clang++

# Install the correct CUDA and cuda-toolkit versions for your machine
sudo apt install -y cuda-toolkit-12-8 cuda-12-8

# Install RDMA libraries (needed for tensor_engine builds)
sudo apt install -y rdma-core libibverbs1 libmlx5-1 libibverbs-dev

# Clone and sync dependencies
git clone https://github.com/meta-pytorch/monarch.git
cd monarch

# Install in development mode with all dependencies
uv sync

# Or install without tensor_engine (CPU-only)
USE_TENSOR_ENGINE=0 uv sync

# Verify installation
uv run python -c "from monarch import actor; print('Monarch installed successfully')"

# Rebuild (e.g., after changing Rust code)
USE_TENSOR_ENGINE=0 uv pip install -e .

On non-CUDA machines

You can also build Monarch on non-CUDA machines (e.g., macOS laptops) for CPU-only usage. The tensor engine itself works on CPU; only the GPU-specific bits (NCCL, RDMA, rdmaxcel) are skipped. Auto-detection handles hosts with no CUDA or ROCm installed. If your host does have a GPU toolchain installed but you want the CPU tensor engine anyway, set MONARCH_GPU_PLATFORM=none.

# Install nightly rust toolchain
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup toolchain install nightly
rustup default nightly

# Clone and sync dependencies
git clone https://github.com/meta-pytorch/monarch.git
cd monarch

# Build the CPU tensor engine (auto-detects no GPU)
uv sync

# Or, to skip the tensor engine entirely (actors only, no torch required)
USE_TENSOR_ENGINE=0 uv sync

# Verify installation
uv run python -c "from monarch import actor; print('Monarch installed successfully')"

Alternative: Using pip

If you prefer to use pip instead of uv:

# After installing system dependencies (see above)

# Build and install Monarch
pip install .

# Or for development
pip install -e .

# Without tensor_engine
USE_TENSOR_ENGINE=0 pip install -e .
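
As with the uv flow, you can sanity-check the install afterward:

# Verify installation
python -c "from monarch import actor; print('Monarch installed successfully')"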

Running examples

Check out the examples/ directory for demonstrations of how to use Monarch's APIs.

We'll be adding more examples as we stabilize and polish functionality!

Running tests

We have both Rust and Python unit tests. Rust tests are run with cargo-nextest and Python tests are run with pytest.

Rust tests

Important: Monarch's Rust code uses PyO3 to interface with Python, which means the Rust binaries need to link against Python libraries. Before running Rust tests, you need to have a Python environment activated (conda, venv, or uv):

# If using uv (recommended)
uv sync  # This creates the project's virtual environment
uv run cargo nextest run  # Run tests within the uv environment

# Or if using conda
conda activate monarchenv
cargo nextest run

# Or if using venv
source .venv/bin/activate
cargo nextest run

Without an active Python environment, you'll get Python linking errors like:

error: could not find native static library `python3.12`, perhaps an -L flag is missing?
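
If the build keeps picking up the wrong interpreter, PyO3 itself honors the PYO3_PYTHON environment variable; whether Monarch's build scripts add anything on top is an assumption, but pointing PyO3 at the active environment's interpreter is a reasonable first step:

# Point PyO3 at the interpreter from the currently active environment
PYO3_PYTHON=$(command -v python) cargo nextest run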

Installing cargo-nextest:

# We use cargo-nextest to run our tests, as it provides strong process isolation
# between every test.
# Here we install it from source, but you can instead use a pre-built binary described
# here: https://nexte.st/docs/installation/pre-built-binaries/
cargo install cargo-nextest --locked

cargo-nextest supports all of the filtering flags of "cargo test".
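
For example (the test name below is illustrative, borrowed from the DISABLED example later in this document):

# Run tests whose names match a substring, as with `cargo test`
uv run cargo nextest run test_child_lifecycle

# Or use a nextest filter expression for finer control
uv run cargo nextest run -E 'test(test_child_lifecycle)'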

Python tests

# Install test dependencies (if not already installed via uv sync)
uv sync --extra test

# Run unit tests with uv
uv run pytest python/tests/ -v -m "not oss_skip"

# Or if using pip
pip install -e '.[test]'
pytest python/tests/ -v -m "not oss_skip"
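
To iterate on a single test, pytest's -k flag narrows the run by name (the test name here is illustrative):

# Run a single test by (substring) name
uv run pytest python/tests/ -v -k test_my_function -m "not oss_skip"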

Disabling flaky CI tests

If a test is consistently failing in OSS CI and needs to be temporarily disabled without a code change, open a GitHub issue on this repo with a title of the form:

DISABLED <test-name>

At the start of each CI run, scripts/fetch_disabled_tests.py fetches all open issues whose titles start with DISABLED and skips the named tests. Closing the issue re-enables the test on the next run.

Naming format:

  • Rust (cargo nextest): use the test name exactly as it appears in nextest output: <binary> <module::path::test_fn>, e.g. DISABLED hyperactor proc::tests::test_child_lifecycle
  • Python (pytest): use the test function name, e.g. DISABLED test_my_function
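
For example, with the GitHub CLI (a hypothetical invocation; opening the issue through the web UI works just as well):

# Disable a flaky Rust test until the underlying failure is fixed
gh issue create --title "DISABLED hyperactor proc::tests::test_child_lifecycle" \
  --body "Flaky in OSS CI; disabling while we investigate."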

Overriding skips locally

To run a test that is currently disabled via a GitHub issue, you can override the fetched skip lists by creating the files before running scripts/fetch_disabled_tests.py. The script will not overwrite files that already exist:

  • disabled_tests.txt — controls which Python tests are skipped. Create this file with only the tests you want to skip (or leave it empty to skip none).
  • .config/nextest-filter.txt — controls which Rust tests are skipped. Write a nextest filter expression here (e.g. all() to run all tests, or not (test(some_test)) to skip only specific ones).

For example, to run all tests locally regardless of open issues:

echo -n "" > disabled_tests.txt
echo "all()" > .config/nextest-filter.txt
uv run python scripts/fetch_disabled_tests.py   # will skip both writes
uv run pytest python/tests/ -v -m "not oss_skip"

License

Monarch is BSD-3 licensed, as found in the LICENSE file.
