
MacFleet

Pool Apple Silicon Macs into a distributed ML training cluster.

Turn spare MacBooks, Mac Minis, and Mac Studios into one big GPU. MacFleet connects them over Thunderbolt, Ethernet, or WiFi and splits training across all of them automatically.

  macfleet join              macfleet join            macfleet join
 ┌──────────────┐          ┌──────────────┐          ┌──────────────┐
 │  MacBook Pro │◄────────►│  MacBook Air │◄────────►│  Mac Studio  │
 │  M4 Pro      │  WiFi /  │  M4          │  WiFi /  │  M4 Ultra    │
 │  16 GPU cores│  ETH /   │  10 GPU cores│  ETH /   │  60 GPU cores│
 │  48 GB RAM   │  TB4     │  16 GB RAM   │  TB4     │  192 GB RAM  │
 └──────────────┘          └──────────────┘          └──────────────┘
         ▲                          ▲                          ▲
         └──────────────────────────┴──────────────────────────┘
                        Ring AllReduce (gradient sync)

Install

pip install macfleet              # core
pip install "macfleet[torch]"     # + PyTorch
pip install "macfleet[mlx]"       # + Apple MLX
pip install "macfleet[all]"       # everything

(Extras are quoted because zsh, the default macOS shell, otherwise treats the brackets as a glob pattern.)

Quick Start

1. Join the pool (run on each Mac):

macfleet join

No config files, no IP addresses. Macs find each other automatically via mDNS/Bonjour.
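
Under the hood this is standard mDNS service discovery. A minimal sketch with the python-zeroconf library shows the shape of it; the service type, name, address, port, and properties below are assumptions for illustration, not MacFleet's actual registration:

import socket
from zeroconf import ServiceBrowser, ServiceInfo, Zeroconf

# Hypothetical service type -- MacFleet's real registration may differ.
SERVICE_TYPE = "_macfleet._tcp.local."

class PeerListener:
    def add_service(self, zc, type_, name):
        info = zc.get_service_info(type_, name)
        if info:
            addrs = [socket.inet_ntoa(a) for a in info.addresses]
            print(f"discovered peer {name} at {addrs}, port {info.port}")

    def remove_service(self, zc, type_, name):
        print(f"peer left: {name}")

    def update_service(self, zc, type_, name):
        pass

zc = Zeroconf()
# Advertise this node (instance name, address, port, properties are placeholders)...
info = ServiceInfo(
    SERVICE_TYPE,
    "macbook-pro." + SERVICE_TYPE,
    addresses=[socket.inet_aton("192.168.1.10")],
    port=5555,
    properties={"gpu_cores": "16"},
)
zc.register_service(info)
# ...and watch for everyone else on the LAN.
browser = ServiceBrowser(zc, SERVICE_TYPE, PeerListener())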

2. Train:

import macfleet
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Placeholder data: 1,000 flattened 28x28 images with integer class labels
X_train = torch.randn(1000, 784)
y_train = torch.randint(0, 10, (1000,))

with macfleet.Pool() as pool:
    result = pool.train(model=model, dataset=(X_train, y_train), epochs=10)

Features

  • Dual engine — PyTorch (MPS) and Apple MLX, same pool infrastructure
  • Zero config — mDNS discovery, no coordinator setup, no config files
  • Adaptive compression — auto-selects TopK + FP16 based on link speed (1x–200x reduction)
  • Heterogeneous scheduling — faster Macs get bigger batches, and allocations adjust for thermal throttling (see the sketch after this list)
  • Secure by default — auto-generated fleet tokens, HMAC mutual auth, mandatory TLS, gradient validation
  • Framework-agnostic core — communication layer uses only numpy, never imports torch or mlx
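
The scheduling policy isn't spelled out here, but "faster Macs get bigger batches" is essentially proportional allocation. A minimal sketch, assuming measured per-node throughput in samples/sec as the signal (the function and node names are made up for illustration):

def split_batch(global_batch: int, throughput: dict[str, float]) -> dict[str, int]:
    """Assign each node a share of the global batch proportional to its
    measured throughput, using largest-remainder rounding."""
    total = sum(throughput.values())
    exact = {node: global_batch * t / total for node, t in throughput.items()}
    sizes = {node: int(e) for node, e in exact.items()}
    # Hand the leftover samples to the nodes with the largest remainders.
    leftover = global_batch - sum(sizes.values())
    for node in sorted(exact, key=lambda n: exact[n] - sizes[n], reverse=True)[:leftover]:
        sizes[node] += 1
    return sizes

# A thermally throttling MacBook Air reports lower throughput, so it gets a smaller slice:
print(split_batch(256, {"macbook-pro": 900.0, "macbook-air": 350.0, "mac-studio": 2400.0}))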

Security

Security is enabled by default. The first macfleet join auto-generates a fleet token and saves it to ~/.macfleet/fleet-token:

macfleet join                    # auto-generates token, prints it
macfleet join --token <token>    # join with a specific token (copy from first node)
macfleet join --fleet-id lab     # isolate by fleet name
macfleet join --open             # disable security (not recommended)

What's protected:

  • Fleet isolation — nodes with different tokens are invisible to each other on the network
  • Mutual authentication — HMAC-SHA256 challenge-response on every connection (sketched after this list)
  • Encryption — TLS enabled automatically (mandatory with auth)
  • Authenticated heartbeat — HMAC-signed liveness probes, replay-resistant
  • Gradient validation — rejects NaN, Inf, and extreme magnitudes (anti-poisoning)
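
Neither the handshake nor the validation thresholds are published, so treat the following as illustrative sketches rather than MacFleet's wire protocol. A textbook HMAC-SHA256 challenge-response with a fresh per-connection nonce looks like this:

import hashlib
import hmac
import os
import secrets

def load_token() -> bytes:
    # The fleet token written by the first `macfleet join`
    with open(os.path.expanduser("~/.macfleet/fleet-token"), "rb") as f:
        return f.read().strip()

# Verifier: issue a fresh random challenge per connection (replay resistance).
challenge = secrets.token_bytes(32)

# Prover: answer with HMAC-SHA256 keyed by the shared fleet token.
response = hmac.new(load_token(), challenge, hashlib.sha256).digest()

# Verifier: recompute and compare in constant time.
expected = hmac.new(load_token(), challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)

Gradient validation reduces to a finiteness and magnitude check before a peer's contribution is merged (the norm bound below is an arbitrary placeholder):

import numpy as np

def gradient_is_sane(g: np.ndarray, max_norm: float = 1e6) -> bool:
    """Reject NaN/Inf and implausibly large gradients from a peer."""
    return bool(np.isfinite(g).all()) and float(np.linalg.norm(g)) <= max_norm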

CLI

macfleet join       Join the pool (auto-discovers peers)
macfleet status     Show pool members and network info
macfleet info       Show local hardware profile
macfleet train      Run training (demo or custom script)
macfleet bench      Benchmark compute, network, or allreduce
macfleet diagnose   System health check

How It Works

MacFleet uses data parallelism: every Mac holds a full copy of the model, trains on a weighted portion of the data, and averages gradients via Ring AllReduce after each step.
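
Ring AllReduce is why adding Macs doesn't multiply traffic: each node exchanges roughly 2x the gradient size per step no matter how large the ring is, first summing chunk-by-chunk (reduce-scatter), then circulating the finished sums (all-gather). A single-process numpy sketch of the data movement, with the networking elided:

import numpy as np

def ring_allreduce(grads):
    """Simulate Ring AllReduce: each node i contributes grads[i] and every
    node ends up with the element-wise mean of all of them."""
    n = len(grads)
    chunks = [np.array_split(g.astype(np.float64), n) for g in grads]

    # Reduce-scatter: after n-1 steps, node i holds the full sum of chunk (i+1) % n.
    for step in range(n - 1):
        sends = [(i, (i - step) % n, chunks[i][(i - step) % n].copy()) for i in range(n)]
        for i, c, data in sends:
            chunks[(i + 1) % n][c] += data

    # All-gather: n-1 more steps circulate the completed chunks around the ring.
    for step in range(n - 1):
        sends = [(i, (i + 1 - step) % n, chunks[i][(i + 1 - step) % n].copy()) for i in range(n)]
        for i, c, data in sends:
            chunks[(i + 1) % n][c] = data

    return [np.concatenate(ch) / n for ch in chunks]

grads = [np.random.randn(1024) for _ in range(3)]
reduced = ring_allreduce(grads)
assert np.allclose(reduced[0], sum(grads) / 3)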

Network         Compression        100 MB of gradients becomes
Thunderbolt 4   None               100 MB
Ethernet        TopK 10% + FP16    ~5 MB
WiFi            TopK 1% + FP16     ~500 KB
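
The exact codec isn't shown, but the table's arithmetic is easy to reproduce: 100 MB of FP32 is 25M values, and keeping the top 10% of them at half precision leaves ~5 MB of payload (the index array adds overhead on top, which real codecs pack or delta-encode). A plain numpy sketch of Top-K + FP16:

import numpy as np

def topk_fp16_compress(grad: np.ndarray, k_frac: float):
    """Keep only the largest-magnitude k_frac of entries, stored as FP16."""
    flat = grad.ravel()
    k = max(1, int(flat.size * k_frac))
    idx = np.argpartition(np.abs(flat), -k)[-k:]        # top-k by magnitude
    return idx.astype(np.uint32), flat[idx].astype(np.float16), grad.shape

def decompress(idx, vals, shape):
    out = np.zeros(int(np.prod(shape)), dtype=np.float32)
    out[idx] = vals.astype(np.float32)
    return out.reshape(shape)

grad = np.random.randn(25_000_000).astype(np.float32)   # ~100 MB of FP32
idx, vals, shape = topk_fp16_compress(grad, 0.10)
print(vals.nbytes / 1e6, "MB of values")                # ~5 MB, indices extra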

Requirements

  • macOS with Apple Silicon (M1/M2/M3/M4)
  • Python 3.11+
  • PyTorch 2.1+ or MLX 0.5+

Development

git clone https://github.com/vikranthreddimasu/MacFleet.git
cd MacFleet
pip install -e ".[dev,all]"
make test       # 373 tests
make lint       # ruff + mypy

License

MIT
