# irohds

Decentralized function memoization over iroh P2P.
A drop-in Python decorator that caches function results and shares them automatically across every machine running the same code. No servers to manage, no configuration, no accounts.
If someone at another institution already computed train_model("cifar10", epochs=50), your machine downloads the result instead of spending hours
recomputing it. If nobody has computed it yet, your machine does the work
and makes the result available to everyone else.
```python
import irohds

@irohds.memo
def train_model(dataset, epochs=10):
    ...  # hours of GPU time
    return model

result = train_model("cifar10", epochs=50)
# First run: computes (hours). Every subsequent run, on any peer: instant.
```
## Who is this for
Research groups and institutions that repeatedly run expensive computations across many machines. If your lab has 20 people who all run the same preprocessing pipeline on the same datasets, irohds means only the first person waits. Everyone else gets the result in seconds.
Works across institutions, across continents, across networks. Peers find each other through the BitTorrent mainline DHT (16M+ nodes). No central server, no coordinator, no shared filesystem required.
## What it is not for
Functions that complete in under 15 seconds. The network overhead of sharing results only pays off for genuinely expensive computations. For fast functions, use `functools.cache`, `joblib.Memory`, or `diskcache`. irohds will warn you if a decorated function is too cheap to benefit from network sharing.
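The warning behavior can be approximated with a simple timing check. This is an illustrative sketch only, not irohds's actual implementation: the `warn_if_cheap` name and the exact threshold are assumptions mirroring the 15-second guidance above.

```python
import functools
import time
import warnings

MIN_WORTHWHILE_S = 15.0  # assumed threshold, mirroring the guidance above


def warn_if_cheap(fn):
    """Hypothetical decorator: time each run and warn if it finishes too fast."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        elapsed = time.perf_counter() - start
        if elapsed < MIN_WORTHWHILE_S:
            warnings.warn(
                f"{fn.__name__} ran in {elapsed:.2f}s; too cheap to benefit "
                "from network sharing"
            )
        return result
    return wrapper


@warn_if_cheap
def quick(x):
    return x + 1
```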
## Install

```shell
uv add irohds
```
This installs the Python package, the Rust daemon binary, and the coren machine capability library. The daemon starts automatically on first use and installs itself as a system service (starts at boot, runs in a sandbox).
## Usage

```python
import irohds

# Basic: share results with all peers globally
@irohds.memo
def expensive_etl(dataset_path):
    ...
    return processed_data

# Namespaced: only share with peers using the same namespace
@irohds.memo(ns="my-lab")
def train(config):
    ...

# Large file outputs
@irohds.memo
def generate_embeddings(corpus):
    ...
    torch.save(embeddings, irohds.resolve("embeddings.pt"))
    return irohds.FileRef("embeddings.pt")

ref = generate_embeddings("pubmed-2024")
embeddings = torch.load(ref.path)  # file is on disk, ready to use

# Selective eviction
irohds.evict("mymodule.train")  # clear cached results for one function

# Pre-warm peer discovery (optional, reduces first-call latency)
irohds.join("my-lab")
```
## How it works
On the first call: irohds hashes the function's AST and arguments into a cache key, executes the function, stores the result in a local content-addressed blob store, and announces it to peers via gossip.
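The key construction can be sketched as follows. The exact hashing scheme irohds uses is not documented here, so treat `cache_key` and its normalization choices (AST dump plus a canonical JSON rendering of the arguments) as assumptions that merely illustrate the idea.

```python
import ast
import hashlib
import json


def cache_key(source: str, args: tuple, kwargs: dict) -> str:
    # Dump the parsed AST so whitespace and comments don't change the key,
    # then fold in a canonical representation of the call arguments.
    tree_repr = ast.dump(ast.parse(source))
    arg_repr = json.dumps([list(args), kwargs], sort_keys=True, default=repr)
    return hashlib.sha256((tree_repr + "\n" + arg_repr).encode()).hexdigest()


# Same AST despite different formatting -> same key; different args -> new key.
src_a = "def f(x):\n    return x * 2\n"
src_b = "def f(x):\n    return x*2  # reformatted\n"
```

Hashing the AST rather than the raw source means cosmetic edits don't invalidate the cache, while any change to the logic does.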
On subsequent calls (same machine): the result is returned from an in-process dict (~0.1us) or from the local blob store via IPC (~0.2ms). No network involved.
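The two local tiers can be sketched like this. It is a simplified stand-in: the real blob store is content-addressed and owned by the daemon, reached over IPC rather than direct file reads, and `TwoTierCache` is a name invented for illustration.

```python
import os
import tempfile


class TwoTierCache:
    """Sketch: an in-process dict in front of an on-disk store."""

    def __init__(self, root: str):
        self.mem = {}     # tier 1: lives in this process (~dict lookup speed)
        self.root = root  # tier 2: survives process restarts

    def put(self, key: str, value: bytes) -> None:
        self.mem[key] = value
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(value)

    def get(self, key: str):
        if key in self.mem:               # fast path: same process
            return self.mem[key]
        path = os.path.join(self.root, key)
        if os.path.exists(path):          # warm path: after a restart
            with open(path, "rb") as f:
                self.mem[key] = f.read()
            return self.mem[key]
        return None                       # full miss: caller recomputes


cache = TwoTierCache(tempfile.mkdtemp())
cache.put("abc123", b"result")
cache.mem.clear()  # simulate a process restart; the disk tier still has it
```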
On a different machine: irohds checks whether any peer has the result. If yes, it uses coren to decide whether downloading is faster than recomputing locally (based on the function's compute cost and this machine's capabilities). Then it either fetches the result or recomputes, whichever is faster.
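At its core, the fetch-or-recompute choice is a comparison of estimated transfer time against estimated compute time. A minimal sketch follows; the inputs coren actually measures are not spelled out above, so the parameters here are assumptions.

```python
def should_fetch(result_bytes: int, bandwidth_bytes_per_s: float,
                 est_compute_s: float) -> bool:
    # Fetch when downloading the cached result beats recomputing it locally.
    transfer_s = result_bytes / bandwidth_bytes_per_s
    return transfer_s < est_compute_s


# A 100 MB result over ~12.5 MB/s (100 Mbit/s) takes ~8 s to fetch;
# if recomputing takes an hour, fetching clearly wins.
```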
Peer discovery is automatic via three mechanisms:
- Mainline DHT (global, zero config, 16M+ nodes)
- mDNS (automatic on LAN)
- Bootstrap peers (fallback for networks that block DHT)
The daemon (irohds-daemon) is a sandboxed Rust process that owns
the blob store and handles gossip/P2P networking. It installs as a
system service on first use. Python communicates with it over a Unix
socket. The sandbox ensures iroh network traffic cannot access the host
filesystem beyond the irohds data directory.
## Restricted networks

If the mainline DHT is blocked (some university and corporate networks), add known peers to `~/.local/share/irohds/config.toml`:

```toml
bootstrap_peers = ["<hex-encoded-node-id>"]
```

Get a peer's node ID with `irohds-daemon info`.
## Performance
| Scenario | Latency |
|---|---|
| Repeated call, same process | ~0.1us (in-process dict) |
| First call after process start, data local | ~0.2ms (one IPC round-trip) |
| First call after daemon restart, data local | ~1ms (load index + IPC) |
| Result available from remote peer | seconds (network transfer) |
| Full miss, compute locally | depends on function |
## Developing

```shell
cargo build --manifest-path daemon/Cargo.toml   # build the daemon
make test      # Rust + Python tests
make test-vm   # NixOS QEMU P2P integration test
```
## License
MIT
## Download files
### File details

Details for the file `irohds-0.3.0.tar.gz`.

#### File metadata

- Download URL: irohds-0.3.0.tar.gz
- Upload date:
- Size: 67.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.0

#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `4937291792c210be18cb5d940dec291997c135e9ee7baafe8590372dccc2f418` |
| MD5 | `1a167ea0d8e9f2f7db3475386a3b12c5` |
| BLAKE2b-256 | `2bc32546ed2023a0d5df013e0320df8ad226ae2c0af5bc8a2293a5ea39d6e1e0` |
### File details

Details for the file `irohds-0.3.0-py3-none-macosx_11_0_arm64.whl`.

#### File metadata

- Download URL: irohds-0.3.0-py3-none-macosx_11_0_arm64.whl
- Upload date:
- Size: 10.1 MB
- Tags: Python 3, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.0

#### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `83c4c6a8af08e873c45943e59b746cdf6be9b59e9eb3d1ba0569e777284f7f19` |
| MD5 | `0a7cc2fbe611266334567c7a779f85fb` |
| BLAKE2b-256 | `1259537ce9236dda5cc441572a8ef2da75ab0e46f8d2de680e571a53204ea9d9` |