fork() for AI agents — snapshot, branch, and merge live agent state.

Project description

ProcessFork

git for AI agents. Snapshot, fork, and merge live LLM sessions in 8 ms.

Snapshot a 4-hour Claude Code session in 8 ms, fork into 12 attempts, merge the winner back, push to a registry.

60-second demo: pf snapshot → pf fork ×12 → pf merge → pf push file:// → pf clone on a fresh store
Replay it locally: asciinema play demo/processfork-demo.cast

crates.io · PyPI · npm · MIT · CI · 200 tests · 8 ms snapshot · 88.96% line coverage · Rust + Py + TS


Why

You're 4 hours into a refactor with Claude Code. The agent has read 200 files, run 47 tests, opened a database, started a dev server. Then it suggests a destructive change.

Today: lose everything, undo by hand, or restart. With ProcessFork: pf snapshot → 8 ms → safe. Try 12 alternatives in parallel, merge the winner back, ship the whole session to a teammate.

It's git — snapshot, branch, merge, push, clone — but for live AI agent state.

Highlights

  • 8 ms snapshots. Full agent state — model + KV-cache + files + tools + reasoning — into one content-addressed .pfimg.
  • 🌳 Real fork & merge. 12 parallel attempts share storage automatically (CoW). Merge the winner with a real 3-way diff (files, tools, trace) — git-style <<<<<<< markers and all.
  • 🔒 Won't double-send your email. HMAC-chained tool-call ledger; restored agents see prior side-effects as facts, not as actions to re-issue. (ACRFence-resistant.)
  • 🤝 Drop-in for Claude Code, LangGraph, OpenInterpreter, vLLM, SGLang, AutoGen, CrewAI.
  • 📦 Single binary, MIT, Rust core, Python + TypeScript SDKs. 200+ tests.
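The HMAC-chained ledger behind "won't double-send your email" is easy to sketch. The following is a concept sketch in plain Python with the stdlib `hmac` module, not ProcessFork's actual wire format; the key handling and entry fields are invented for illustration:

```python
import hmac, hashlib, json

KEY = b"per-session-secret"  # illustrative only; real key management is out of scope here

def append(ledger, call):
    """Append a tool call, chaining its MAC over the previous entry's MAC."""
    prev = ledger[-1]["mac"] if ledger else ""
    body = json.dumps(call, sort_keys=True) + prev
    mac = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
    ledger.append({"call": call, "mac": mac})

def verify(ledger):
    """Recompute the chain; an edited or reordered entry breaks every later MAC."""
    prev = ""
    for entry in ledger:
        body = json.dumps(entry["call"], sort_keys=True) + prev
        expect = hmac.new(KEY, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(entry["mac"], expect):
            return False
        prev = entry["mac"]
    return True

ledger = []
append(ledger, {"tool": "send_email", "to": "alice@example.com"})
append(ledger, {"tool": "run_tests", "suite": "unit"})
ok_before = verify(ledger)                       # True: chain intact
ledger[0]["call"]["to"] = "attacker@example.com" # tamper with history
ok_after = verify(ledger)                        # False: every later MAC is now invalid
```

Because each MAC covers the previous one, rewriting any historical entry invalidates the rest of the chain, which is what lets a restored agent treat the ledger as trustworthy evidence of side-effects already performed.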

Quick start (60 seconds)

# install the CLI:
cargo install processfork                      # → `pf` on your $PATH

# snapshot a directory:
mkdir /tmp/sandbox && echo "fn main() {}" > /tmp/sandbox/main.rs
pf snapshot --agent-id demo --fs-root /tmp/sandbox
# → sha256:1c2497b0…   ⏱ 8 ms

# edit something, snapshot again, see the diff:
echo "fn main() { println!(\"hi\") }" > /tmp/sandbox/main.rs
pf snapshot --agent-id demo --fs-root /tmp/sandbox --name v2
pf log
pf diff <first-cid> <second-cid>

Prefer Python? pip install processfork. TypeScript? npm install @processfork/sdk.

The full 60-second demo (snapshot → fork ×12 → merge → push → clone on a fresh store) is one command: bash demo/script.sh. It runs end-to-end on a laptop; no GPU, no API keys.
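The sha256:… IDs that pf snapshot prints are content addresses: identical content always hashes to the same CID, and any edit produces a new one. A minimal illustration of the idea (a toy directory hash, not ProcessFork's actual manifest format):

```python
import hashlib, os, tempfile

def snapshot_cid(root):
    """Hash every (relative path, contents) pair in sorted order into one digest."""
    h = hashlib.sha256()
    for dirpath, _, files in sorted(os.walk(root)):
        for name in sorted(files):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return "sha256:" + h.hexdigest()

root = tempfile.mkdtemp()
with open(os.path.join(root, "main.rs"), "w") as f:
    f.write("fn main() {}")
v1 = snapshot_cid(root)

with open(os.path.join(root, "main.rs"), "w") as f:
    f.write('fn main() { println!("hi") }')
v2 = snapshot_cid(root)  # any change yields a different CID; identical trees yield the same one
```

Determinism is the point: snapshotting the same tree twice yields the same CID, so unchanged state deduplicates for free.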

When you'd reach for it

| Situation | Command |
| --- | --- |
| Agent about to do something destructive | pf snapshot pre-rm-rf |
| Stuck — want to try 12 approaches in parallel | pf fork -n 12 --explore "fix bug" |
| Hand a complex session to a teammate | pf push hf://you/session-name |
| Time-travel debug ("when did it go wrong?") | pf log, then pf checkout <CID> |
| RL rollout fabric for agent training | snapshot, fan out, score, merge |
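The fork-and-explore rows boil down to branch, score, keep the winner. A toy simulation of that control flow in plain Python (the attempts and scoring function are made up; the real workflow runs through pf snapshot / pf fork / pf merge):

```python
from concurrent.futures import ThreadPoolExecutor

base_state = {"file": "fn main() {}", "attempt": None}

def explore(i):
    """Each fork gets its own copy of the state and tries one approach."""
    fork = dict(base_state, attempt=i)
    fork["score"] = 100 - abs(i - 7)  # pretend attempt 7 fixes the bug best
    return fork

# fan out 12 independent attempts from the same snapshot
with ThreadPoolExecutor(max_workers=12) as pool:
    forks = list(pool.map(explore, range(12)))

# score each branch and merge only the winner back
winner = max(forks, key=lambda f: f["score"])
```

The same loop is the shape of an RL rollout fabric: snapshot once, fan out N rollouts, score them, and keep (or weight) the best.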

Use it with your stack

| Adapter | Status | What it gives you |
| --- | --- | --- |
| Claude Code | ✅ ships v1.0 | /snapshot, /fork, /merge slash-commands inside any session |
| LangGraph | ✅ ships v1.0 | drop-in BaseCheckpointSaver over the FS+env+trace+effects layers |
| OpenInterpreter | ✅ ships v1.0 | interpreter.snapshot("pre-rm-rf") then .checkout("pre-rm-rf") |
| AutoGen | ✅ ships v1.0 | atomic FS+env+trace+effects snapshot across an agent group |
| CrewAI | ✅ ships v1.0 | CrewMemory drop-in; every step time-travelable |
| vLLM | 🟡 mock ships v1.0 · live = Modal lane | mock: K/V page bytes + manifest persist & restore via the SDK; live (Modal A10G): V0 engine bit-exact, V1 engine output-equivalent (see "What does and doesn't ship in v1.0.x" below) |
| SGLang | 🟡 mock ships v1.0 · live = Modal lane | mock: RadixCache k_buffer/v_buffer page round-trip; live: scaffolded; the Modal lane reaches the parity stub, but full radix-tree replay is v1.1 |

How it works

ProcessFork captures the five things that together make up a live agent — atomically — into one content-addressed file. Each layer ships at a different maturity level in v1.0.x:

  • World: filesystem (full), env (default-redacted), browser DOM (CDP). In-flight subprocesses are not captured by pf snapshot; the procs blob writes a procs.unsupported.v1 placeholder unless a CRIU/zombie-restart adapter is wired in. Status: ✅ FS + env ship; procs is a placeholder.
  • Effects: append-only ledger of tool calls, HMAC-chained per entry (ACRFence). Status: ✅ ships (CLI + Python SDK + TS SDK + 5 adapters).
  • Trace: chat + tool-call message log. Status: ✅ ships.
  • Model: LoRA / IA³ / full-finetune weight diffs, in-place TTT updates. The format and the TIES + DARE merge math ship and are exercised on the Modal A10G lane; the generic CLI snapshot path produces an empty LoRA envelope because this layer is populated by adapters (vLLM/SGLang/etc.), not by walking a directory. Status: 🟡 format ships; CLI path is a placeholder; adapter-populated.
  • Cache: paged KV-cache, content-addressed per page (CoW across forks). Same shape: the format and page math ship; the generic CLI snapshot produces an empty page manifest; the vLLM/SGLang adapters populate it for real. Status: 🟡 format ships; CLI path is a placeholder; adapter-populated.
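Content addressing per page is what makes CoW sharing automatic: identical pages hash to the same key and are stored once, no matter how many forks reference them. A toy page store showing the mechanism (the page size and manifest shape here are invented for illustration, not ProcessFork's format):

```python
import hashlib

PAGE = 4096
store = {}  # digest -> page bytes, shared across every fork

def put(blob):
    """Split a blob into pages; return a manifest of per-page digests."""
    manifest = []
    for off in range(0, len(blob), PAGE):
        page = blob[off:off + PAGE]
        digest = hashlib.sha256(page).hexdigest()
        store[digest] = page          # dedup: an already-seen page costs nothing
        manifest.append(digest)
    return manifest

# a 64-page "cache" where every page is distinct
base = b"".join(bytes([i]) * PAGE for i in range(64))
base_manifest = put(base)

# 12 forks, each differing from the base in exactly one page
fork_manifests = [put(base[:-PAGE] + bytes([200 + j]) * PAGE) for j in range(12)]
```

In this toy run, 12 forks plus the base cost 76 unique pages rather than 13 × 64 = 832, because each fork only adds the single page it changed; the forks' manifests point at the shared pages for everything else.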

Identical content shares storage automatically — 12 parallel forks use ~1.004× the space of one in the operator's matrix run, well under the < 1.5× budget. The merge engine handles each layer with the right algorithm: git-style 3-way diff for files (conflict markers materialize; resolution UI is v1.1), TIES + DARE for model weights, the HMAC effects chain that defends against semantic-rollback attacks (ACRFence), and an LLM-summarized "what branch B learned" patch injected into branch A's reasoning trace without re-prefilling the cache.
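The git-style 3-way file merge described above follows the classic rule: take whichever side changed relative to the base, and emit conflict markers when both sides changed the same spot. A deliberately simplified line-wise sketch (it assumes all three versions have the same line count, sidestepping the alignment problem a real diff engine solves):

```python
def merge3(base, ours, theirs):
    """Line-wise 3-way merge; assumes all versions have the same line count."""
    out = []
    for lb, la, lt in zip(base, ours, theirs):
        if la == lt:            # both sides agree
            out.append(la)
        elif la == lb:          # only theirs changed this line
            out.append(lt)
        elif lt == lb:          # only ours changed this line
            out.append(la)
        else:                   # both changed it differently: conflict
            out += ["<<<<<<< ours", la, "=======", lt, ">>>>>>> theirs"]
    return out

base   = ["fn main() {", "    // todo", "}"]
ours   = ["fn main() {", '    println!("hi");', "}"]
theirs = ["fn main() {", '    println!("bye");', "}"]
merged = merge3(base, ours, theirs)  # line 2 conflicts, so markers appear around it
```

Non-conflicting edits merge silently; only lines both branches rewrote differently get wrapped in <<<<<<< / ======= / >>>>>>> markers, exactly the shape a downstream resolution step expects.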

What does and doesn't ship in v1.0.x

Production-credible today (independent retest, 12/12 matrix passing):

  • pf snapshot / pf checkout for filesystem sandboxes, with default secret-shaped env redaction.
  • HMAC-chained effects ledger end-to-end (CLI + Python + TS), tamper detected by pf verify.
  • Fork & merge: 12 forks at ~1.004× storage; clean and conflicting merges produce content-addressed merged CIDs with Git-style markers in conflict files.
  • File:// (and OCI / S3 / HF) registry transport.
  • 5 adapters (Claude Code, LangGraph, OpenInterpreter, AutoGen, CrewAI) over the FS + env + trace + effects layers.
  • vLLM/SGLang mock mode: K/V page bytes + manifest persist into the store and read back on checkout.

Not yet production-ready, though the format and code paths exist:

  • Live in-flight subprocess capture. The world layer's procs blob is a placeholder (procs.unsupported.v1); a CRIU-based adapter is the v1.1 deliverable. Today, restored sessions do not bring back live PIDs; they bring back the FS + env + trace + effects state that lets a fresh worker continue.
  • Local PF_HAS_GPU=1 vLLM/SGLang test (examples/06, examples/07, pf-cache/tests/cache_bit_exact_vllm.rs). These exit 2 with a "use the adapter packages directly + Modal lane" pointer — they were operator-runs-it skeletons that never got a self-contained subprocess flow. The Modal A10G lane (scripts/gpu-validate-modal.py) does run vLLM end-to-end and emits the JSONs in benchmarks/gpu-validation/.
  • Bit-exact KV-cache restore on the vLLM V1 engine. The Modal lane shows the V0 engine at bit_exact: true for 38 619 KV pages, but the V1 engine is only output-equivalent (first-80-chars match), not bit-exact, on TinyLlama-1.1B. The V1 path goes through collective_rpc, and the engine retains non-determinism even in its deterministic mode that we do not yet eliminate. Treat live V1 KV restore as a "lossy semantic restore" today.
  • Conflict-merge resolution UI. The merge engine writes Git-style <<<<<<< markers and emits a merged CID; an interactive pf merge --resolve <cid> flow is v1.1.
  • Generic CLI model/cache layer capture. The generic pf snapshot produces empty model + cache envelopes — these layers are populated through adapters, not by walking a directory. If you want the model+cache layers populated, use the vLLM or SGLang adapter from inside your engine process.

Architecture deep-dive · Three-way merge protocol · Engineering specs

Status

v1.0.11 tagged. This is a documentation honesty pass after the v1.0.10 retest: the README's "ships now" framing for vLLM/SGLang and the "bit-exact" metric row were not telling the same story as benchmarks/gpu-validation/*.json and the examples/06+07 runners that exit 2 under PF_HAS_GPU=1.

This release does not change runtime behavior. The v1.0.10 fixes (TS SDK scrub + HMAC ledger), the v1.0.9 fixes (Python SDK scrub + HMAC ledger), and the earlier audit-round fixes all stand. What it changes:

  • The adapter status table now separates mock from live (Modal lane).
  • The 5-layer breakdown marks Model and Cache as adapter-populated (the generic CLI path emits empty envelopes).
  • A new "What does and doesn't ship in v1.0.x" subsection makes the boundary explicit: no in-flight subprocess capture, no local PF_HAS_GPU=1 self-contained vLLM test, no V1-engine bit-exactness, no conflict-resolution UI.
  • The example runners and the cache_bit_exact_vllm.rs panic message now point at the actually-true status.

cargo deny check: advisories, bans, licenses, and sources all still pass.

| Metric | Observed | Target |
| --- | --- | --- |
| Snapshot p50, synthetic 4-layer fixture (macOS arm64) | 7.9 ms | < 500 ms p99 |
| Snapshot p50, real GPU host (Modal A10G, 64 × 4 KiB) | 42 ms (warm) | < 500 ms p99 |
| KV-cache restore, vLLM V0 engine + TinyLlama-1.1B on A10G | bit_exact: true (38 619 KV pages; regenerated text byte-identical per JSON) | out_a == out_b byte-equal |
| KV-cache restore, vLLM V1 engine (collective_rpc) | output-equivalent, not bit-exact (first-80-chars match across snapshot/restore on 38 599 KV pages; the bit_exact: false field is the source of truth) | out_a == out_b byte-equal (target unmet on V1) |
| Cache capture, 64 pages | 531 µs | n/a |
| 12-fork ÷ 1-fork storage ratio (auditor's matrix) | 1.004× | ≤ 1.5× |
| Total Rust tests passing | 199 | n/a |
| Python SDK + Claude adapter tests | 17 | n/a |
| TS SDK smoke tests | 8 (incl. 3 v1.0.10 regressions) | n/a |

Synthetic-fixture numbers come from cargo bench --workspace. GPU numbers come from modal run scripts/gpu-validate-modal.py; raw JSON lives in benchmarks/gpu-validation/ and the breakdown in benchmarks/RESULTS.md. The local PF_HAS_GPU=1 paths in examples/06 and examples/07 are not the validation path — they exit 2 with a Modal-lane pointer; the validation IS the Modal lane, and the JSONs above are its output.

Install

cargo install processfork                          # Rust CLI (the `pf` binary)
pip   install processfork                          # Python SDK
npm   install @processfork/sdk                     # TypeScript SDK

Per-adapter packages (one each on PyPI):

pip install processfork-claude-code
pip install processfork-langgraph
pip install processfork-openinterpreter
pip install "processfork-vllm[vllm]"               # needs CUDA + vllm ≥ 0.10
pip install "processfork-sglang[sglang]"           # needs CUDA + sglang ≥ 0.5
pip install "processfork-autogen[autogen]"
pip install "processfork-crewai[crewai]"

Build from source if you want to hack on it:

git clone https://github.com/manav8498/processfork && cd processfork
cargo build --release -p processfork               # → target/release/pf

Full build-from-source instructions are in docs/install.md. Pre-built wheels cover macOS (arm64 and x86_64), Linux (x86_64 and aarch64), and Windows x86_64.

Repo layout

crates/      Rust workspace (10 crates: pf-core, pf-cache, pf-world, pf-effects,
             pf-model, pf-merge, pf-registry, processfork (CLI, the `pf` binary), pf-py, pf-ts)
adapters/    7 first-party integration packages
benchmarks/  PFBench harness + Criterion microbench
docs/        mdBook source (25+ pages)
examples/    8 self-contained runnable examples
demo/        60-second demo recording script

Docs

Your first fork (5 min) · 60-second demo · Architecture · Merge protocol · Security model · Performance tuning · Engineering specs

Contributing

PRs welcome. The bar is cargo fmt, cargo clippy --all-targets -- -D warnings, cargo test --workspace, plus a green coverage delta. See CONTRIBUTING.md.

License

MIT.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files available for this release. See the tutorial on generating distribution archives.

Built Distributions

If you're not sure about the file name format, learn more about wheel file names.

processfork-1.0.11-cp39-abi3-win_amd64.whl (1.4 MB view details)

Uploaded CPython 3.9+Windows x86-64

processfork-1.0.11-cp39-abi3-manylinux_2_28_x86_64.whl (1.6 MB view details)

Uploaded CPython 3.9+manylinux: glibc 2.28+ x86-64

processfork-1.0.11-cp39-abi3-manylinux_2_28_aarch64.whl (1.5 MB view details)

Uploaded CPython 3.9+manylinux: glibc 2.28+ ARM64

processfork-1.0.11-cp39-abi3-macosx_11_0_arm64.whl (1.3 MB view details)

Uploaded CPython 3.9+macOS 11.0+ ARM64

processfork-1.0.11-cp39-abi3-macosx_10_12_x86_64.whl (1.5 MB view details)

Uploaded CPython 3.9+macOS 10.12+ x86-64

File details

Details for the file processfork-1.0.11-cp39-abi3-win_amd64.whl.

File metadata

  • Download URL: processfork-1.0.11-cp39-abi3-win_amd64.whl
  • Upload date:
  • Size: 1.4 MB
  • Tags: CPython 3.9+, Windows x86-64
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for processfork-1.0.11-cp39-abi3-win_amd64.whl
Algorithm Hash digest
SHA256 b7e4d4b33834662b80aa5a7bd4e7a85a1ff5a34d25cba02b2353c96e3ea82dc4
MD5 08abe863ec6413958a8c2f03ff39305d
BLAKE2b-256 080fffecc52c36771f1b92bdeb06baff8199fc07c04f4a7666eaed5bb16f6ac8

See more details on using hashes here.

Provenance

The following attestation bundles were made for processfork-1.0.11-cp39-abi3-win_amd64.whl:

Publisher: release.yml on manav8498/processfork

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file processfork-1.0.11-cp39-abi3-manylinux_2_28_x86_64.whl.

File hashes

Hashes for processfork-1.0.11-cp39-abi3-manylinux_2_28_x86_64.whl
Algorithm Hash digest
SHA256 0f46cb1623ef005d784dedd65538c5b72b7713b66e2bc243a24a22c3eca4de10
MD5 5a8b39c5d83ae97609ecdb5b5b0d3fde
BLAKE2b-256 7fa4d5e84c33cb24db128bff27a8e6403a289578a1ba2cc9772552823d29cd62

See more details on using hashes here.

Provenance

The following attestation bundles were made for processfork-1.0.11-cp39-abi3-manylinux_2_28_x86_64.whl:

Publisher: release.yml on manav8498/processfork

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file processfork-1.0.11-cp39-abi3-manylinux_2_28_aarch64.whl.

File hashes

Hashes for processfork-1.0.11-cp39-abi3-manylinux_2_28_aarch64.whl
Algorithm Hash digest
SHA256 72587874f6c4ad5e65b518652c6503f5801abc942c82955ee36f72a3325e96b5
MD5 b563e2e866306d1ef0aec972e13ef6b2
BLAKE2b-256 5aa1403ed7b3a8d97b1959dd90e85469d0bb50d2e650c7d6064a1699b248c63e

See more details on using hashes here.

Provenance

The following attestation bundles were made for processfork-1.0.11-cp39-abi3-manylinux_2_28_aarch64.whl:

Publisher: release.yml on manav8498/processfork

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file processfork-1.0.11-cp39-abi3-macosx_11_0_arm64.whl.

File hashes

Hashes for processfork-1.0.11-cp39-abi3-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 74d8b7ac952bebb62286858213d6fb14de7615faff68891633c3de73a8ab752d
MD5 5a39e3a15f6bc894d44034d1d6113ca9
BLAKE2b-256 075d4817b41387e347e1acfb02f27b6f69f4bc05c7d668659f20ea335760edcf

See more details on using hashes here.

Provenance

The following attestation bundles were made for processfork-1.0.11-cp39-abi3-macosx_11_0_arm64.whl:

Publisher: release.yml on manav8498/processfork

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file processfork-1.0.11-cp39-abi3-macosx_10_12_x86_64.whl.

File hashes

Hashes for processfork-1.0.11-cp39-abi3-macosx_10_12_x86_64.whl
Algorithm Hash digest
SHA256 cd8ce4509aa5513e8167cd4e9dcb695381f3e126d9b857d6c3bfd61ef890c04e
MD5 5e2fc5dbae240917760ebc915ac28c1c
BLAKE2b-256 0247bd7bce7ca9fbb14609d47b0d1e067938cc028f864e29e2f79ffe907eba56

See more details on using hashes here.

Provenance

The following attestation bundles were made for processfork-1.0.11-cp39-abi3-macosx_10_12_x86_64.whl:

Publisher: release.yml on manav8498/processfork

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
