# coordpy

Python SDK and CLI for coordinating teams of LLM agents with content-addressed, lifecycle-bounded context objects ("capsules"), plus a reproducible run/report contract.

PyPI: `coordpy-ai` · import: `coordpy`
## Overview

Multi-agent stacks usually pass context around as raw prompts and JSON. That works until something breaks and you cannot reconstruct what each agent actually saw. coordpy treats context as typed objects with content-derived IDs, declared parents, byte budgets, and a fixed lifecycle. A run produces a RunReport whose root is a sealed capsule DAG written to disk alongside a provenance manifest, and you can re-verify the whole thing from the bytes later.
## Install

Requires Python 3.10 or newer.

```bash
pip install coordpy-ai
```

Verify:

```bash
coordpy --version    # coordpy 0.5.16 (coordpy.sdk.v3.43)
python -c "import coordpy; print(coordpy.__version__)"
```

The parenthetical `(coordpy.sdk.v3.43)` is the research-line tag exposed at `coordpy.SDK_VERSION`. It tracks the underlying research programme and is independent of the PyPI version.
The only required dependency is NumPy. Optional extras:

| Extra | Pulls in | When you want it |
|---|---|---|
| `[scientific]` | scipy, networkx | numerical / graph helpers |
| `[crypto]` | cryptography | optional signed-capsule paths |
| `[dl]` | torch, peft | the deep-learning research path |
| `[heavy]` | hnswlib, transformers, RestrictedPython | full research stack (heavy) |
| `[docker]` | docker | Docker-backed sandbox |
| `[dev]` | ruff, black, mypy, pytest, build, twine | contributing |
## Quickstart

```python
import coordpy

report = coordpy.run(coordpy.RunSpec(
    profile="local_smoke",
    out_dir="/tmp/cp-smoke",
))

assert report["readiness"]["ready"]
assert report["provenance"]["schema"] == "coordpy.provenance.v1"
assert report["capsules"]["chain_ok"]
print(report["capsules"]["root_cid"])
```
`coordpy.run` writes seven files into `out_dir`. The two you will reach for most are `product_report.json` (the same shape as the returned dict) and `capsule_view.json` (the sealed capsule chain that `coordpy-capsule verify` re-hashes); the others (`provenance.json`, `meta_manifest.json`, `readiness_verdict.json`, `product_summary.txt`, `sweep_result.json`) are always written and are useful for audit. The `root_cid` is the SHA-256 of the run's RUN_REPORT capsule; it is stable for a given input but differs between runs because provenance includes a wall-clock timestamp.
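The content-derived ID idea can be illustrated without coordpy at all. This is a minimal sketch, not coordpy's actual capsule encoding: hashing a canonical serialisation makes the ID depend only on content, never on key order or formatting.

```python
import hashlib
import json

def content_id(payload: dict) -> str:
    # Canonical form: sorted keys, no whitespace, so equal content
    # always serialises to the same bytes before hashing.
    blob = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

a = content_id({"kind": "RUN_REPORT", "body": "hello"})
b = content_id({"body": "hello", "kind": "RUN_REPORT"})
assert a == b  # key order does not change the ID
```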
## Console scripts

| Command | Purpose |
|---|---|
| `coordpy --profile <name> --out-dir <dir>` | Run a profile end to end and write the seven artefacts. |
| `coordpy-ci --report <product_report.json>` | Apply the CI pass/fail gate to a finished report. |
| `coordpy-capsule view --report ...` | Summarise the capsule graph. |
| `coordpy-capsule verify --report ...` | Re-hash the capsule chain end to end. |
| `coordpy-import --jsonl <file>` | Audit a SWE-bench-Lite-style JSONL for compatibility. |
A typical chain:

```bash
coordpy --profile local_smoke --out-dir /tmp/cp-smoke
coordpy-ci --report /tmp/cp-smoke/product_report.json --min-pass-at-1 1.0
coordpy-capsule view --report /tmp/cp-smoke/product_report.json
coordpy-capsule verify --report /tmp/cp-smoke/product_report.json
```
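Conceptually, the verify step re-derives every capsule ID from its bytes plus its parents' IDs and compares the result against the recorded values. A toy sketch of that hash-chain check follows; it is illustrative only and does not reproduce coordpy's actual capsule encoding.

```python
import hashlib
import json

def capsule_cid(body: str, parents: list[str]) -> str:
    # The ID covers both the payload and the declared parent IDs, so
    # tampering anywhere upstream changes every downstream ID.
    blob = json.dumps({"body": body, "parents": parents},
                      sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def verify_chain(capsules: list[dict]) -> bool:
    # Recompute each recorded cid from bytes and compare.
    return all(c["cid"] == capsule_cid(c["body"], c["parents"])
               for c in capsules)

root = {"body": "task", "parents": []}
root["cid"] = capsule_cid(root["body"], root["parents"])
leaf = {"body": "report", "parents": [root["cid"]]}
leaf["cid"] = capsule_cid(leaf["body"], leaf["parents"])

assert verify_chain([root, leaf])
leaf["body"] = "tampered"
assert not verify_chain([root, leaf])
```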
To exercise `coordpy-import` against the bundled mini fixture (no external file required):

```bash
FIXTURE=$(python -c 'import coordpy, os; print(os.path.join(os.path.dirname(coordpy.__file__), "_internal/tasks/data/swe_real_shape_mini.jsonl"))')
coordpy-import --jsonl "$FIXTURE" --out /tmp/audit.json
```
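An audit of this kind can be approximated in a few lines: stream the JSONL and report rows missing required keys. The field set below is an assumption for illustration, not coordpy's actual schema.

```python
import json
import tempfile

REQUIRED = {"instance_id", "repo", "problem_statement"}  # assumed field set

def audit_jsonl(path: str) -> dict:
    # Count compatible rows and record which keys each bad row lacks.
    ok, problems = 0, []
    with open(path) as fh:
        for lineno, line in enumerate(fh, 1):
            missing = REQUIRED - json.loads(line).keys()
            if missing:
                problems.append({"line": lineno, "missing": sorted(missing)})
            else:
                ok += 1
    return {"ok": ok, "problems": problems}

rows = [
    {"instance_id": "x-1", "repo": "a/b", "problem_statement": "fix it"},
    {"instance_id": "x-2"},  # missing fields
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as fh:
    fh.write("\n".join(json.dumps(r) for r in rows))

result = audit_jsonl(fh.name)
assert result["ok"] == 1
assert result["problems"][0]["missing"] == ["problem_statement", "repo"]
```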
## Agent teams

`AgentTeam.from_env` reads its backend from `COORDPY_*` environment variables and requires a configured backend to run: either a reachable Ollama server or an OpenAI-compatible API key. To run a team without a network, see the `SyntheticLLMClient` example below.

```python
from coordpy import AgentTeam, agent

team = AgentTeam.from_env(
    [
        agent("planner", "Break the task into 2-3 concrete steps."),
        agent("researcher", "Gather the facts that matter."),
        agent("writer", "Write the final answer for the user."),
    ],
    model="gpt-4o-mini",
    backend_name="openai",
    team_instructions=(
        "Reuse visible handoffs instead of restating the task."
    ),
)

result = team.run("Explain what coordpy does.")
print(result.final_output)
```
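The planner, researcher, writer flow above is a sequential handoff: each agent receives the accumulated context and appends its contribution. A generic sketch of that pattern (not coordpy's internals; the `Step` type is invented for illustration):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[str], str]

def run_pipeline(steps: list[Step], task: str) -> str:
    context = task
    for step in steps:
        # Each step sees the previous handoff rather than a restated task.
        context = step.run(context)
    return context

out = run_pipeline(
    [Step("planner", lambda t: f"plan({t})"),
     Step("writer", lambda t: f"answer({t})")],
    "Explain coordpy",
)
assert out == "answer(plan(Explain coordpy))"
```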
Local Ollama:

```bash
export COORDPY_BACKEND=ollama
export COORDPY_MODEL=qwen2.5:0.5b
export COORDPY_OLLAMA_URL=http://localhost:11434
```

OpenAI-compatible provider:

```bash
export COORDPY_BACKEND=openai
export COORDPY_MODEL=gpt-4o-mini
export COORDPY_API_KEY=...
# Optional, for non-default providers:
# export COORDPY_API_BASE_URL=https://your-provider.example/v1
```
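For reference, env-driven backend selection of the kind these variables imply can be sketched as follows. This is illustrative, not coordpy's actual resolution logic, and the defaults shown are assumptions taken from the examples above.

```python
import os

def backend_config(env=None) -> dict:
    # Accept any mapping so the logic is testable without touching os.environ.
    env = os.environ if env is None else env
    if env.get("COORDPY_BACKEND") == "ollama":
        return {
            "backend": "ollama",
            "model": env.get("COORDPY_MODEL", "qwen2.5:0.5b"),
            "url": env.get("COORDPY_OLLAMA_URL", "http://localhost:11434"),
        }
    return {
        "backend": "openai",
        "model": env.get("COORDPY_MODEL", "gpt-4o-mini"),
        "api_key": env.get("COORDPY_API_KEY"),
        "base_url": env.get("COORDPY_API_BASE_URL"),  # optional override
    }

cfg = backend_config({"COORDPY_BACKEND": "ollama"})
assert cfg["url"] == "http://localhost:11434"
```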
To run a team without a network or an API key, pass a `SyntheticLLMClient` directly:

```python
from coordpy import create_team, agent
from coordpy.synthetic_llm import SyntheticLLMClient

team = create_team(
    [agent("planner", "..."), agent("writer", "...")],
    backend=SyntheticLLMClient(default_response="ok"),
)
print(team.run("hi").final_output)
```

`examples/build_with_coordpy.py` is an eight-step demo that drives every public layer this way.
## Public surface

| Surface | Stability |
|---|---|
| coordpy SDK: `RunSpec`, `run`, `RunReport`, `SweepSpec`, `run_sweep`, `CoordPyConfig`, `Agent`, `AgentTeam`, `agent`, `create_team`, profiles, report, ci_gate, import_data, extensions, capsule primitives, schema constants, `OpenAICompatibleBackend`, `OllamaBackend`, `backend_from_env` | Stable |
| Console scripts: `coordpy`, `coordpy-import`, `coordpy-ci`, `coordpy-capsule` | Stable |
| On-disk schemas: `coordpy.capsule_view.v1`, `coordpy.provenance.v1`, `phase45.product_report.v2` | Stable |
| `coordpy.__experimental__` (a tuple of names exported under that attribute): research-grade trust-adjudication primitives and the multi-agent coordination ladder behind the research papers | Experimental, may move or disappear between releases |

The experimental surface ships in the same wheel for reproducibility and audit. Pin against `coordpy.__experimental__` if you depend on it.
## Limitations

- coordpy works at the capsule layer. It does not provide transformer-internal trust transfer or hidden-state access.
- The bundled cross-host evidence comes from the small two-node lab where it was generated. Behaviour at larger scales has not been measured.
- Not peer-reviewed. The code, tests, results notes, and theorem registry are public so they can be challenged.
## Where to go next

- Contributing: CONTRIBUTING.md
- Releasing to PyPI: RELEASING.md
- Security policy: SECURITY.md
- Changelog: CHANGELOG.md

## License

MIT. See LICENSE.
## File details

### coordpy_ai-0.5.16.tar.gz

- Size: 825.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `cd40f7f355ee096b668838381a92bcd404fcd4b1c697c2e4f47462f82d42c3fc` |
| MD5 | `cacd4e553de7b490b4338e988b80bf3a` |
| BLAKE2b-256 | `6e859903cc624c324028f66d15c325894842eb2632da234f20e87b0eca89e848` |
#### Provenance

The following attestation bundles were made for coordpy_ai-0.5.16.tar.gz:

- Publisher: release.yml on adotdong29/context-zero
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: coordpy_ai-0.5.16.tar.gz
- Subject digest: cd40f7f355ee096b668838381a92bcd404fcd4b1c697c2e4f47462f82d42c3fc
- Sigstore transparency entry: 1450420395
- Permalink: adotdong29/context-zero@dc5cd33d09aeac32b7f09030a0f7d62a54ecd0fb
- Branch / Tag: refs/heads/release/v0.5.16
- Owner: https://github.com/adotdong29
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@dc5cd33d09aeac32b7f09030a0f7d62a54ecd0fb
- Trigger Event: push
### coordpy_ai-0.5.16-py3-none-any.whl

- Size: 770.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

| Algorithm | Hash digest |
|---|---|
| SHA256 | `8f883b5e24d8356ced9c520781d10836967f1cda23043f8a7c8ea7b5afdec153` |
| MD5 | `1f022640e9ef840e58f2daf2eb279f8e` |
| BLAKE2b-256 | `98a1d428f1501794c3f04f2554eea659f91705ce86ff2b3786644f7031f5dc55` |
#### Provenance

The following attestation bundles were made for coordpy_ai-0.5.16-py3-none-any.whl:

- Publisher: release.yml on adotdong29/context-zero
- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: coordpy_ai-0.5.16-py3-none-any.whl
- Subject digest: 8f883b5e24d8356ced9c520781d10836967f1cda23043f8a7c8ea7b5afdec153
- Sigstore transparency entry: 1450420513
- Permalink: adotdong29/context-zero@dc5cd33d09aeac32b7f09030a0f7d62a54ecd0fb
- Branch / Tag: refs/heads/release/v0.5.16
- Owner: https://github.com/adotdong29
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: release.yml@dc5cd33d09aeac32b7f09030a0f7d62a54ecd0fb
- Trigger Event: push