
SDK and CLI for coordinating LLM agent teams with content-addressed context capsules and reproducible run reports.


coordpy

Python SDK and CLI for coordinating teams of LLM agents with content-addressed, lifecycle-bounded context objects ("capsules"), plus a reproducible run/report contract.

PyPI: coordpy-ai · import: coordpy

Overview

Multi-agent stacks usually pass context around as raw prompts and JSON. That works until something breaks and you can't reconstruct what each agent actually saw. coordpy treats context as typed objects with content-derived IDs, declared parents, byte budgets, and a fixed lifecycle. A run produces a RunReport whose root is a sealed capsule DAG written to disk alongside a provenance manifest, and you can re-verify the whole thing from the bytes later.
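Content addressing is easy to sketch in plain Python. The block below is an illustrative model, not coordpy's actual capsule implementation: a capsule's ID is assumed to be the SHA-256 of its canonically serialised bytes, with parent IDs included in those bytes, so the root ID commits to the entire DAG.

```python
import hashlib
import json

def capsule_id(payload: dict, parents: list[str]) -> str:
    """Content-derived ID: hash the canonical bytes, parents included."""
    canonical = json.dumps(
        {"payload": payload, "parents": sorted(parents)},
        sort_keys=True, separators=(",", ":"),
    ).encode()
    return hashlib.sha256(canonical).hexdigest()

a = capsule_id({"role": "planner", "text": "step 1"}, [])
b = capsule_id({"role": "writer", "text": "answer"}, [a])

# Editing any ancestor changes every downstream ID.
a2 = capsule_id({"role": "planner", "text": "step 1 (edited)"}, [])
assert capsule_id({"role": "writer", "text": "answer"}, [a2]) != b
```

Because the hash covers the parent list, re-deriving IDs bottom-up detects tampering anywhere in the graph.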

Install

Requires Python 3.10 or newer.

pip install coordpy-ai

Verify:

coordpy --version           # prints "coordpy X.Y.Z (coordpy.sdk.v3.43)"
python -c "import coordpy; print(coordpy.__version__)"

The first parenthetical (coordpy.sdk.v3.43) is the research-line tag exposed at coordpy.SDK_VERSION. It tracks the underlying research programme and is independent of the PyPI version.

The only required dependency is NumPy. Optional extras:

Extra          Pulls in                                   When you want it
[scientific]   scipy, networkx                            numerical / graph helpers
[crypto]       cryptography                               optional signed-capsule paths
[dl]           torch, peft                                the deep-learning research path
[heavy]        hnswlib, transformers, RestrictedPython    full research stack (heavy)
[docker]       docker                                     Docker-backed sandbox
[dev]          ruff, black, mypy, pytest, build, twine    contributing
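Extras can be combined in a single install; quoting keeps the brackets away from the shell:

```shell
pip install "coordpy-ai[scientific,crypto]"
```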

Quickstart

import coordpy

report = coordpy.run(coordpy.RunSpec(
    profile="local_smoke",
    out_dir="/tmp/cp-smoke",
))

assert report["readiness"]["ready"]
assert report["provenance"]["schema"] == "coordpy.provenance.v1"
assert report["capsules"]["chain_ok"]

print(report["capsules"]["root_cid"])

coordpy.run writes seven files into out_dir. The two you will reach for most are product_report.json (the same shape as the returned dict) and capsule_view.json (the sealed capsule chain that coordpy-capsule verify re-hashes). The remaining five (provenance.json, meta_manifest.json, readiness_verdict.json, product_summary.txt, sweep_result.json) are always written and are useful for audit. The root_cid is the SHA-256 of the run's RUN_REPORT capsule; it is deterministic in the capsule bytes, but differs between runs because provenance includes a wall-clock timestamp.
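The re-hashing step behind chain verification can be modelled in a few lines. This is a sketch under an assumed capsule shape (each record carries its CID, payload, and parent CIDs); the real capsule_view.json schema has more fields, and this is not coordpy's verifier.

```python
import hashlib
import json

def cid(payload, parents):
    """Hypothetical CID: SHA-256 over canonical payload + parent list."""
    blob = json.dumps({"payload": payload, "parents": parents},
                      sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

def chain_ok(capsules):
    """Re-derive every CID from the bytes; True iff all recorded CIDs match."""
    return all(c["cid"] == cid(c["payload"], c["parents"]) for c in capsules)

root_payload = {"kind": "RUN_REPORT"}
view = [{"cid": cid(root_payload, []), "payload": root_payload, "parents": []}]
assert chain_ok(view)
```

Any flipped byte in a payload or parent list makes the recorded CID stale, so chain_ok goes False without needing any state beyond the file itself.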

Console scripts

Command                                           Purpose
coordpy --profile <name> --out-dir <dir>          Run a profile end to end and write the seven artefacts.
coordpy-ci --report <product_report.json>         Apply the CI pass/fail gate to a finished report.
coordpy-capsule view --report ...                 Summarise the capsule graph.
coordpy-capsule verify --report ...               Re-hash the capsule chain end to end.
coordpy-import --jsonl <file>                     Audit a SWE-bench-Lite-style JSONL for compatibility.

A typical chain:

coordpy --profile local_smoke --out-dir /tmp/cp-smoke
coordpy-ci --report /tmp/cp-smoke/product_report.json --min-pass-at-1 1.0
coordpy-capsule view   --report /tmp/cp-smoke/product_report.json
coordpy-capsule verify --report /tmp/cp-smoke/product_report.json

To exercise coordpy-import against the bundled mini fixture (no external file required):

FIXTURE=$(python -c 'import coordpy, os; print(os.path.join(os.path.dirname(coordpy.__file__), "_internal/tasks/data/swe_real_shape_mini.jsonl"))')
coordpy-import --jsonl "$FIXTURE" --out /tmp/audit.json
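The shape of such an audit is easy to approximate generically: parse each JSONL line and count what fails. The required_keys below are illustrative placeholders, not coordpy-import's actual schema.

```python
import json

def audit_jsonl(lines, required_keys=("instance_id", "problem_statement")):
    """Count well-formed records vs. lines that fail to parse or lack keys."""
    ok, bad = 0, []
    for lineno, line in enumerate(lines, 1):
        try:
            rec = json.loads(line)
        except json.JSONDecodeError:
            bad.append(lineno)
            continue
        if all(k in rec for k in required_keys):
            ok += 1
        else:
            bad.append(lineno)
    return {"ok": ok, "bad_lines": bad}

sample = [
    '{"instance_id": "x-1", "problem_statement": "fix the bug"}',
    "not json",
]
print(audit_jsonl(sample))  # {'ok': 1, 'bad_lines': [2]}
```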

Agent teams

AgentTeam.from_env reads its backend from COORDPY_* environment variables and requires a configured backend to run — either a reachable Ollama server or an OpenAI-compatible API key. To run a team without a network, see the SyntheticLLMClient example below.

from coordpy import AgentTeam, agent

team = AgentTeam.from_env(
    [
        agent("planner",    "Break the task into 2-3 concrete steps."),
        agent("researcher", "Gather the facts that matter."),
        agent("writer",     "Write the final answer for the user."),
    ],
    model="gpt-4o-mini",
    backend_name="openai",
    team_instructions=(
        "Reuse visible handoffs instead of restating the task."
    ),
)
result = team.run("Explain what coordpy does.")
print(result.final_output)
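The handoff pattern itself needs no backend to understand: each agent sees the task plus every earlier agent's output. The sketch below is a conceptual model of sequential coordination, not AgentTeam's actual implementation; llm stands in for any callable backend.

```python
def run_team(agents, task, llm):
    """agents: list of (name, instructions); llm: callable(prompt) -> str."""
    handoffs = []
    for name, instructions in agents:
        prompt = "\n".join(
            [f"Task: {task}", f"You are {name}. {instructions}"]
            + [f"{n} said: {out}" for n, out in handoffs]
        )
        handoffs.append((name, llm(prompt)))
    return handoffs[-1][1]  # the final agent's output

# A stub backend just to show the wiring.
stub = lambda prompt: "ok"
print(run_team([("planner", "Plan."), ("writer", "Write.")], "hi", stub))
```

Making earlier handoffs visible in later prompts is what lets the writer "reuse visible handoffs instead of restating the task", as the team_instructions above ask.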

Local Ollama:

export COORDPY_BACKEND=ollama
export COORDPY_MODEL=qwen2.5:0.5b
export COORDPY_OLLAMA_URL=http://localhost:11434

OpenAI-compatible provider:

export COORDPY_BACKEND=openai
export COORDPY_MODEL=gpt-4o-mini
export COORDPY_API_KEY=...
# Optional, for non-default providers:
# export COORDPY_API_BASE_URL=https://your-provider.example/v1
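The env-var contract above maps onto a small dispatch function. The sketch below shows only the selection logic implied by the variables listed here; it is hypothetical and not coordpy's backend_from_env, which also constructs and validates the client.

```python
import os

def backend_config(env=os.environ):
    """Pick backend settings from COORDPY_* variables (illustrative only)."""
    backend = env.get("COORDPY_BACKEND", "ollama")
    cfg = {"backend": backend, "model": env.get("COORDPY_MODEL")}
    if backend == "ollama":
        cfg["url"] = env.get("COORDPY_OLLAMA_URL", "http://localhost:11434")
    elif backend == "openai":
        cfg["api_key"] = env.get("COORDPY_API_KEY")
        cfg["base_url"] = env.get("COORDPY_API_BASE_URL")  # optional override
    return cfg

print(backend_config({"COORDPY_BACKEND": "openai", "COORDPY_MODEL": "gpt-4o-mini"}))
```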

To run a team without a network or an API key, pass a SyntheticLLMClient directly:

from coordpy import create_team, agent
from coordpy.synthetic_llm import SyntheticLLMClient

team = create_team(
    [agent("planner", "..."), agent("writer", "...")],
    backend=SyntheticLLMClient(default_response="ok"),
)
print(team.run("hi").final_output)

examples/build_with_coordpy.py is an eight-step demo that drives every public layer this way.

Public surface

  • coordpy SDK (Stable): RunSpec, run, RunReport, SweepSpec, run_sweep, CoordPyConfig, Agent, AgentTeam, agent, create_team, profiles, report, ci_gate, import_data, extensions, capsule primitives, schema constants, OpenAICompatibleBackend, OllamaBackend, backend_from_env
  • Console scripts (Stable): coordpy, coordpy-import, coordpy-ci, coordpy-capsule
  • On-disk schemas (Stable): coordpy.capsule_view.v1, coordpy.provenance.v1, phase45.product_report.v2
  • coordpy.__experimental__ (Experimental; may move or disappear between releases): a tuple of names covering the research-grade trust-adjudication primitives and the multi-agent coordination ladder behind the research papers

The experimental surface ships in the same wheel for reproducibility and audit. Pin against coordpy.__experimental__ if you depend on it.

Limitations

  • coordpy works at the capsule layer. It does not provide transformer-internal trust transfer or hidden-state access.
  • The bundled cross-host evidence comes from the small two-node lab where it was generated. Behaviour at larger scales has not been measured.
  • Not peer-reviewed. The code, tests, results notes, and theorem registry are public so they can be challenged.

License

MIT. See LICENSE.

Download files


Source Distribution

coordpy_ai-0.5.19.tar.gz (844.8 kB)

Uploaded Source

Built Distribution


coordpy_ai-0.5.19-py3-none-any.whl (788.4 kB)

Uploaded Python 3

File details

Details for the file coordpy_ai-0.5.19.tar.gz.

File metadata

  • Download URL: coordpy_ai-0.5.19.tar.gz
  • Size: 844.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for coordpy_ai-0.5.19.tar.gz
Algorithm Hash digest
SHA256 47327b8c7390ea1f2817f2d90cdf39b264ee8a564797c5f0d8437142749dd3aa
MD5 bdf1b3bf9db05029b64968752bf43dc5
BLAKE2b-256 68ed0b8169aacfd31787967ccbb7bf01ed43d75ce31850215c9e7abdf6251cc2


Provenance

The following attestation bundles were made for coordpy_ai-0.5.19.tar.gz:

Publisher: release.yml on adotdong29/context-zero

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file coordpy_ai-0.5.19-py3-none-any.whl.

File metadata

  • Download URL: coordpy_ai-0.5.19-py3-none-any.whl
  • Size: 788.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for coordpy_ai-0.5.19-py3-none-any.whl
Algorithm Hash digest
SHA256 bc063780dcd6e320d52844a1cf799556051c1208c09b6fb4be53e1e2db95d368
MD5 b9ebfb26b35762286fae0e3bc649bd6e
BLAKE2b-256 425e9a4f09a82ce096b809310b7fea4ab354108e7febf4c90410fa9e82e371f5


Provenance

The following attestation bundles were made for coordpy_ai-0.5.19-py3-none-any.whl:

Publisher: release.yml on adotdong29/context-zero

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
