# Operon 🧬

**Biologically inspired architectures for more reliable AI agent systems**

Biomimetic wiring diagrams for robust agentic systems. From agent heuristics toward structural guarantees.
Operon is a research-grade library and reference implementation for biologically inspired agent control patterns. The API is still evolving.
## The Problem: Fragile Agents
Most agent systems fail structurally, not just locally.
A worker can hallucinate and nobody checks it. A sequential chain accumulates handoff cost. A tool-rich workflow becomes harder to route safely than a single-agent baseline. In practice, adding more agents often adds more failure surface unless the wiring is doing real control work.
Operon is a library for making that structure explicit. It gives you pattern-first building blocks like reviewer gates, specialist swarms, skill organisms, and topology advice, while keeping the lower-level wiring and analysis layers available when you need them.
## Installation

```bash
pip install operon-ai
```

For provider-backed stages, configure whichever model backend you want to use through the existing `Nucleus` provider layer.
## Start Here: Pattern-First API
If you are new to Operon, start here rather than with the full biological vocabulary.
- `advise_topology(...)`: when you want architecture guidance
- `reviewer_gate(...)`: when you want one worker plus a review bottleneck
- `specialist_swarm(...)`: when you want centralized specialist decomposition
- `skill_organism(...)`: when you want a provider-bound workflow with cheap vs. expensive stages and attachable telemetry
- `managed_organism(...)`: when you want the full stack (adaptive assembly, watcher, substrate, development, social learning) in one call
### Get topology advice
```python
from operon_ai import advise_topology

advice = advise_topology(
    task_shape="sequential",
    tool_count=2,
    subtask_count=3,
    error_tolerance=0.02,
)
print(advice.recommended_pattern)  # single_worker_with_reviewer
print(advice.suggested_api)        # reviewer_gate(...)
print(advice.rationale)
### Add a reviewer gate
```python
from operon_ai import reviewer_gate

gate = reviewer_gate(
    executor=lambda prompt: f"EXECUTE: {prompt}",
    reviewer=lambda prompt, candidate: "safe" in prompt.lower(),
)

result = gate.run("Deploy safe schema migration")
print(result.allowed)
print(result.output)
```
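The control flow behind the gate is simple: the executor produces a candidate, and nothing reaches the caller until the reviewer approves it. A minimal stand-alone version of that pattern (Operon's internals may differ; `GateResult` and `run_gate` here are illustrative names):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class GateResult:
    allowed: bool
    output: Optional[str]

def run_gate(executor: Callable[[str], str],
             reviewer: Callable[[str, str], bool],
             prompt: str) -> GateResult:
    candidate = executor(prompt)       # worker produces a candidate
    if reviewer(prompt, candidate):    # reviewer is the structural bottleneck
        return GateResult(True, candidate)
    return GateResult(False, None)     # blocked: nothing leaks through

result = run_gate(lambda p: f"EXECUTE: {p}",
                  lambda p, c: "safe" in p.lower(),
                  "Deploy safe schema migration")
print(result.allowed, result.output)
# True EXECUTE: Deploy safe schema migration
```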
### Build a skill organism
```python
from operon_ai import MockProvider, Nucleus, SkillStage, TelemetryProbe, skill_organism

fast = Nucleus(provider=MockProvider(responses={
    "return a deterministic routing label": "EXECUTE: billing",
}))
deep = Nucleus(provider=MockProvider(responses={
    "billing": "EXECUTE: escalate to the billing review workflow",
}))

organism = skill_organism(
    stages=[
        SkillStage(name="intake", role="Normalizer", handler=lambda task: {"request": task}),
        SkillStage(
            name="router",
            role="Classifier",
            instructions="Return a deterministic routing label.",
            mode="fixed",
        ),
        SkillStage(
            name="planner",
            role="Planner",
            instructions="Use the routing result to propose the next action.",
            mode="fuzzy",
        ),
    ],
    fast_nucleus=fast,
    deep_nucleus=deep,
    components=[TelemetryProbe()],
)

result = organism.run("Customer says the refund never posted.")
print(result.final_output)
```
### Drop down a layer when you need to

The pattern layer is additive, not a separate framework. You can still inspect the generated structure and analysis underneath. For a gate returned by `reviewer_gate(...)`:
```python
gate.diagram
gate.analysis
```
For a swarm returned by `specialist_swarm(...)`:

```python
swarm.diagram
swarm.analysis
```
## Bi-Temporal Memory
Append-only factual memory with dual time axes (valid time vs record time) for auditable decision-making. Stages can read from and write to a shared BiTemporalMemory substrate, enabling belief-state reconstruction ("what did the organism know when stage X decided?").
```python
from operon_ai import BiTemporalMemory, MockProvider, Nucleus, SkillStage, skill_organism

mem = BiTemporalMemory()
nucleus = Nucleus(provider=MockProvider(responses={}))

organism = skill_organism(
    stages=[
        SkillStage(
            name="research",
            role="Researcher",
            handler=lambda task: {"risk": "medium", "sector": "fintech"},
            emit_output_fact=True,  # records output under subject=task
        ),
        SkillStage(
            name="strategist",
            role="Strategist",
            handler=lambda task, state, outputs, stage, view: f"Recommend based on {len(view.facts)} facts",
            read_query="Review account acct:1",  # must match the task string used as subject
        ),
    ],
    fast_nucleus=nucleus,
    deep_nucleus=nucleus,
    substrate=mem,
)

result = organism.run("Review account acct:1")
print(mem.history("Review account acct:1"))  # full append-only audit trail
```
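The two time axes are worth spelling out: *valid time* is when a fact held in the world, *record time* is when the organism learned it. Keeping both, append-only, is what makes "what did the organism know when stage X decided?" answerable. A minimal stand-alone illustration of the idea (not Operon's `BiTemporalMemory` API; all names below are hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Fact:
    subject: str
    value: str
    valid_time: int   # when the fact held in the world
    record_time: int  # when it was written to memory

class TinyBiTemporalStore:
    """Append-only store supporting 'what did we know at time t?' queries."""
    def __init__(self):
        self._facts: list = []

    def append(self, fact: Fact) -> None:
        self._facts.append(fact)  # never updated or deleted in place

    def known_as_of(self, subject: str, record_time: int) -> list:
        # Belief-state reconstruction: only facts recorded by `record_time`.
        return [f for f in self._facts
                if f.subject == subject and f.record_time <= record_time]

store = TinyBiTemporalStore()
store.append(Fact("acct:1", "risk=low", valid_time=1, record_time=1))
store.append(Fact("acct:1", "risk=high", valid_time=1, record_time=5))  # late correction

print(len(store.known_as_of("acct:1", record_time=2)))  # 1: correction not yet known
print(len(store.known_as_of("acct:1", record_time=5)))  # 2: full history, for audit
```

Because the late correction gets a fresh record time rather than overwriting the old entry, a decision made at time 2 can still be audited against exactly what was known then.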
See the Bi-Temporal Memory docs, examples 69–71, and the interactive explorer.
## Convergence: Structural Analysis for External Frameworks
The `operon_ai.convergence` package provides typed adapters that bring six external agent frameworks (Swarms, DeerFlow, AnimaWorks, Ralph, A-Evolve, Scion) into Operon's structural analysis layer. No external dependencies are required — all adapters operate on plain dicts.
```python
from operon_ai import PatternLibrary
from operon_ai.convergence import (
    parse_swarm_topology, analyze_external_topology,
    seed_library_from_swarms, get_builtin_swarms_patterns,
)

# Analyze a Swarms workflow with Operon's epistemic theorems
topology = parse_swarm_topology(
    "HierarchicalSwarm",
    agent_specs=[
        {"name": "manager", "role": "Manager"},
        {"name": "coder", "role": "Developer"},
        {"name": "reviewer", "role": "Reviewer"},
    ],
    edges=[("manager", "coder"), ("manager", "reviewer")],
)
result = analyze_external_topology(topology)
print(result.risk_score, result.warnings)

# Seed a PatternLibrary from Swarms' built-in patterns
library = PatternLibrary()
seed_library_from_swarms(library, get_builtin_swarms_patterns())
```
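The flavor of check this enables can be illustrated without the library: given a topology expressed as plain dicts and edges, flag workers whose output never reaches a reviewer. This is a toy rule of my own, not one of Operon's actual theorems:

```python
def unreviewed_workers(agents, edges):
    """Toy structural check: non-reviewer agents with no edge into a reviewer."""
    reviewers = {a["name"] for a in agents if a["role"] == "Reviewer"}
    # An agent counts as reviewed if it sends output to some reviewer.
    sends_to_reviewer = {src for src, dst in edges if dst in reviewers}
    return [a["name"] for a in agents
            if a["name"] not in reviewers and a["name"] not in sends_to_reviewer]

agents = [
    {"name": "manager", "role": "Manager"},
    {"name": "coder", "role": "Developer"},
    {"name": "reviewer", "role": "Reviewer"},
]
edges = [("manager", "coder"), ("manager", "reviewer")]

print(unreviewed_workers(agents, edges))  # ['coder']: its output bypasses review
```

In the hierarchy above, the coder receives work from the manager but sends its output to no reviewer, which is exactly the "worker hallucinates and nobody checks it" failure mode from the introduction.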
Compile organisms into deployment configs for Swarms, DeerFlow, Ralph, and Scion:
```python
from operon_ai.convergence import organism_to_swarms, organism_to_scion

swarms_config = organism_to_swarms(organism)
scion_config = organism_to_scion(organism, runtime="docker")
```
See examples 86–106 and the Convergence docs.
## Learn More
Public docs now live at banu.be/operon. The tracked source for that docs shell lives in the repo under `docs/site/`.
- Getting Started
- Pattern-First API
- Skill Organisms
- Bi-Temporal Memory
- Convergence
- Examples
- Concepts and Architecture
- Theory and Papers
- API Overview
- Hugging Face Spaces
- Release Notes
Direct links:
- Examples index (107 runnable examples)
- Wiring diagrams (63 architecture diagrams)
- Main whitepaper
- Epistemic topology paper
- PyPI package
- Epistemic Topology Explorer
- Diagram Builder
- Bi-Temporal Memory Explorer
## Contributing
Issues and pull requests are welcome. Start with the pattern-first examples, then drop into the lower-level layers only when the problem actually needs them.
## License
MIT