Biomimetic wiring diagrams for robust agentic systems.

Operon 🧬

Biologically inspired architectures for more reliable AI agent systems

From agent heuristics toward structural guarantees.

Operon is a research-grade library and reference implementation for biologically inspired agent control patterns. The API is still evolving.

The Problem: Fragile Agents

Most agent systems fail structurally, not just locally.

A worker can hallucinate and nobody checks it. A sequential chain accumulates handoff cost. A tool-rich workflow becomes harder to route safely than a single-agent baseline. In practice, adding more agents often adds more failure surface unless the wiring is doing real control work.
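The compounding cost is easy to quantify: if each handoff in a sequential chain independently succeeds with probability p, end-to-end reliability decays geometrically with chain length. A quick back-of-the-envelope check (plain arithmetic, not Operon code):

```python
# End-to-end reliability of a sequential agent chain where each
# handoff independently succeeds with probability p.
def chain_reliability(p: float, steps: int) -> float:
    return p ** steps

# A 95%-reliable step looks fine in isolation...
print(round(chain_reliability(0.95, 1), 3))   # 0.95
# ...but a 10-step chain built from it fails roughly 40% of the time.
print(round(chain_reliability(0.95, 10), 3))  # 0.599
```

This is why wiring has to do real control work: a reviewer gate or retry loop changes the per-step term, not just the chain length.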

Operon is a library for making that structure explicit. It gives you pattern-first building blocks like reviewer gates, specialist swarms, skill organisms, and topology advice, while keeping the lower-level wiring and analysis layers available when you need them.

Installation

pip install operon-ai

For provider-backed stages, configure whichever model backend you want to use through the existing Nucleus provider layer.

Start Here: Pattern-First API

If you are new to Operon, start here rather than with the full biological vocabulary.

  • advise_topology(...) when you want architecture guidance
  • reviewer_gate(...) when you want one worker plus a review bottleneck
  • specialist_swarm(...) when you want centralized specialist decomposition
  • skill_organism(...) when you want a provider-bound workflow with cheap vs expensive stages and attachable telemetry — supports parallel stage groups via stages=[[s1, s2], [s3]]
  • managed_organism(...) when you want the full stack — adaptive assembly, watcher, substrate, development, social learning — in one call

Get topology advice

from operon_ai import advise_topology

advice = advise_topology(
    task_shape="sequential",
    tool_count=2,
    subtask_count=3,
    error_tolerance=0.02,
)

print(advice.recommended_pattern)  # single_worker_with_reviewer
print(advice.suggested_api)        # reviewer_gate(...)
print(advice.rationale)
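Under the hood, advice of this kind amounts to a decision rule over the task parameters. The sketch below is a hypothetical approximation of such a rule, written to show its shape; the function name, thresholds, and branches are illustrative assumptions, not Operon's actual heuristics:

```python
# Hypothetical topology-advice rule; names and thresholds are
# illustrative, not Operon's real implementation.
def advise(task_shape: str, tool_count: int, subtask_count: int,
           error_tolerance: float) -> str:
    if error_tolerance < 0.05 and task_shape == "sequential":
        # Tight error budget on a linear task: add a review bottleneck.
        return "single_worker_with_reviewer"
    if subtask_count > 5 or tool_count > 8:
        # Wide decomposition: centralized specialist routing scales better.
        return "specialist_swarm"
    return "single_worker"

print(advise("sequential", tool_count=2, subtask_count=3,
             error_tolerance=0.02))  # single_worker_with_reviewer
```

The value of putting the rule behind advise_topology(...) is that the recommendation arrives with a rationale and a suggested API, rather than as a bare label.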

Add a reviewer gate

from operon_ai import reviewer_gate

gate = reviewer_gate(
    executor=lambda prompt: f"EXECUTE: {prompt}",
    reviewer=lambda prompt, candidate: "safe" in prompt.lower(),
)

result = gate.run("Deploy safe schema migration")
print(result.allowed)
print(result.output)

Build a skill organism

from operon_ai import MockProvider, Nucleus, SkillStage, TelemetryProbe, skill_organism

fast = Nucleus(provider=MockProvider(responses={
    "return a deterministic routing label": "EXECUTE: billing",
}))
deep = Nucleus(provider=MockProvider(responses={
    "billing": "EXECUTE: escalate to the billing review workflow",
}))

organism = skill_organism(
    stages=[
        SkillStage(name="intake", role="Normalizer", handler=lambda task: {"request": task}),
        SkillStage(
            name="router",
            role="Classifier",
            instructions="Return a deterministic routing label.",
            mode="fixed",
        ),
        SkillStage(
            name="planner",
            role="Planner",
            instructions="Use the routing result to propose the next action.",
            mode="fuzzy",
        ),
    ],
    fast_nucleus=fast,
    deep_nucleus=deep,
    components=[TelemetryProbe()],
)

result = organism.run("Customer says the refund never posted.")
print(result.final_output)

Stages can be grouped for parallel execution:

organism = skill_organism(
    stages=[
        [  # These two run concurrently
            SkillStage(name="research_a", role="Researcher", instructions="...", mode="fixed"),
            SkillStage(name="research_b", role="Researcher", instructions="...", mode="fixed"),
        ],
        SkillStage(name="synthesize", role="Writer", instructions="...", mode="fuzzy"),
    ],
    fast_nucleus=fast,
    deep_nucleus=deep,
)
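Conceptually, a parallel group is a fan-out/fan-in step: every stage in the group receives the same input, and the next stage receives all of their outputs. A minimal, library-independent sketch of that execution model using a thread pool (the stage-as-callable shape and result handling are assumptions, not Operon internals):

```python
from concurrent.futures import ThreadPoolExecutor

def run_stages(groups, task):
    """Run stage groups in order; stages within a group run concurrently."""
    state = task
    for group in groups:
        stages = group if isinstance(group, list) else [group]
        with ThreadPoolExecutor() as pool:
            # Fan out: every stage in the group sees the same input...
            results = list(pool.map(lambda stage: stage(state), stages))
        # ...fan in: the next group receives the combined outputs.
        state = results[0] if len(results) == 1 else results
    return state

out = run_stages(
    [[lambda t: f"A({t})", lambda t: f"B({t})"],  # these two run concurrently
     lambda pair: " + ".join(pair)],               # joins both results
    "task",
)
print(out)  # A(task) + B(task)
```

Note that fan-in is where the control decisions live: a real pipeline must decide how to merge divergent outputs, which is exactly the synthesize-stage role in the example above.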

Drop down a layer when you need to

The pattern layer is additive, not a separate framework. You can still inspect the generated structure and analysis underneath. For a gate returned by reviewer_gate(...):

  • gate.diagram
  • gate.analysis

For a swarm returned by specialist_swarm(...):

  • swarm.diagram
  • swarm.analysis

Bi-Temporal Memory

Append-only factual memory with dual time axes (valid time vs record time) for auditable decision-making. Stages can read from and write to a shared BiTemporalMemory substrate, enabling belief-state reconstruction ("what did the organism know when stage X decided?").

from operon_ai import BiTemporalMemory, MockProvider, Nucleus, SkillStage, skill_organism

mem = BiTemporalMemory()
nucleus = Nucleus(provider=MockProvider(responses={}))

organism = skill_organism(
    stages=[
        SkillStage(
            name="research",
            role="Researcher",
            handler=lambda task: {"risk": "medium", "sector": "fintech"},
            emit_output_fact=True,  # records output under subject=task
        ),
        SkillStage(
            name="strategist",
            role="Strategist",
            handler=lambda task, state, outputs, stage, view: f"Recommend based on {len(view.facts)} facts",
            read_query="Review account acct:1",  # must match the task string used as subject
        ),
    ],
    fast_nucleus=nucleus,
    deep_nucleus=nucleus,
    substrate=mem,
)

result = organism.run("Review account acct:1")
print(mem.history("Review account acct:1"))  # full append-only audit trail
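The distinction between the two time axes is easiest to see in a stripped-down model. The class below is not BiTemporalMemory itself, just a minimal illustration of the same idea: facts are appended with both a valid time (when the fact was true in the world) and a record time (when the system learned it), so past belief states can be reconstructed:

```python
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    value: str
    valid_time: int    # when the fact was true in the world
    record_time: int   # when the system learned it

class MiniBiTemporalMemory:
    """Append-only store; reads filter by record time to rebuild old beliefs."""
    def __init__(self):
        self.facts: list[Fact] = []

    def record(self, fact: Fact) -> None:
        self.facts.append(fact)  # facts are never updated or deleted

    def known_at(self, subject: str, record_time: int) -> list[Fact]:
        # "What did we know about `subject` as of `record_time`?"
        return [f for f in self.facts
                if f.subject == subject and f.record_time <= record_time]

mem = MiniBiTemporalMemory()
mem.record(Fact("acct:1", "refund posted", valid_time=1, record_time=5))
mem.record(Fact("acct:1", "refund reversed", valid_time=2, record_time=9))

# At record time 5 the reversal was not yet known:
print(len(mem.known_at("acct:1", 5)))  # 1
print(len(mem.known_at("acct:1", 9)))  # 2
```

Filtering by record time rather than valid time is what makes the store auditable: a stage's decision can be replayed against exactly the facts that were recorded before it ran.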

See the Bi-Temporal Memory docs, examples 69–71, and the interactive explorer.

Convergence: Structural Analysis for External Frameworks

The operon_ai.convergence package provides typed adapters that bring six external agent frameworks (Swarms, DeerFlow, AnimaWorks, Ralph, A-Evolve, Scion) into Operon's structural analysis layer. No external dependencies are required; the adapters operate on plain dicts.

from operon_ai import PatternLibrary
from operon_ai.convergence import (
    parse_swarm_topology, analyze_external_topology,
    seed_library_from_swarms, get_builtin_swarms_patterns,
)

# Analyze a Swarms workflow with Operon's epistemic theorems
topology = parse_swarm_topology(
    "HierarchicalSwarm",
    agent_specs=[
        {"name": "manager", "role": "Manager"},
        {"name": "coder", "role": "Developer"},
        {"name": "reviewer", "role": "Reviewer"},
    ],
    edges=[("manager", "coder"), ("manager", "reviewer")],
)
result = analyze_external_topology(topology)
print(result.risk_score, result.warnings)

# Seed a PatternLibrary from Swarms' built-in patterns
library = PatternLibrary()
seed_library_from_swarms(library, get_builtin_swarms_patterns())

Compile organisms into deployment configs for Swarms, DeerFlow, Ralph, and Scion:

from operon_ai.convergence import organism_to_swarms, organism_to_scion
swarms_config = organism_to_swarms(organism)
scion_config = organism_to_scion(organism, runtime="docker")

Compile to LangGraph with all structural guarantees enforced natively (requires pip install operon-ai[langgraph]):

from operon_ai.convergence.langgraph_compiler import run_organism_langgraph

# Works with any organism — multi-stage pipelines included
result = run_organism_langgraph(organism, task="Review this code")
print(result.output, result.interventions, result.certificates_verified)

See examples 86–108 and the Convergence docs.

Learn More

Public docs now live at banu.be/operon. The tracked source for that docs shell lives in the repo under docs/site/.

Contributing

Issues and pull requests are welcome. Start with the pattern-first examples, then drop into the lower-level layers only when the problem actually needs them.

License

MIT
