Binex

Debuggable runtime for AI agent pipelines
Orchestrate multi-agent workflows. Trace every step. Replay and diff runs.


Installation · Quickstart · Demo · Documentation · Issues



What is Binex?

Binex is a debuggable runtime for AI agent workflows.

It executes DAG-based pipelines of agents (LLM, local, remote A2A, or human), tracks artifacts between steps, and lets you replay and inspect runs.

Key features:

  • DAG-based execution — define agent pipelines as readable YAML, not tangled code
  • Artifact lineage — every input and output tracked across the entire pipeline
  • Replayable workflows — re-run with agent swaps, compare results
  • Full tracing — every node call, every artifact, every millisecond recorded
  • Post-mortem debugging — inspect any run after the fact with rich reports
  • Run diffing — compare two executions side-by-side to spot regressions
  • Human-in-the-loop — approval gates and free-text input with conditional branching
  • Budget & cost tracking — per-node cost records, budget enforcement (stop/warn), CLI cost inspection
  • CLI-first DX — everything accessible from the terminal
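
Run diffing, for example, boils down to comparing per-node outputs between two executions. A minimal Python sketch of the idea (illustrative only; not Binex's actual diff logic or output format):

```python
def diff_runs(run_a, run_b):
    """Compare per-node outputs of two runs.

    run_a, run_b: mapping of node name -> output value.
    Returns {node: (old, new)} for every node whose output changed,
    appeared, or disappeared. A sketch of the idea behind `binex diff`.
    """
    changed = {}
    for node in run_a.keys() | run_b.keys():
        a, b = run_a.get(node), run_b.get(node)
        if a != b:
            changed[node] = (a, b)
    return changed

baseline = {"planner": "3 questions", "summarizer": "v1 summary"}
rerun = {"planner": "3 questions", "summarizer": "v2 summary"}
print(diff_runs(baseline, rerun))  # {'summarizer': ('v1 summary', 'v2 summary')}
```

Spotting a regression then reduces to checking which nodes appear in the diff after an agent swap.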

(back to top)


Installation

Install from PyPI:

pip install binex

For rich colored output:

pip install "binex[rich]"

(quoted so shells like zsh don't expand the brackets)

(back to top)


Quickstart

Run the built-in demo (no config needed):

binex hello

Create a workflow file workflow.yaml:

name: hello
nodes:
  greet:
    agent: "local://echo"
    inputs:
      msg: "hello world"
    outputs: [response]

  respond:
    agent: "local://echo"
    inputs:
      greeting: "${greet.response}"
    depends_on: [greet]
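
The `${greet.response}` placeholder wires the first node's output into the second node's input. Illustratively, such references can be resolved with a regex substitution over the recorded outputs (a sketch only; Binex's real resolver lives in workflow_spec and likely handles more cases):

```python
import re

def resolve_refs(value, outputs):
    """Substitute ${node.output} references using recorded node outputs.

    Illustrative sketch of variable resolution, not Binex's implementation.
    outputs: mapping of node name -> {output name: value}.
    """
    def replace(match):
        node, output = match.group(1), match.group(2)
        return str(outputs[node][output])
    return re.sub(r"\$\{(\w+)\.(\w+)\}", replace, value)

# Once greet has run, respond's input resolves against its stored output:
outputs = {"greet": {"response": "hello world"}}
print(resolve_refs("greeting: ${greet.response}", outputs))  # greeting: hello world
```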

Run it:

binex run workflow.yaml

Inspect the run:

binex debug latest
binex trace latest

See it in action:
$ binex hello

Running built-in hello-world workflow...

  [1/2] greeter ...
  [greeter] -> result:
Hello from Binex!

  [2/2] responder ...
  [responder] -> result:
{"greeter": "Hello from Binex!"}

Run completed (2/2 nodes)
Run ID: run_d71c9a50

Next steps:
  binex debug run_d71c9a50    — inspect the run
  binex init                  — create your own project
  binex run examples/simple.yaml — try a workflow file

(back to top)


Demo

A multi-provider research pipeline: Ollama runs locally for planning and summarization, OpenRouter calls cloud models for parallel research — all in one YAML file.

Requirements to run this demo:

  • Ollama installed and running locally
  • Model pulled: ollama pull gemma3:4b
  • Free OpenRouter API key (set OPENROUTER_API_KEY in .env)
  • Binex installed: pip install binex

# examples/multi-provider-demo.yaml
name: multi-provider-research

nodes:
  user_input:
    agent: "human://input"

  planner:
    agent: "llm://ollama/gemma3:4b"
    system_prompt: "Create a structured research plan with 3 subtopics..."
    inputs: { topic: "${user_input.result}" }
    depends_on: [user_input]

  researcher1:
    agent: "llm://openrouter/nvidia/nemotron-3-super-120b-a12b:free"
    inputs: { plan: "${planner.result}" }
    depends_on: [planner]

  researcher2:
    agent: "llm://openrouter/nvidia/nemotron-3-super-120b-a12b:free"
    inputs: { plan: "${planner.result}" }
    depends_on: [planner]

  summarizer:
    agent: "llm://ollama/gemma3:4b"
    inputs: { research1: "${researcher1.result}", research2: "${researcher2.result}" }
    depends_on: [researcher1, researcher2]

(Demo DAG diagram)

(back to top)


Start Wizard

Create a full research pipeline from a template in seconds:

binex start

(back to top)


Explore Dashboard

Interactive run inspector — trace, graph, cost, artifacts, node detail, debug — all in one place:

binex explore

(back to top)


How It Works

Define a workflow in YAML. Binex builds a DAG, schedules nodes respecting dependencies, dispatches each to the right agent adapter, and records everything.

name: research-pipeline
description: "Fan-out research with human approval"

nodes:
  planner:
    agent: "llm://openai/gpt-4"
    system_prompt: "Break this topic into 3 research questions"
    inputs:
      topic: "${user.topic}"
    outputs: [questions]

  researcher_1:
    agent: "llm://anthropic/claude-sonnet-4-20250514"
    inputs: { question: "${planner.questions}" }
    outputs: [findings]
    depends_on: [planner]

  researcher_2:
    agent: "a2a://localhost:8001"
    inputs: { question: "${planner.questions}" }
    outputs: [findings]
    depends_on: [planner]

  reviewer:
    agent: "human://approve"
    inputs:
      draft: "${researcher_1.findings}"
    outputs: [decision]
    depends_on: [researcher_1, researcher_2]

  summarizer:
    agent: "llm://openai/gpt-4"
    inputs:
      research: "${researcher_1.findings}"
    outputs: [summary]
    depends_on: [reviewer]
    when: "${reviewer.decision} == approved"

(Workflow DAG diagram)
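The scheduler's core job, running each node only after its depends_on entries complete, is classic topological ordering. A self-contained sketch using Kahn's algorithm (illustrative; Binex's graph module may implement this differently):

```python
from collections import deque

def schedule(nodes):
    """Return an execution order where every node follows its depends_on
    entries (Kahn's algorithm). A sketch of the scheduling idea, not
    Binex's actual scheduler.

    nodes: mapping of node name -> list of dependency names.
    """
    indegree = {name: len(deps) for name, deps in nodes.items()}
    dependents = {name: [] for name in nodes}
    for name, deps in nodes.items():
        for dep in deps:
            dependents[dep].append(name)
    ready = deque(name for name, deg in indegree.items() if deg == 0)
    order = []
    while ready:
        name = ready.popleft()
        order.append(name)
        for child in dependents[name]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(nodes):
        raise ValueError("cycle detected in workflow DAG")
    return order

print(schedule({
    "planner": [],
    "researcher_1": ["planner"],
    "researcher_2": ["planner"],
    "reviewer": ["researcher_1", "researcher_2"],
}))  # ['planner', 'researcher_1', 'researcher_2', 'reviewer']
```

Nodes with indegree zero at the same time (here researcher_1 and researcher_2) have no mutual dependency, which is exactly what enables the fan-out parallelism shown in the YAML above.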

(back to top)


Architecture

(Architecture diagram)

(back to top)


Features

Agent Adapters

Prefix           Adapter               Description
---------------  --------------------  ------------------------------------------
local://         LocalPythonAdapter    In-process Python callable
llm://           LLMAdapter            LLM completion via LiteLLM (40+ providers)
a2a://           A2AAgentAdapter       Remote agent via A2A protocol
human://input    HumanInputAdapter     Terminal prompt for free-text input
human://approve  HumanApprovalAdapter  Approval gate with conditional branching
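
These prefixes make the agent URI self-describing. A toy dispatcher showing how a URI like llm://ollama/gemma3:4b could be split into an adapter prefix and a target (illustrative only; Binex's routing may differ):

```python
from urllib.parse import urlparse

def route(agent_uri):
    """Split an agent URI into (adapter prefix, target).

    Sketch of prefix-based dispatch, not Binex's dispatcher.
    The scheme selects the adapter; the remainder names the agent.
    """
    parts = urlparse(agent_uri)
    target = parts.netloc + parts.path
    return parts.scheme, target

print(route("local://echo"))            # ('local', 'echo')
print(route("llm://ollama/gemma3:4b"))  # ('llm', 'ollama/gemma3:4b')
print(route("a2a://localhost:8001"))    # ('a2a', 'localhost:8001')
```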

CLI Commands

Command                           Description
--------------------------------  ---------------------------------------------------------
binex run <workflow.yaml>         Execute a workflow
binex debug <run-id|latest>       Post-mortem inspection (--json, --errors, --node, --rich)
binex trace <run-id>              Execution timeline, node details, or DAG graph
binex replay <run-id>             Re-run with optional agent swaps
binex diff <run1> <run2>          Compare two runs side-by-side
binex artifacts list <run-id>     List artifacts with lineage tracking
binex validate <workflow.yaml>    Validate YAML before execution
binex scaffold workflow "A -> B"  Generate workflow from DSL shorthand
binex init                        Interactive project setup
binex dev up                      Start Docker dev stack
binex doctor                      Check system health
binex cost show <run-id>          Cost breakdown per node (--json)
binex cost history <run-id>       Chronological cost events (--json)
binex explore                     Interactive browser for runs and artifacts
binex hello                       Zero-config demo
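
The scaffold command's "A -> B" shorthand suggests a linear-chain DSL. One plausible expansion, shown purely to illustrate the idea (the real DSL and its output format may support more, such as fan-out or agent labels):

```python
def parse_dsl(expr):
    """Expand an 'A -> B -> C' shorthand into a node/dependency map.

    Hypothetical sketch of what scaffold-style shorthand could mean;
    not the actual output of `binex scaffold workflow`.
    """
    names = [part.strip() for part in expr.split("->")]
    nodes = {}
    prev = None
    for name in names:
        nodes[name] = {"depends_on": [prev] if prev else []}
        prev = name
    return nodes

print(parse_dsl("A -> B"))
# {'A': {'depends_on': []}, 'B': {'depends_on': ['A']}}
```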

LLM Providers

Out-of-the-box support for 9 providers via LiteLLM:

OpenAI · Anthropic · Google Gemini · Ollama · OpenRouter · Groq · Mistral · DeepSeek · Together AI

(back to top)


Examples

Example workflows are available in the examples/ directory:

Example                   What it demonstrates
------------------------  ----------------------------------------
simple.yaml               Minimal two-node pipeline
diamond.yaml              Diamond dependency pattern
fan-out-fan-in.yaml       Parallel execution with aggregation
human-in-the-loop.yaml    Approval gates and conditional branching
multi-provider-demo.yaml  Multiple LLM providers in one workflow
a2a-multi-agent.yaml      Remote agents via A2A protocol
conditional-routing.yaml  Branch based on node output
map-reduce.yaml           MapReduce-style aggregation

(back to top)


Documentation

Full documentation is available at alexli18.github.io/binex.

(back to top)


Project Structure

src/binex/
├── adapters/        # Agent execution backends (local, LLM, A2A, human)
├── agents/          # Built-in agent implementations
├── cli/             # Click CLI commands
├── graph/           # DAG construction + topological scheduling
├── models/          # Pydantic v2 domain models
├── registry/        # FastAPI agent registry service
├── runtime/         # Orchestrator, dispatcher, lifecycle
├── stores/          # SQLite execution + filesystem artifact persistence
├── trace/           # Debug reports, lineage, timeline, diffing
├── workflow_spec/   # YAML loader + validator + variable resolution
└── tools.py         # Tool calling support (@tool decorator)
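
tools.py provides a @tool decorator for tool-calling support. A minimal sketch of what such a registration decorator typically looks like (the actual Binex API, registry, and metadata may differ):

```python
import inspect

# Hypothetical module-level registry; Binex's tools.py may store this differently.
TOOL_REGISTRY = {}

def tool(fn):
    """Register a callable as an agent tool, capturing its signature and
    docstring so an LLM can be told how to invoke it. Illustrative only."""
    TOOL_REGISTRY[fn.__name__] = {
        "fn": fn,
        "signature": str(inspect.signature(fn)),
        "doc": (fn.__doc__ or "").strip(),
    }
    return fn

@tool
def word_count(text: str) -> int:
    """Count whitespace-separated words."""
    return len(text.split())

print(TOOL_REGISTRY["word_count"]["signature"])  # (text: str) -> int
print(word_count("hello agent world"))           # 3
```

The decorator returns the function unchanged, so registered tools remain ordinary callables in the rest of the pipeline.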

(back to top)


Roadmap

See ROADMAP.md for upcoming features.

(back to top)


Contributing

Contributions are welcome! See CONTRIBUTING.md for development setup and guidelines.

(back to top)


License

Distributed under the MIT License. See LICENSE for more information.

(back to top)


Built with focus on debuggability, because AI agents shouldn't be black boxes.

Download files

Download the file for your platform.

Source Distribution

binex-0.2.0.tar.gz (14.7 MB)

Built Distribution

binex-0.2.0-py3-none-any.whl (210.5 kB)

File details

Details for the file binex-0.2.0.tar.gz.

File metadata

  • Download URL: binex-0.2.0.tar.gz
  • Upload date:
  • Size: 14.7 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.2

File hashes

Hashes for binex-0.2.0.tar.gz
Algorithm    Hash digest
-----------  ----------------------------------------------------------------
SHA256       af5db3fde1ab31a2dabe0ca795edb7b62eb480e2f923fda14774d44a913057d5
MD5          6564e064931f1b45907ff35f8e42bb3e
BLAKE2b-256  32cdd2bf71fb8423fed956b0866df3a2222f2b951bbd27cd64b7ce0ee110c646


File details

Details for the file binex-0.2.0-py3-none-any.whl.

File metadata

  • Download URL: binex-0.2.0-py3-none-any.whl
  • Upload date:
  • Size: 210.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.2

File hashes

Hashes for binex-0.2.0-py3-none-any.whl
Algorithm    Hash digest
-----------  ----------------------------------------------------------------
SHA256       12cfe990fda14e536f9529faf67d874233b780c8ff6e0e4533b1a4c32a4bd929
MD5          cd9dadf0362823e1dc83454823e2dbf7
BLAKE2b-256  7f2c715c477f4b3ecb2d77b46b2a16c2ee5ec09809269faf9b8792158fdbc0d6

