
Multi-Agent Orchestration Framework for Python

Project description


Multi-agent orchestration framework for Python — turn any agent setup into a CLI or server.



Compose multi-agent AI systems with async event streaming, agent hierarchies, and built-in support for MCP and A2A protocols.

Orx CLI

Turn any orx.yaml agent setup into an interactive terminal agent. Ships with a coding agent out of the box — or compose your own.

Looking for a full-featured coding agent? Check out orxhestra-code — an enhanced coding agent built on orxhestra with permissions, multi-file editing, and project-aware context.

pip install orxhestra[cli,openai]
orx
+-- orx - terminal coding agent ------------------------------------+
|  model: gpt-5.4   workspace: ~/my-project   /help for commands    |
+-------------------------------------------------------------------+

orx> add error handling to the API routes

  > read_file(src/api/routes.py)
  > grep(pattern="raise", path=src/api/)
  > write_todos(3 tasks)

  Tasks
  * Add try/except to all route handlers  [in progress]
  - Add custom error response model
  - Write tests for error cases

  > edit_file(src/api/routes.py)
  > shell_exec(pytest tests/test_api.py)
  4 passed

  Done - added structured error handling to all 4 route handlers
  with a custom ErrorResponse model. All tests pass.

Features

  • 29 LLM providers — OpenAI, Azure OpenAI, Anthropic, Google, Mistral, Cohere, Groq, DeepSeek, Ollama, and 20 more via --model
  • Streaming — real-time token rendering with Markdown formatting
  • Tool approval — prompts before destructive operations (write, edit, shell)
  • Task planning — structured todo lists visible in the terminal
  • Sub-agent delegation — spawn isolated agents for complex subtasks
  • Auto-memory — persistent per-project memories across sessions (4 types: user, feedback, project, reference)
  • Dark/light theme — auto-detects terminal, toggle with /theme
  • Background tasks — spawn and monitor async sub-agent tasks
  • Smart file reading — offset/limit pagination with line numbers, 256KB size guard
  • Local context injection — auto-detects language, git state, package manager, project tree
  • Context summarization — auto-compacts long conversations, /compact command
  • Orx YAML — run any orx.yaml agent team: orx my-agents.yaml

Usage

orx                               # interactive REPL (default model)
orx --model claude-sonnet-4-6     # use a specific model
orx -c "fix the failing tests"    # single-shot command
orx my-agents.yaml                # run a custom orx file
orx --auto-approve                # skip approval prompts
orx orx.yaml --serve -p 9000      # start as A2A server
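
The single-shot mode also lends itself to scripting. Below is a minimal sketch of driving it from Python with subprocess, combining flags shown above (combining them, and treating a non-zero exit code as failure, are assumptions about the CLI rather than documented behavior).

import subprocess

# Run orx once, non-interactively, with the flags documented above.
# --auto-approve skips the tool-approval prompts, which is fine for CI
# but removes the safety net on a machine you care about.
result = subprocess.run(
    ["orx", "-c", "fix the failing tests", "--model", "claude-sonnet-4-6", "--auto-approve"],
    capture_output=True,
    text=True,
)

print(result.stdout)
if result.returncode != 0:  # assumption: non-zero exit signals failure
    raise SystemExit(result.returncode)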

Commands

Command Description
/model <name> Switch model mid-session
/clear Reset conversation
/compact Summarize old messages to free context
/todos Show current task list
/memory List saved memories
/theme Switch dark/light theme
/session Session info (includes active signer DID when identity is on)
/undo Remove last turn
/retry Re-run last message
/copy Copy last response
/help Show all commands
/exit Exit

orx identity — Ed25519 signing

Opt-in identity for every agent the CLI spawns. Events get signed with Ed25519, chained per branch, and (optionally) audited by an AttestationProvider.

orx identity init                           # generate a keypair at ~/.orx/identity.key
orx identity show                           # print the DID + public-key multibase
orx identity did-web example.com agents     # render a did.json for hosting

orx --identity ~/.orx/identity.key          # attach identity to every agent
export ORX_IDENTITY=~/.orx/identity.key     # or via env

See Composer → Identity, trust, and attestation for the YAML equivalents.


Quickstart (SDK)

pip install orxhestra
# or
uv add orxhestra

import asyncio

from orxhestra import LlmAgent, Runner, InMemorySessionService

agent = LlmAgent(
    name="assistant",
    model="gpt-5.4",
    instructions="You are a helpful assistant.",
)

runner = Runner(agent=agent, session_service=InMemorySessionService())


async def main():
    response = await runner.run(user_id="user1", session_id="s1", new_message="Hello!")
    for event in response:
        print(event.content)


asyncio.run(main())

Tip: For persistent database sessions, install the database extra: pip install orxhestra[database]
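
To see roughly how that slots into the quickstart, here is a hypothetical sketch: the class name DatabaseSessionService, its import path, and the db_url argument are all assumptions for illustration, not the package's documented API.

from orxhestra import LlmAgent, Runner
from orxhestra.sessions import DatabaseSessionService  # hypothetical import path

agent = LlmAgent(
    name="assistant",
    model="gpt-5.4",
    instructions="You are a helpful assistant.",
)

# Hypothetical: a database-backed session service so conversations
# survive process restarts; the class name and db_url argument are assumed.
runner = Runner(
    agent=agent,
    session_service=DatabaseSessionService(db_url="sqlite:///sessions.db"),
)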

Tip: For full documentation, guides, and API reference, visit docs.orxhestra.com.

Features

  • Agent ensemble - LLM, ReAct, Sequential, Parallel, and Loop agents
  • 29 LLM providers - OpenAI, Azure OpenAI, Anthropic, Google, Mistral, Cohere, Groq, DeepSeek, Ollama, and 20 more
  • Event streaming - Async event-driven architecture with real-time streaming
  • Composer - Declarative YAML with four pluggable registries: custom agent types, LLM providers, built-in tools, and tool-type resolvers
  • Tools - Function tools, filesystem tools, agent-as-tool, shell, transfer routing, long-running tools, and register_tool_resolver for whole new tool kinds
  • Planners - Choreograph task execution with PlanReAct and TaskPlanner strategies
  • Skills - Reusable, composable agent repertoires (Agent Skills Protocol)
  • MCP - Full-spec Model Context Protocol client (tools, resources, prompts, sampling, logging, progress, elicitation) plus adapters that turn MCP prompts into LangChain messages or tools
  • A2A - Full v1.0 server + client with Ed25519 message signing and verification_method on agent cards
  • Identity / Trust / Attestation (opt-in) - Sign every event, verify peers via DID, hash-chained audit log with a pluggable AttestationProvider — all wireable from a YAML block or a single orx --identity flag
  • Auto-memory - Persistent memories with save_memory tool (user, feedback, project, reference)
  • Background tasks - Async sub-agent task lifecycle with spawn and monitor
  • Deprecation decorators - @deprecated and @deprecated_param for clean API evolution
  • Tracing - Built-in support for Langfuse, LangSmith, and custom callbacks

Agents at a glance

Agent Description
LlmAgent Chat model agent with tools, instructions, and structured output
ReActAgent Reasoning + acting loop with automatic tool use
SequentialAgent Runs sub-agents in order
ParallelAgent Runs sub-agents concurrently
LoopAgent Repeats a sub-agent until exit condition
A2AAgent Connects to remote agents via A2A protocol
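
As a rough Python counterpart to the YAML pipeline in the next section, the sketch below nests two of the agent types from the table. It assumes SequentialAgent is importable from the top-level package and takes its children via an agents= keyword, mirroring the YAML agents: list; both are assumptions rather than confirmed API.

from orxhestra import LlmAgent, Runner, InMemorySessionService, SequentialAgent

# Assumption: SequentialAgent is exported from the top-level package and
# accepts its children via agents=, mirroring the YAML `agents:` list.
planner = LlmAgent(
    name="planner",
    model="gpt-5.4",
    instructions="Output a numbered list of concrete implementation steps.",
)
coder = LlmAgent(
    name="coder",
    model="gpt-5.4",
    instructions="Follow the plan from the previous step exactly.",
)
coordinator = SequentialAgent(name="coordinator", agents=[planner, coder])

runner = Runner(agent=coordinator, session_service=InMemorySessionService())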

Composer

Define entire agent orchestras in a single YAML file — no Python wiring needed. Compose LLM agents, loops, pipelines, tools, and review cycles declaratively. The example below builds a coding agent that plans, implements with filesystem + shell access, and self-reviews in a loop. Identity signing + local audit are opt-in — remove the last two blocks to turn them off.

defaults:
  model:
    provider: openai
    name: gpt-5.4

tools:
  exit:
    builtin: "exit_loop"
  filesystem:
    builtin: "filesystem"
  shell:
    builtin: "shell"

agents:
  planner:
    type: llm
    description: "Plans the implementation steps for the coder agent."
    instructions: |
      Output a numbered list of concrete steps the coder
      should execute. Each step must be an actionable file
      operation or shell command.

  coder:
    type: llm
    description: "Implements code changes with filesystem and shell access."
    instructions: |
      Follow the plan from the previous step exactly.
      Use filesystem tools to create files and shell to
      run commands. Never ask the user to do anything.
    tools:
      - filesystem
      - shell

  reviewer:
    type: llm
    description: "Reviews changes and approves or requests fixes."
    instructions: |
      Check files exist and look correct. If done, call
      exit_loop. Otherwise describe what needs fixing.
    tools:
      - exit

  dev_loop:
    type: loop
    agents: [coder, reviewer]
    max_iterations: 10

  coordinator:
    type: sequential
    agents: [planner, dev_loop]

main_agent: coordinator

runner:
  app_name: coding-agent
  session_service: memory

# Optional: sign every event + write a hash-chained audit log.
identity:
  signing_key: ./keys/agent.key         # orx identity init --path ./keys/agent.key
  did_method: key
attestation:
  provider: local
  path: ./audit

Run it as an interactive CLI or expose it as an A2A server:

orx orx.yaml                    # interactive terminal agent
orx orx.yaml --serve -p 9000    # A2A server on port 9000
# test the server
curl -X POST http://localhost:9000/ \
  -H "Content-Type: application/json" \
  -d '{
    "jsonrpc": "2.0", "id": "1",
    "method": "message/send",
    "params": {
      "message": {
        "role": "user",
        "parts": [{"text": "Hello!", "mediaType": "text/plain"}]
      }
    }
  }'
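
The same message/send call from Python, for when curl is inconvenient; this is a minimal sketch using the requests package that simply mirrors the JSON-RPC payload from the curl example above.

import requests

# Same JSON-RPC payload as the curl example, sent to the local A2A server.
payload = {
    "jsonrpc": "2.0",
    "id": "1",
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "parts": [{"text": "Hello!", "mediaType": "text/plain"}],
        }
    },
}

resp = requests.post("http://localhost:9000/", json=payload, timeout=60)
resp.raise_for_status()
print(resp.json())  # response shape depends on the agent behind the server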

Docker

docker run -e OPENAI_API_KEY=$OPENAI_API_KEY \
  -v ./orx.yaml:/app/orx.yaml \
  nicolaimtlassen/orxhestra

Documentation

  • Getting Started — Install and run your first agent (YAML or Python)
  • Composer overview — YAML-based agent composition (recommended starting point)
  • Composer schema reference — Field-by-field reference for every orx.yaml block
  • Extending the composer — Register custom agent types, LLM providers, built-in tools, and tool resolvers
  • Agents — Agent types and configuration
  • Tools — Built-in and custom tools
  • Integrations — MCP and A2A setup
  • Skills — Code-level CLI skill references (agent-tools, callbacks, planners, streaming, and more)
  • orxhestra-code — Enhanced coding agent with permissions, multi-file editing, and project context

Acknowledgments

This project is built on the shoulders of several outstanding open-source projects and research efforts.

Special thanks to the open-source AI community for pushing the boundaries of what's possible with agent frameworks.


Download files


Source Distribution

orxhestra-0.1.4.tar.gz (296.3 kB)


Built Distribution


orxhestra-0.1.4-py3-none-any.whl (310.1 kB)


File details

Details for the file orxhestra-0.1.4.tar.gz.

File metadata

  • Download URL: orxhestra-0.1.4.tar.gz
  • Upload date:
  • Size: 296.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for orxhestra-0.1.4.tar.gz
Algorithm Hash digest
SHA256 c436dc3bfc3321a36b62d8d6ec7b3abdbe74fbc2e187117e59e5cffe0ef3d94b
MD5 e684c9b863f3c60aae3f2f9e5574063c
BLAKE2b-256 f908fd093d687bfd67d4bf5c1d7a2f09c7d1f4a43ed40e7b99d1dce1008d5fd6


Provenance

The following attestation bundles were made for orxhestra-0.1.4.tar.gz:

Publisher: publish.yml on NicolaiLassen/orxhestra

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file orxhestra-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: orxhestra-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 310.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for orxhestra-0.1.4-py3-none-any.whl
Algorithm Hash digest
SHA256 96fb7f49e2c787935a0e0d6f4ad89eff68fbf31a01b9be63a22badfb7263db71
MD5 5e1fb0deffd20057bb944cd1329aed74
BLAKE2b-256 864ae01b89c42cf0765b2b50b051cb9555bc1b015bdb7c10915dc8a083def486


Provenance

The following attestation bundles were made for orxhestra-0.1.4-py3-none-any.whl:

Publisher: publish.yml on NicolaiLassen/orxhestra

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
