
High-performance workflow engine for AI applications

Operonx

Operonx is a workflow engine that runs anything as a workflow — from IO-bound AI tasks (LLMs, agents, RAG) to CPU-bound workloads needing native performance. Define complex pipelines as DAGs with async execution, built-in tracing, and a dual Python/Rust backend.

Why Operonx?

  • DAG-based workflows — nodes and edges, inspired by Airflow operators (see the sketch after this list)
  • Dual backend — Python for flexibility, Rust for raw speed (~8x faster on pure-compute)
  • Built-in tracing — Langfuse + OpenTelemetry, plus a local viewer
  • Provider agnostic — OpenAI, Azure, Gemini, Anthropic, vLLM, ONNX — swap with one line
  • Type-safe state — O(1) state access with schema validation
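
As a taste of the edge syntax, here is a minimal two-node chain. It uses only the API shown in Quick Start below, except that wiring summarize to fetch's output via a["text"] is an assumption modeled on PARENT[...]; see the user guide for the actual wiring syntax.

from operonx.core import GraphOp, op, START, END, PARENT

@op
def fetch(topic: str):
    # Stand-in for an IO-bound step (an API call, a retriever, ...).
    return {"text": f"Notes about {topic}"}

@op
def summarize(text: str):
    # Stand-in for an LLM summarization node.
    return {"summary": text.upper()}

with GraphOp(name="pipeline") as graph:
    a = fetch(topic=PARENT["topic"])
    b = summarize(text=a["text"])  # assumption: index a node to reference its output
    START >> a >> b >> END

Run it with Operon(graph).run(...) exactly as in the Quick Start example.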

Quick Start

pip install operonx

import asyncio
from operonx.core import Operon, GraphOp, op, START, END, PARENT

@op
def greet(name: str):
    return {"message": f"Hello, {name}!"}

async def main():
    with GraphOp(name="hello") as graph:
        step = greet(name=PARENT["name"])
        START >> step >> END

    result = await Operon(graph).run(inputs={"name": "World"})
    print(result["message"])  # Hello, World!

asyncio.run(main())

LLM Integration

pip install "operonx[standard]"

Configure resources in resources.yaml and credentials in .env, then:

import asyncio
import operonx
from operonx.core import Operon, GraphOp, START, END, PARENT
from operonx.providers import chat

async def main():
    operonx.bootstrap()  # loads ./.env + ./resources.yaml

    with GraphOp(name="chat") as graph:
        c = chat(
            resource="gpt-4o",
            template={"system": "You are a helpful assistant.", "user": "{question}"},
            question=PARENT["question"],
        )
        START >> c >> END

    result = await Operon(graph).run(inputs={"question": "What is Python?"})
    print(result["content"])

asyncio.run(main())

See Resource Setup for details on bootstrap() and resources.yaml.
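
For orientation only, a hypothetical sketch of what such a setup might look like; the keys and layout below are assumptions, not the documented schema:

# resources.yaml (hypothetical layout; check Resource Setup for the real schema)
resources:
  gpt-4o:
    provider: openai
    model: gpt-4o

# .env
OPENAI_API_KEY=...

Swapping providers then amounts to pointing resource= at a different entry, which is presumably what "swap with one line" refers to.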

Installation

Operonx is a single Python package with optional extras for each integration:

pip install operonx                    # Core engine, no providers
pip install "operonx[standard]"        # Recommended — OpenAI + Langfuse + OTEL + serve
pip install "operonx[anthropic]"       # Anthropic-only
pip install "operonx[onnx]"            # Local ONNX inference
pip install "operonx[serve]"           # FastAPI + uvicorn HTTP server
pip install "operonx[all]"             # All providers and tracers (excludes huggingface)
Extra         Contents
------------  --------------------------------------------------
standard      OpenAI, Langfuse, OpenTelemetry, FastAPI/uvicorn
anthropic     Anthropic SDK
gemini        Google Vertex AI
bedrock       AWS Bedrock
onnx          ONNX Runtime + tokenizers
huggingface   transformers + torch (heavy, ~2.5 GB)
langfuse      Langfuse tracer
otel          OpenTelemetry tracer
serve         FastAPI + uvicorn HTTP server
all           Everything except huggingface
dev           pytest, ruff, mkdocs
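
Extras can be combined in a single install using pip's comma syntax, e.g. to get the Anthropic provider plus Langfuse tracing:

pip install "operonx[anthropic,langfuse]"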

Rust users:

cargo add operonx

Tracing

from operonx.core import Operon
from operonx.telemetry.tracers import LangfuseTracer

# graph built as in Quick Start above
engine = Operon(graph, tracer=LangfuseTracer(resource="langfuse:default"))

Backends supported: Langfuse, OpenTelemetry. Configure via resources.yaml.

Documentation

Need                 Go to
Runnable examples    examples/
Architecture         docs/architecture/
User guide           docs/guide/
API reference        https://batman1m2001-cyber.github.io/Operonx/

Contributing

See CONTRIBUTING.md.

git clone https://github.com/batman1m2001-cyber/Operonx.git
cd Operonx
uv sync --all-extras
pre-commit install
uv run pytest tests/ -m "not integration"
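
The -m "not integration" filter runs only the offline suite; the integration-marked tests presumably call live providers and need credentials in .env, in which case uv run pytest tests/ -m integration would exercise them.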

License

Apache 2.0
