
High-performance workflow engine for AI applications


Operonx


Operonx is a workflow engine that runs anything as a workflow — from IO-bound AI tasks (LLMs, agents, RAG) to CPU-bound workloads needing native performance. Define complex pipelines as DAGs with async execution, built-in tracing, and a dual Python/Rust backend.

Why Operonx?

  • DAG-based workflows — nodes and edges, inspired by Airflow operators
  • Dual backend — Python for flexibility, Rust for raw speed (~8x faster on pure-compute workloads)
  • Built-in tracing — Langfuse + OpenTelemetry, plus a local viewer
  • Provider agnostic — OpenAI, Azure, Gemini, Anthropic, vLLM, ONNX — swap with one line
  • Type-safe state — O(1) state access with schema validation
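
Conceptually, running a DAG of nodes means a topological traversal: each node fires once all of its upstream nodes have finished, and its output merges into shared state. A pure-Python sketch of that idea (illustrative only, not Operonx internals):

```python
from collections import deque

def topo_run(nodes, edges, state):
    """Run callables in dependency order (Kahn's algorithm).

    nodes: {name: callable(state) -> dict of state updates}
    edges: list of (upstream, downstream) pairs
    """
    indegree = {n: 0 for n in nodes}
    children = {n: [] for n in nodes}
    for up, down in edges:
        children[up].append(down)
        indegree[down] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        state.update(nodes[n](state))  # node output merges into shared state
        for child in children[n]:
            indegree[child] -= 1
            if indegree[child] == 0:
                ready.append(child)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: a DAG must be acyclic")
    return order, state

# Example: fetch -> summarize
nodes = {
    "fetch": lambda s: {"text": "raw text"},
    "summarize": lambda s: {"summary": s["text"].upper()},
}
order, state = topo_run(nodes, [("fetch", "summarize")], {})
```

The real engine additionally runs independent branches concurrently with asyncio; the sketch above only shows the ordering guarantee.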

Quick Start

pip install operonx

import asyncio
from operonx.core import Operon, GraphOp, op, START, END, PARENT

@op
def greet(name: str):
    return {"message": f"Hello, {name}!"}

async def main():
    with GraphOp(name="hello") as graph:
        step = greet(name=PARENT["name"])
        START >> step >> END

    result = await Operon(graph).run(inputs={"name": "World"})
    print(result["message"])  # Hello, World!

asyncio.run(main())
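
Edge syntax like `START >> step >> END` is typically built on operator overloading: `>>` records an edge and returns its right operand so chains keep linking left to right. A minimal illustrative sketch of the pattern (not Operonx's actual implementation):

```python
class Node:
    """Toy node that records DAG edges via the >> operator."""
    edges = []  # shared edge list, for illustration only

    def __init__(self, name):
        self.name = name

    def __rshift__(self, other):
        # `a >> b` records the edge and returns b, so
        # START >> step >> END links left to right.
        Node.edges.append((self.name, other.name))
        return other

START, step, END = Node("START"), Node("greet"), Node("END")
START >> step >> END
# Node.edges is now [("START", "greet"), ("greet", "END")]
```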

LLM Integration

pip install "operonx[standard]"

Configure resources in resources.yaml and credentials in .env, then:

import asyncio
import operonx
from operonx.core import Operon, GraphOp, START, END, PARENT
from operonx.providers import chat

async def main():
    operonx.bootstrap()  # loads ./.env + ./resources.yaml

    with GraphOp(name="chat") as graph:
        c = chat(
            resource="gpt-4o",
            template={"system": "You are a helpful assistant.", "user": "{question}"},
            question=PARENT["question"],
        )
        START >> c >> END

    result = await Operon(graph).run(inputs={"question": "What is Python?"})
    print(result["content"])

asyncio.run(main())

See Resource Setup for details on bootstrap() and resources.yaml.
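
For orientation only, a resources.yaml entry might look roughly like the sketch below; every field name here is illustrative, so check the Resource Setup docs for the actual schema:

```yaml
# Illustrative only -- see the Resource Setup docs for the real schema.
resources:
  gpt-4o:
    provider: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}   # resolved from .env by bootstrap()
  "langfuse:default":
    provider: langfuse
```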

Installation

Operonx is a single Python package with optional extras for each integration:

pip install operonx                    # Core engine, no providers
pip install "operonx[standard]"        # Recommended — OpenAI + Langfuse + OTEL + serve
pip install "operonx[anthropic]"       # Anthropic-only
pip install "operonx[onnx]"            # Local ONNX inference
pip install "operonx[serve]"           # FastAPI + uvicorn HTTP server
pip install "operonx[all]"             # All providers and tracers (excludes huggingface)
Extra        Contents
standard     OpenAI, Langfuse, OpenTelemetry, FastAPI/uvicorn
anthropic    Anthropic SDK
gemini       Google Vertex AI
bedrock      AWS Bedrock
onnx         ONNX Runtime + tokenizers
huggingface  transformers + torch (heavy, ~2.5 GB)
langfuse     Langfuse tracer
otel         OpenTelemetry tracer
serve        FastAPI + uvicorn HTTP server
all          Everything except huggingface
dev          pytest, ruff, mkdocs

Rust users:

cargo add operonx

Tracing

from operonx.core import Operon
from operonx.telemetry.tracers import LangfuseTracer

engine = Operon(graph, tracer=LangfuseTracer(resource="langfuse:default"))

Backends supported: Langfuse, OpenTelemetry. Configure via resources.yaml.
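
A tracer in this position generally receives span start/end callbacks around each node. To show the shape of that pattern, here is a hypothetical stdout tracer; the class and method names are illustrative, not the real Operonx tracer API:

```python
import time

class PrintTracer:
    """Hypothetical tracer: records one span per node and logs its duration.

    Illustrates the span-per-node pattern only; the actual Operonx
    tracer interface may differ -- see the telemetry docs.
    """

    def __init__(self):
        self.spans = []

    def start_span(self, name):
        self.spans.append({"name": name, "start": time.monotonic()})
        return len(self.spans) - 1  # span handle

    def end_span(self, handle):
        span = self.spans[handle]
        span["duration"] = time.monotonic() - span["start"]
        print(f"{span['name']}: {span['duration']:.6f}s")

tracer = PrintTracer()
h = tracer.start_span("greet")
tracer.end_span(h)
```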

Documentation

Need               Go to
Runnable examples  examples/
Architecture       docs/architecture/
User guide         docs/guide/
API reference      https://batman1m2001-cyber.github.io/Operonx/

Contributing

See CONTRIBUTING.md.

git clone https://github.com/batman1m2001-cyber/Operonx.git
cd Operonx
uv sync --all-extras
pre-commit install
uv run pytest tests/ -m "not integration"

License

Apache 2.0

