
High-performance workflow engine for AI applications


Operonx


Operonx is a workflow engine that runs anything as a workflow — from IO-bound AI tasks (LLMs, agents, RAG) to CPU-bound workloads needing native performance. Define complex pipelines as DAGs with async execution, built-in tracing, and a dual Python/Rust backend.

Why Operonx?

  • DAG-based workflows — nodes and edges, inspired by Airflow operators
  • Dual backend — Python for flexibility, Rust for raw speed (~8x faster on pure-compute workloads)
  • Built-in tracing — Langfuse + OpenTelemetry, plus a local viewer
  • Provider agnostic — OpenAI, Azure, Gemini, Anthropic, vLLM, ONNX — swap with one line
  • Type-safe state — O(1) state access with schema validation
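
The DAG model above is the core idea: each node runs only once all of its parents have produced output. As a rough illustration of what that means (plain Python using the standard library's graphlib, not Operonx's actual engine), executing a DAG reduces to a topological walk over the dependency graph:

```python
from graphlib import TopologicalSorter

# Edges point from a node to the nodes it depends on (graphlib's convention).
graph = {
    "fetch": set(),               # no dependencies, runs first
    "parse": {"fetch"},           # runs after fetch
    "score": {"fetch"},           # runs after fetch, independent of parse
    "merge": {"parse", "score"},  # joins both branches, runs last
}

order = list(TopologicalSorter(graph).static_order())
print(order)  # e.g. ['fetch', 'parse', 'score', 'merge']
```

Independent nodes ("parse" and "score" here) have no ordering constraint between them, which is exactly what lets an engine run them concurrently.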

Quick Start

pip install operonx

import asyncio
from operonx.core import Operon, GraphOp, op, START, END, PARENT

@op
def greet(name: str):
    return {"message": f"Hello, {name}!"}

async def main():
    with GraphOp(name="hello") as graph:
        step = greet(name=PARENT["name"])
        START >> step >> END

    result = await Operon(graph).run(inputs={"name": "World"})
    print(result["message"])  # Hello, World!

asyncio.run(main())
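
The `START >> step >> END` line is what builds the graph's edges. How Operonx does this internally isn't shown here, but the pattern is standard operator overloading; a toy sketch of the idea, with a hypothetical `Node` class that is not the library's implementation:

```python
class Node:
    """Toy graph node: `a >> b` records an edge on `a` and returns `b`."""
    def __init__(self, name):
        self.name = name
        self.edges = []  # outgoing edges recorded on the source node

    def __rshift__(self, other):
        self.edges.append((self.name, other.name))
        return other  # returning the right operand lets `a >> b >> c` chain

START, step, END = Node("START"), Node("greet"), Node("END")
START >> step >> END
print(START.edges, step.edges)  # [('START', 'greet')] [('greet', 'END')]
```

Because `__rshift__` returns its right operand, a single chained expression leaves one edge recorded per arrow, which is all a DAG builder needs to collect.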

LLM Integration

pip install "operonx[standard]"

Configure resources in resources.yaml and credentials in .env, then:

import asyncio
import operonx
from operonx.core import Operon, GraphOp, START, END, PARENT
from operonx.providers import chat

async def main():
    operonx.bootstrap()  # loads ./.env + ./resources.yaml

    with GraphOp(name="chat") as graph:
        c = chat(
            resource="gpt-4o",
            template={"system": "You are a helpful assistant.", "user": "{question}"},
            question=PARENT["question"],
        )
        START >> c >> END

    result = await Operon(graph).run(inputs={"question": "What is Python?"})
    print(result["content"])

asyncio.run(main())

See Resource Setup for details on bootstrap() and resources.yaml.
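
As the snippet above notes, bootstrap() loads ./.env and ./resources.yaml; the exact schema is covered in Resource Setup. Conceptually, the .env half is the familiar KEY=VALUE environment-loading pattern. A minimal standalone sketch of that idea (hypothetical variable names, plain Python rather than Operonx's loader):

```python
import os
import pathlib
import tempfile

# Hypothetical .env contents; real keys depend on your providers.
env_text = "OPERONX_DEMO_KEY=abc123\n# comments and blank lines are skipped\n"

with tempfile.TemporaryDirectory() as tmp:
    env_file = pathlib.Path(tmp) / ".env"
    env_file.write_text(env_text)

    # Minimal KEY=VALUE parsing -- the same idea python-dotenv implements.
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

print(os.environ["OPERONX_DEMO_KEY"])  # abc123
```

Using setdefault means values already present in the process environment win over the file, which is the usual precedence for .env loaders.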

Installation

Operonx is a single Python package with optional extras for each integration:

pip install operonx                    # Core engine, no providers
pip install "operonx[standard]"        # Recommended — OpenAI + Langfuse + OTEL + serve
pip install "operonx[anthropic]"       # Anthropic-only
pip install "operonx[onnx]"            # Local ONNX inference
pip install "operonx[serve]"           # FastAPI + uvicorn HTTP server
pip install "operonx[all]"             # All providers and tracers (excludes huggingface)
Extra         Contents
standard      OpenAI, Langfuse, OpenTelemetry, FastAPI/uvicorn
anthropic     Anthropic SDK
gemini        Google Vertex AI
bedrock       AWS Bedrock
onnx          ONNX Runtime + tokenizers
huggingface   transformers + torch (heavy, ~2.5 GB)
langfuse      Langfuse tracer
otel          OpenTelemetry tracer
serve         FastAPI + uvicorn HTTP server
all           Everything except huggingface
dev           pytest, ruff, mkdocs

Rust users:

cargo add operonx

Tracing

from operonx.core import Operon
from operonx.telemetry.tracers import LangfuseTracer

engine = Operon(graph, tracer=LangfuseTracer(resource="langfuse:default"))

Supported tracing backends: Langfuse and OpenTelemetry, configured via resources.yaml.
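
A tracer's job is to wrap each node execution in a timed span and forward it to a backend. Operonx's actual tracer interface isn't shown here; the following is only a sketch of the general pluggable-tracer pattern, with a hypothetical ConsoleTracer class:

```python
import time
from contextlib import contextmanager


class ConsoleTracer:
    """Hypothetical tracer that records node spans and logs them to stdout.
    Illustrates the pluggable-tracer pattern, not Operonx's real interface."""

    def __init__(self):
        self.spans = []  # (node_name, elapsed_seconds) pairs

    @contextmanager
    def span(self, node_name):
        start = time.perf_counter()
        try:
            yield  # the engine would run the node's work here
        finally:
            elapsed = time.perf_counter() - start
            self.spans.append((node_name, elapsed))
            print(f"{node_name}: {elapsed * 1000:.2f} ms")


tracer = ConsoleTracer()
with tracer.span("greet"):
    result = {"message": "Hello, World!"}
```

Recording in a finally block ensures a span is emitted even when the node raises, which is what makes traces useful for debugging failed runs.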

Documentation

Need                 Go to
Runnable examples    examples/
Architecture         docs/architecture/
User guide           docs/guide/
API reference        https://batman1m2001-cyber.github.io/Operonx/

Contributing

See CONTRIBUTING.md.

git clone https://github.com/batman1m2001-cyber/Operonx.git
cd Operonx
uv sync --all-extras
pre-commit install
uv run pytest tests/ -m "not integration"

License

Apache 2.0

Download files

Source Distribution

operonx-0.7.0.tar.gz (199.6 kB)

Built Distribution

operonx-0.7.0-py3-none-any.whl (276.1 kB)

File details

Details for the file operonx-0.7.0.tar.gz.

File metadata

  • Download URL: operonx-0.7.0.tar.gz
  • Size: 199.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Algorithm     Hash digest
SHA256        21d2332bddd1b76998cd217d37676b2ed81f1e7a7f36514e253913eee638d6a1
MD5           376042c0c472fe0e295d9e8fae37fb2a
BLAKE2b-256   9e013128246d3b4cec53c22db34e003d006924c17d86aff9101507db46f7c198

File details

Details for the file operonx-0.7.0-py3-none-any.whl.

File metadata

  • Download URL: operonx-0.7.0-py3-none-any.whl
  • Size: 276.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Algorithm     Hash digest
SHA256        d53f6c70568c925da716a9e08df5b86ad82e6a8472cb8c62359c0d29a43df324
MD5           79214fc228a6ade806d7d670c580ad45
BLAKE2b-256   97f38bb2193dda7bd1035207e7494915d0788c368631f20d30b49652e366754a
