Modular agent orchestrator for reasoning pipelines

OrKa-Reasoning

Orchestrator Kit for Agentic Reasoning - OrKa is a modular AI orchestration system that transforms Large Language Models (LLMs) into composable agents capable of reasoning, fact-checking, and constructing answers with transparent traceability.

🚀 Features

  • Modular Agent Orchestration: Define and manage agents using intuitive YAML configurations.
  • Configurable Reasoning Paths: Utilize Redis streams to set up dynamic reasoning workflows.
  • Comprehensive Logging: Record and trace every step of the reasoning process for transparency.
  • Built-in Integrations: Support for OpenAI agents, web search functionalities, routers, and validation mechanisms.
  • Command-Line Interface (CLI): Execute YAML-defined workflows with ease.

🎥 OrKa Video Overview

Watch the video

Click the thumbnail above to watch a quick video demo of OrKa in action: how it uses YAML to orchestrate agents, log reasoning, and build transparent LLM workflows.

๐Ÿ† Why Choose OrKa?

OrKa stands out from other AI orchestration tools by focusing on transparency, modularity, and cognitive science-inspired workflows.

OrKa vs. Alternatives

| Feature | OrKa | LangChain | CrewAI | LlamaIndex |
| --- | --- | --- | --- | --- |
| Focus | Transparent reasoning | Chaining LLM calls | Multi-agent simulation | RAG & indexing |
| Configuration | YAML-driven | Python code | Python code | Python code |
| Traceability | Complete Redis logs | Limited | Basic | Limited |
| Modularity | Fully modular | Semi-modular | Agent-centric | Index-centric |
| Workflow Viz | Built-in (OrkaUI) | Third-party | Limited | Limited |
| Learning Curve | Low (YAML) | Medium | Medium | Medium |
| Reasoning Patterns | Decision trees, fork/join | Sequential | Role-based | Query-focused |

Architecture Overview

OrKa uses a modular architecture with clear separation of concerns:

┌─────────────┐     ┌─────────────────┐     ┌─────────────┐
│   YAML      │     │  Orchestrator   │     │   Agents    │
│ Definition  ├────►│  (Control Flow) ├────►│ (Reasoning) │
└─────────────┘     └────────┬────────┘     └──────┬──────┘
                             │                     │
                     ┌───────▼─────────────────────▼───────┐
                     │        Redis/Kafka Streams          │
                     │  (Message Passing & Observability)  │
                     └─────────────────────────────────┬───┘
                                                       │
                                               ┌───────▼────────┐
                                               │   OrKa UI      │
                                               │  (Monitoring)  │
                                               └────────────────┘

⚡ 5-Minute Quickstart

Get OrKa running in 5 minutes:

# Install via pip
pip install orka-reasoning

# Create a simple test.yml file
cat > test.yml << EOF
orchestrator:
  id: simple-test
  strategy: sequential
  queue: orka:test
  agents:
    - classifier
    - answer_builder

agents:
  - id: classifier
    type: openai-classification
    prompt: Classify this as [tech, science, other]
    options: [tech, science, other]
    queue: orka:classify

  - id: answer_builder
    type: openai-answer
    prompt: |
      Topic: {{ previous_outputs.classifier }}
      Generate a paragraph about: {{ input }}
    queue: orka:answer
EOF

# Set up your OpenAI key
export OPENAI_API_KEY=your-key-here

# Run OrKa with your test input
python -m orka.orka_cli ./test.yml "Quantum computing applications"

This will classify your input and generate a response based on the classification.


🛠️ Installation

PIP Installation

  1. Install the Package:

    pip install orka-reasoning
    
  2. Add ENV variables:

    export OPENAI_API_KEY=<your OpenAI API key>
    
  3. Install Additional Dependencies:

    pip install fastapi uvicorn
    
  4. Start the Services:

    python -m orka.orka_start
    

Local Development Installation

  1. Clone the Repository:

    git clone https://github.com/marcosomma/orka-reasoning.git
    cd orka-reasoning
    
  2. Install Dependencies:

    pip install -e .
    pip install fastapi uvicorn
    
  3. Start the Services:

    python -m orka.orka_start
    

Running OrkaUI Locally

To run the OrkaUI locally and connect it with your local OrkaBackend:

  1. Pull the OrkaUI Docker image:

    docker pull marcosomma/orka-ui:latest
    
  2. Run the OrkaUI container:

    docker run -d \
      -p 8080:80 \
      -e VITE_API_URL_LOCAL=http://localhost:8000/api/run@dist  \
      --name orka-ui \
      marcosomma/orka-ui:latest
    

This will start the OrkaUI on port 8080, connected to your local OrkaBackend running on port 8000.

📚 Common Patterns & Recipes

1. Question-Answering with Web Search

orchestrator:
  id: qa-system
  strategy: sequential
  agents:
    - search_needed
    - router
    - web_search
    - answer_builder

agents:
  - id: search_needed
    type: openai-binary
    prompt: Does this question require recent information? Return true/false.

  - id: router
    type: router
    params:
      decision_key: search_needed
      routing_map:
        "true": [web_search, answer_builder]
        "false": [answer_builder]

  - id: web_search
    type: duckduckgo
    prompt: Search for information about this query
    
  - id: answer_builder
    type: openai-answer
    prompt: |
      Build an answer using:
      {% if previous_outputs.search_needed == "true" %}
      Search results: {{ previous_outputs.web_search }}
      {% endif %}

2. Content Moderation Pipeline

orchestrator:
  id: content-moderation
  strategy: sequential
  agents:
    - toxic_check
    - sentiment
    - fork_analysis
    - join_analysis
    - final_decision

agents:
  - id: toxic_check
    type: openai-binary
    prompt: Is this content toxic or harmful? Return true/false.

  - id: fork_analysis
    type: fork
    targets:
      - [sentiment_analysis]
      - [bias_check]
      - [fact_validation]

  # ... other agents

3. Complex Decision Tree

orchestrator:
  id: approval-workflow
  strategy: decision-tree
  agents:
    - initial_check
    - router_approval

agents:
  - id: router_approval
    type: router
    params:
      decision_key: initial_check
      routing_map:
        "approved": [notify_success]
        "needs_revision": [request_changes]
        "rejected": [notify_rejection]

๐Ÿ“ YAML Configuration Structure

The YAML file specifies the agents and their interactions. Below is an example configuration:

orchestrator:
  id: fact-checker
  strategy: decision-tree
  queue: orka:fact-core
  agents:
    - domain_classifier
    - is_fact
    - validate_fact

agents:
  - id: domain_classifier
    type: openai-classification
    prompt: >
      Classify this question into one of the following domains:
      - science, geography, history, technology, date check, general
    options: [science, geography, history, technology, date check, general]
    queue: orka:domain

  - id: is_fact
    type: openai-binary
    prompt: >
      Is "{{ input }}" a factual assertion that can be verified externally? Answer TRUE or FALSE.
    queue: orka:is_fact

  - id: validate_fact
    type: openai-binary
    prompt: |
      Given the fact "{{ input }}" and the search results "{{ previous_outputs.duck_search }}", is the fact correct? Answer TRUE or FALSE.
    queue: validation_queue

For a comprehensive guide with detailed examples of all agent types, node configurations, and advanced patterns, see our YAML Configuration Guide.

From Monolithic Prompts to Agent Networks

OrKa helps you transform complex prompts like:

Classify this input as science/history/tech, then if it's a factual question requiring
research, search the web, extract relevant info, and compose a detailed answer using
correct formatting and citing sources.

Into a clear, maintainable agent network:

Input → Classification → Search Need Check → Router → Web Search → Answer Builder → Output

This provides transparency, reusability, and easier debugging at each step.
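
As a rough sketch, the flow above maps onto an orchestrator block like the one below; the agent ids are illustrative, and each agent would be declared as in Pattern 1 above.

orchestrator:
  id: decomposed-prompt   # illustrative id
  strategy: sequential
  agents:
    - classification
    - search_needed
    - router
    - web_search
    - answer_builder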

Key Sections

  • orchestrator: Defines the workflow itself:

    • id: Unique identifier for the workflow.
    • strategy: Execution strategy (e.g., sequential, decision-tree).
    • agents: Ordered list of agent ids to execute.
  • agents: Defines the individual agents involved in the workflow. Each agent has:

    • id: Unique identifier for the agent.
    • type: Specifies the agent's function (e.g., openai-binary, openai-classification, router, duckduckgo).
    • prompt: The instruction template sent to the agent (supports Jinja2).
    • queue: The Redis stream used for the agent's messages.

Settings such as the model and API keys are loaded from the .env file, keeping your configuration secure and flexible.
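
For example, a minimal .env for the configurations shown in this README only needs the OpenAI key; any additional variables depend on your own setup:

OPENAI_API_KEY=your-key-here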

🧪 Example

To see OrKa in action, use the provided example.yml configuration:

python -m orka.orka_cli ./example.yml "What is the capital of France?" --log-to-file

This will execute the workflow defined in example.yml with the input question, logging each reasoning step.

🔧 Requirements

  • Python 3.8 or higher
  • Redis server
  • Docker (for containerized deployment)
  • Required Python packages:
    • fastapi
    • uvicorn
    • redis
    • pyyaml
    • litellm
    • jinja2
    • google-api-python-client
    • duckduckgo-search
    • python-dotenv
    • openai
    • async-timeout
    • pydantic
    • httpx

📄 Usage

📄 OrKa Nodes and Agents Documentation

📊 Agents

BinaryAgent
  • Purpose: Classify an input into TRUE/FALSE.
  • Input: A dict containing a string under "input" key.
  • Output: A boolean value.
  • Typical Use: "Is this sentence a factual statement?"
ClassificationAgent
  • Purpose: Classify input text into predefined categories.
  • Input: A dict with "input".
  • Output: A string label from predefined options.
  • Typical Use: "Classify a sentence as science, history, or nonsense."
OpenAIBinaryAgent
  • Purpose: Use an LLM to classify a prompt as TRUE/FALSE.
  • Input: A dict with "input".
  • Output: A boolean.
  • Typical Use: "Is this a question?"
OpenAIClassificationAgent
  • Purpose: Use an LLM to classify input into multiple labels.
  • Input: Dict with "input".
  • Output: A string label.
  • Typical Use: "What domain does this question belong to?"
OpenAIAnswerBuilder
  • Purpose: Build a detailed answer from a prompt, usually enriched by previous outputs.
  • Input: Dict with "input" and "previous_outputs".
  • Output: A full textual answer.
  • Typical Use: "Answer a question combining search results and classifications."
DuckDuckGoAgent
  • Purpose: Perform a real-time web search using DuckDuckGo.
  • Input: Dict with "input" (the query string).
  • Output: A list of search result strings.
  • Typical Use: "Search for latest information about OrKa project."

🧵 Nodes

RouterNode
  • Purpose: Dynamically route execution based on a prior decision output.
  • Input: Dict with "previous_outputs".
  • Routing Logic: Matches a decision_key's value to a list of next agent ids.
  • Typical Use: "Route to search agents if external lookup needed; otherwise validate directly."
FailoverNode
  • Purpose: Execute multiple child agents in sequence until one succeeds.
  • Input: Dict with "input".
  • Behavior: Tries each child agent. If one crashes/fails, moves to next.
  • Typical Use: "Try web search with service A; if unavailable, fallback to service B."
FailingNode
  • Purpose: Intentionally fail. Used to simulate errors during execution.
  • Input: Dict with "input".
  • Output: Always throws an Exception.
  • Typical Use: "Test failover scenarios or resilience paths."
ForkNode
  • Purpose: Split execution into multiple parallel agent branches.
  • Input: Dict with "input" and "previous_outputs".
  • Behavior: Launches multiple child agents simultaneously. Supports sequential (default) or full parallel execution.
  • Options:
    • targets: List of agents to fork.
    • mode: "sequential" or "parallel".
  • Typical Use: "Validate topic and check if a summary is needed simultaneously."
JoinNode
  • Purpose: Wait for multiple forked agents to complete, then merge their outputs.
  • Input: Dict including fork_group_id (forked group name).
  • Behavior: Suspends execution until all required forked agents have completed. Then aggregates their outputs.
  • Typical Use: "Wait for parallel validations to finish before deciding next step.""

📊 Summary Table

| Name | Type | Core Purpose |
| --- | --- | --- |
| BinaryAgent | Agent | True/False classification |
| ClassificationAgent | Agent | Category classification |
| OpenAIBinaryAgent | Agent | LLM-backed binary decision |
| OpenAIClassificationAgent | Agent | LLM-backed category decision |
| OpenAIAnswerBuilder | Agent | Compose detailed answer |
| DuckDuckGoAgent | Agent | Perform web search |
| RouterNode | Node | Dynamically route next steps |
| FailoverNode | Node | Resilient sequential fallback |
| FailingNode | Node | Simulate failure |
| WaitForNode | Node | Wait for multiple dependencies |
| ForkNode | Node | Parallel execution split |
| JoinNode | Node | Parallel execution merge |

๐Ÿ” Troubleshooting

Common Issues

| Problem | Solution |
| --- | --- |
| "Cannot connect to Redis" | Ensure Redis is running: redis-cli ping should return PONG. Start Redis with redis-server if needed. |
| Agent returns unexpected results | Check the agent's prompt in your YAML file and make sure it is clear and specific. You can also inspect the Redis logs: redis-cli xrevrange orka:memory + - COUNT 5 |
| Binary agents return strings instead of booleans | As of the latest version, binary agents return "true" or "false" as strings. Update your router's routing_map to use string values: "true": instead of true: (see the example below this table). |
| Templating errors in prompts | Verify your Jinja2 syntax: {{ previous_outputs.agent_id }} is the correct format. Make sure the referenced agent has already executed. |
| Execution stops unexpectedly | Check for errors in the Redis logs, ensure all required agents are defined, and consider adding a fallback path with failover nodes. |
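
For instance, because binary agents emit the strings "true"/"false", a router should use quoted string keys in its routing_map (mirroring Pattern 1 above; the ids are illustrative):

  - id: router
    type: router
    params:
      decision_key: search_needed
      routing_map:
        "true": [web_search, answer_builder]   # string keys, not YAML booleans
        "false": [answer_builder]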

Debugging Tips

  1. Enable detailed logging:

    python -m orka.orka_cli ./your_config.yml "Your input" --log-to-file --verbose
    
  2. Inspect Redis streams for exact agent outputs:

    redis-cli xrevrange orka:your_agent_id + - COUNT 1
    
  3. Test agents individually using the testing tools in orka.agent_test

  4. Common timeout issues: Increase timeouts for web search or complex reasoning agents in your YAML config.

📊 Performance & Scalability

OrKa is designed to scale with your needs:

  • Single-server deployment: Handles hundreds of requests per minute
  • Clustered deployment: With Redis Cluster and multiple OrKa instances, can scale to thousands of requests per minute
  • Resource Utilization:
    • Memory: ~100MB base + ~10MB per concurrent request
    • CPU: Minimal, mostly I/O bound
    • Network: Depends on LLM API usage
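
For example, under these assumptions a single instance handling 50 concurrent requests would need roughly 100 MB + 50 × 10 MB ≈ 600 MB of memory.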

Optimization tips:

  • Use appropriate timeouts for each agent type
  • Implement caching for repetitive requests
  • For high-volume scenarios, consider Redis Cluster
  • Scale horizontally with multiple OrKa instances behind a load balancer

๐Ÿข Case Studies & Success Stories

Enterprise Knowledge Base Assistant

A Fortune 500 company implemented OrKa to build a knowledge base assistant that:

  • Classifies questions into 20+ categories
  • Routes to appropriate search strategies based on question type
  • Provides transparent reasoning paths for compliance
  • Reduced average response time by 40% compared to monolithic prompt approach

Academic Research Tool

Research teams use OrKa to:

  • Create reproducible literature analysis workflows
  • Document reasoning paths for peer review
  • Chain specialized tools in transparent pipelines
  • Generate research summaries with clear attribution

Content Moderation System

A content platform used OrKa to build a moderation system that:

  • Parallelizes content checks across multiple dimensions
  • Provides clear explanation for moderation decisions
  • Achieves 99.7% agreement with human moderators
  • Scales to handle thousands of submissions per hour

📚 Documentation

๐Ÿค Contributing

We welcome contributions! Please see our CONTRIBUTING.md for guidelines.

📜 License & Attribution

This project is licensed under the Apache 2.0 License. For more details, refer to the LICENSE file.
