
Modular agent orchestrator for reasoning pipelines


OrKa - AI Agent Orchestration


What OrKa Does

OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.

Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:

  • YAML configuration instead of code
  • Built-in memory that remembers and forgets intelligently
  • Local LLM support for privacy
  • Simple setup with Docker

Basic Example

Instead of writing Python code like this:

# Complex Python orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)

You write a YAML file like this:

orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
    
  - id: web_search  
    type: search
    prompt: "Search: {{ input }}"
    
  - id: answer
    type: local_llm
    model: llama3.2
    prompt: "Answer based on: {{ previous_outputs }}"
    
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"

Installation

# Install OrKa
pip install orka-reasoning

# Start RedisStack (for memory)
# Automatically tries native RedisStack first, then Docker
orka-start

# Memory TUI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"

RedisStack Setup Options

OrKa needs RedisStack for its memory system. When you run orka-start, it automatically:

  1. Tries native RedisStack (if installed on your system)
  2. Falls back to Docker (if Docker is running)
  3. Shows install instructions (if neither is available)

Choose your preferred method:

  • Docker (easiest): Just have Docker running; orka-start handles everything
  • Native (no Docker needed):
    • macOS: brew install redis-stack
    • Ubuntu: sudo apt install redis-stack-server
    • Windows: Download from redis.io
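The fallback order above boils down to a simple preference chain. As an illustration only (a hypothetical helper, not OrKa's actual startup code), it could be sketched as a pure function:

```python
def pick_redis_backend(native_installed: bool, docker_running: bool) -> str:
    """Mirror the orka-start fallback order described above."""
    if native_installed:           # 1. prefer a native RedisStack install
        return "native"
    if docker_running:             # 2. otherwise fall back to Docker
        return "docker"
    return "show-instructions"     # 3. neither available: print install steps
```

Either backend gives OrKa the same Redis endpoint, so workflows behave identically regardless of which path wins.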

How It Works

1. Agent Types

OrKa provides several agent types you can use in your workflows:

  • memory - Read from or write to persistent memory
  • local_llm - Use local models (Ollama, LM Studio)
  • openai-* - Use OpenAI models
  • search - Web search
  • router - Conditional branching
  • fork/join - Parallel processing
  • loop - Iterative workflows
  • plan_validator - Validate and critique proposed execution paths
  • graph_scout - [BETA] Find best path for workflow execution

2. Memory System

OrKa includes a memory system that:

  • Stores conversations and facts
  • Searches semantically (finds related content, not just exact matches)
  • Automatically forgets old, unimportant information
  • Uses Redis for fast retrieval
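"Searches semantically" means memories are compared as embedding vectors rather than matched on keywords. A toy sketch of the idea (in OrKa proper, embeddings come from a real model and Redis performs the nearest-neighbour search; the two-dimensional vectors here are made up for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "semantic" store: key = stored memory, value = its embedding.
store = {
    "fact about cats": [0.9, 0.1],
    "fact about stocks": [0.1, 0.9],
}
query_vec = [0.8, 0.2]  # pretend embedding of "tell me about cats"
best = max(store, key=lambda k: cosine(store[k], query_vec))
```

The query retrieves the cat fact even though the word "cats" never has to appear in it verbatim, which is the property that makes memory useful for rephrased questions.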

3. Workflow Execution

When you run orka run workflow.yml "input", OrKa:

  1. Reads your YAML configuration
  2. Creates the agents you defined
  3. Runs them in the order you specified
  4. Passes outputs between agents
  5. Returns the final result
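The steps above amount to a loop that threads each agent's output into the next agent's context. A minimal sketch, assuming agents are just callables keyed by id (hypothetical, not OrKa's real internals):

```python
def run_workflow(agents, input_text):
    """Run (agent_id, agent_fn) pairs in order, passing outputs along."""
    previous_outputs = {}
    result = None
    for agent_id, agent_fn in agents:            # 3. run in declared order
        result = agent_fn(input_text, previous_outputs)
        previous_outputs[agent_id] = result      # 4. pass outputs between agents
    return result                                # 5. return the final result

# Two stand-in "agents": one transforms the input, the next reads its output.
agents = [
    ("upper", lambda inp, prev: inp.upper()),
    ("answer", lambda inp, prev: f"echo: {prev['upper']}"),
]
```

This is also why prompts can reference `{{ previous_outputs }}`: by the time an agent runs, everything upstream is already in that dictionary.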

4. Local LLM Support

OrKa works with local models through:

  • Ollama - ollama pull llama3.2 then use provider: ollama
  • LM Studio - Point to your local API endpoint
  • Any OpenAI-compatible API endpoint
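Putting the pieces together, a local-model agent combines the `local_llm` type with a provider. This fragment only recombines fields shown elsewhere on this page (`type: local_llm`, `model: llama3.2`, `provider: ollama`); treat it as a sketch, not a verified schema:

```yaml
agents:
  - id: answer
    type: local_llm
    provider: ollama      # after running `ollama pull llama3.2`
    model: llama3.2
    prompt: "Answer: {{ input }}"
```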

📚 Complete Agent & Node Reference

🎯 NEW: Comprehensive Documentation for Every Agent, Node & Tool →

Detailed documentation for all agent types, control flow nodes, and tools:

  • 🤖 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
  • 💾 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
  • 🔀 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
  • 🔧 2 Search Tools - DuckDuckGo, RAG

Each with working examples, parameters, best practices, and troubleshooting!


Common Patterns

Memory-First Q&A

# Check memory first, search web if nothing found
agents:
  - id: check_memory
    type: memory
    operation: read

  - id: binary_agent
    type: local_llm
    prompt: |
      Given this memory {{get_agent_response('check_memory')}} and this input {{ input }},
      is an internet search required?
      Answer only with 'true' or 'false'.
    
  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]

Parallel Processing

# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
      
  - id: combine_results
    group: parallel_analysis
    type: join

Iterative Improvement

# Keep improving until quality threshold met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]

Comparison to Alternatives

| Feature            | OrKa                        | LangChain        | CrewAI          |
| ------------------ | --------------------------- | ---------------- | --------------- |
| Configuration      | YAML files                  | Python code      | Python code     |
| Memory             | Built-in with decay         | External/manual  | External/manual |
| Local LLMs         | First-class support         | Via adapters     | Limited         |
| Parallel execution | Native fork/join            | Manual threading | Agent-based     |
| Learning           | Automatic memory management | Manual           | Manual          |

Quick Start Examples

1. Simple Q&A with Memory

# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"

2. Web Search + Memory

# Copy example  
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"

3. Local LLM Chat

# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"

Documentation

🌟 Agent & Node Reference Index →

Complete 1-to-1 documentation for every agent, node, and tool with examples, parameters, and best practices.

Core Guides

Getting Help

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Apache 2.0 License - see LICENSE for details.
