
Modular agent orchestrator for reasoning pipelines


OrKa - AI Agent Orchestration


What OrKa Does

OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.

Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:

  • YAML configuration instead of code
  • Built-in memory that remembers and forgets intelligently
  • Local LLM support for privacy
  • Simple setup with Docker

Basic Example

Instead of writing Python code like this:

# Complex Python orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)

You write a YAML file like this:

orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
    
  - id: web_search  
    type: search
    prompt: "Search: {{ input }}"
    
  - id: answer
    type: local_llm
    model: llama3.2:3b
    prompt: "Answer based on: {{ previous_outputs }}"
    
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
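
The `{{ input }}` and `{{ previous_outputs }}` placeholders are Jinja-style template variables: before each agent runs, its prompt is rendered against the current workflow context. A minimal sketch of the idea (the `render_prompt` helper below is illustrative, not OrKa's actual API):

```python
import re

def render_prompt(template: str, context: dict) -> str:
    """Substitute {{ var }} and {{ var.attr }} placeholders from a context dict."""
    def resolve(match):
        path = match.group(1).strip()
        value = context
        for part in path.split("."):   # walk dotted paths like previous_outputs.answer
            value = value[part]
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", resolve, template)

context = {
    "input": "What is machine learning?",
    "previous_outputs": {"answer": "ML is pattern learning from data."},
}
print(render_prompt("Store: {{ input }} -> {{ previous_outputs.answer }}", context))
# → Store: What is machine learning? -> ML is pattern learning from data.
```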

Installation

# Install OrKa
pip install orka-reasoning

# Start RedisStack + Backend + UI
# Automatically tries native RedisStack first, then Docker
# UI available at http://localhost:8080
orka-start

# Memory TUI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"

Visual Workflow Builder (OrKa UI)

Don't want to write YAML by hand? Use OrKa UI - a drag-and-drop visual editor:

Automatic Start (Recommended)

# UI automatically starts with orka-start
orka-start

# Access at http://localhost:8080

Manual Start (Alternative)

# Pull and run the UI manually
docker pull marcosomma/orka-ui:latest
docker run -d -p 8080:80 --name orka-ui \
  -e VITE_API_URL_LOCAL=http://localhost:8000/api/run@dist \
  marcosomma/orka-ui:latest

# Access at http://localhost:8080

Configuration

# Skip UI (Redis + Backend only)
export ORKA_DISABLE_UI=true
orka-start

# Use cached Docker image (faster startup)
export ORKA_UI_SKIP_PULL=true
orka-start

Features:

  • 🎨 Drag-and-drop workflow builder
  • 🔧 Visual node configuration
  • 📤 One-click YAML export
  • 🚀 Built-in workflow execution
  • 📚 Example workflow library

📖 Read the full OrKa UI documentation →

What orka-start Provides

When you run orka-start, it automatically sets up:

  1. RedisStack (memory backend) - tries native first, then Docker
  2. OrKa Backend API (port 8000) - workflow execution engine
  3. OrKa UI (port 8080) - visual workflow builder (if Docker available)

RedisStack Setup:

  • Tries native RedisStack (if installed on your system)
  • Falls back to Docker (if Docker is running)
  • Shows install instructions (if neither is available)

Choose your preferred method:

  • Docker (easiest): just have Docker running; orka-start handles everything
  • Native (no Docker needed):
    • macOS: brew install redis-stack
    • Ubuntu: sudo apt install redis-stack-server
    • Windows: Download from redis.io

How It Works

1. Agent Types

OrKa provides several agent types you can use in your workflows:

  • memory - Read from or write to persistent memory
  • local_llm - Use local models (Ollama, LM Studio)
  • openai-* - Use OpenAI models
  • search - Web search
  • router - Conditional branching
  • fork/join - Parallel processing
  • loop - Iterative workflows
  • plan_validator - Validate and critique proposed execution paths
  • graph_scout - [BETA] Find best path for workflow execution

2. Memory System

OrKa includes a memory system that:

  • Stores conversations and facts
  • Searches semantically (finds related content, not just exact matches)
  • Automatically forgets old, unimportant information
  • Uses Redis for fast retrieval
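
In YAML, memory behavior is configured per agent. A hedged sketch of a memory writer with decay enabled (the `decay` keys shown here are illustrative guesses at the schema, not confirmed field names; check the memory documentation for the exact keys):

```yaml
- id: session_memory
  type: memory
  operation: write
  prompt: "Store: {{ input }}"
  # Illustrative decay settings -- exact key names may differ:
  decay:
    enabled: true
    short_term_hours: 2     # unimportant entries expire quickly
    long_term_hours: 168    # important entries persist for about a week
```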

3. Workflow Execution

When you run orka run workflow.yml "input", OrKa:

  1. Reads your YAML configuration
  2. Creates the agents you defined
  3. Runs them in the order you specified
  4. Passes outputs between agents
  5. Returns the final result
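
Conceptually, the steps above amount to sequential execution over a shared context. A simplified sketch (not OrKa's internals; `EchoAgent` is a hypothetical stand-in):

```python
def run_workflow(agents, user_input):
    """Run agents in order, accumulating each agent's output in a shared context."""
    context = {"input": user_input, "previous_outputs": {}}
    result = None
    for agent in agents:
        result = agent.run(context)                      # agent sees input + prior outputs
        context["previous_outputs"][agent.id] = result   # later agents can reference it
    return result                                        # final agent's output is the answer

# Tiny stand-in agent for illustration:
class EchoAgent:
    def __init__(self, agent_id):
        self.id = agent_id
    def run(self, context):
        return f"{self.id} saw: {context['input']}"

print(run_workflow([EchoAgent("a"), EchoAgent("b")], "hello"))
# → b saw: hello
```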

4. Local LLM Support

OrKa works with local models through:

  • Ollama - ollama pull llama3.2 then use provider: ollama
  • LM Studio - Point to your local API endpoint
  • Any compatible LLM API endpoint
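
Combining the options above, a local-LLM agent entry might look like this (`provider: ollama` and the model name come from the docs above; the commented `host` key is an assumed override for a non-default endpoint, not a confirmed option):

```yaml
- id: local_answer
  type: local_llm
  provider: ollama            # or point at an LM Studio endpoint
  model: llama3.2:3b
  # host: http://localhost:11434   # assumed override key -- verify against the docs
  prompt: "Answer: {{ input }}"
```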

📚 Complete Agent & Node Reference

🎯 NEW: Comprehensive Documentation for Every Agent, Node & Tool →

Detailed documentation for all agent types, control flow nodes, and tools:

  • 🤖 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
  • 💾 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
  • 🔀 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
  • 🔧 2 Search Tools - DuckDuckGo, RAG

Each with working examples, parameters, best practices, and troubleshooting!


Common Patterns

Memory-First Q&A

# Check memory first, search web if nothing found
agents:
  - id: check_memory
    type: memory
    operation: read

  - id: binary_agent
    type: local_llm
    model: llama3.2:3b
    prompt: |
      Given this memory {{ get_agent_response('check_memory') }} and this input {{ input }},
      is a web search required?
      Answer only 'true' or 'false'.

  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]

Parallel Processing

# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
      
  - id: combine_results
    group: parallel_analysis
    type: join

Iterative Improvement

# Keep improving until quality threshold met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]

Comparison to Alternatives

Feature            | OrKa                        | LangChain        | CrewAI
Configuration      | YAML files                  | Python code      | Python code
Memory             | Built-in with decay         | External/manual  | External/manual
Local LLMs         | First-class support         | Via adapters     | Limited
Parallel execution | Native fork/join            | Manual threading | Agent-based
Learning           | Automatic memory management | Manual           | Manual

Quick Start Examples

1. Simple Q&A with Memory

# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"

2. Web Search + Memory

# Copy example  
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"

3. Local LLM Chat

# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"

Documentation

📚 Documentation Index → - Start Here!

Complete documentation hub with organized guides, tutorials, and references for all OrKa features.

Quick links:

Getting Help

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Apache 2.0 License - see LICENSE for details.

Download files

Download the file for your platform.

Source Distribution

orka_reasoning-0.9.7.tar.gz (511.8 kB)


Built Distribution


orka_reasoning-0.9.7-py3-none-any.whl (482.2 kB)


File details

Details for the file orka_reasoning-0.9.7.tar.gz.

File metadata

  • Download URL: orka_reasoning-0.9.7.tar.gz
  • Upload date:
  • Size: 511.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for orka_reasoning-0.9.7.tar.gz:

  • SHA256: 7db60829e5a1d2e5662d657ef68ea70ae1215101837245d8a75720e6ee279ec3
  • MD5: 7b6bce205d5ad49818a0edfd3e61532a
  • BLAKE2b-256: 850616e7473aed621d4acf5f4dce9f380e0de078f7b23d7ba8d56a65e4aa3823


File details

Details for the file orka_reasoning-0.9.7-py3-none-any.whl.

File metadata

  • Download URL: orka_reasoning-0.9.7-py3-none-any.whl
  • Upload date:
  • Size: 482.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.12

File hashes

Hashes for orka_reasoning-0.9.7-py3-none-any.whl:

  • SHA256: 78ed8cf97c66b0fe02511ca4e1079fd6928aca7752d773889f2d3ece04984a66
  • MD5: 9f4d4e49408a25b0106688c2045cb597
  • BLAKE2b-256: 838e3e903f558d90eb57e98553ccd77f7967c251af70695dbad2c9232a9a7d1c

