
Modular agent orchestrator for reasoning pipelines

Project description

OrKa - AI Agent Orchestration


What OrKa Does

OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.

Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:

  • YAML configuration instead of code
  • Built-in memory that remembers and forgets intelligently
  • Local LLM support for privacy
  • Simple setup with Docker

Basic Example

Instead of writing Python code like this:

# Complex Python orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)

You write a YAML file like this:

orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
    
  - id: web_search  
    type: search
    prompt: "Search: {{ input }}"
    
  - id: answer
    type: local_llm
    model: llama3.2
    prompt: "Answer based on: {{ previous_outputs }}"
    
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
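The {{ ... }} placeholders are rendered against the current input and the outputs of earlier agents before each prompt is sent. A minimal sketch of that substitution step (the regex-based renderer below is purely illustrative, not OrKa's actual template engine):

```python
import re

def render_prompt(template: str, context: dict) -> str:
    """Replace {{ dotted.path }} placeholders with values from context."""
    def resolve(match: re.Match) -> str:
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # walk dotted paths like previous_outputs.answer
        return str(value)
    return re.sub(r"\{\{(.*?)\}\}", resolve, template)

context = {
    "input": "What is machine learning?",
    "previous_outputs": {"answer": "ML is pattern learning from data."},
}
prompt = render_prompt("Store: {{ input }} -> {{ previous_outputs.answer }}", context)
# prompt == "Store: What is machine learning? -> ML is pattern learning from data."
```

This is why later agents can reference earlier ones by id, as memory_store does with previous_outputs.answer above.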

Installation

# Install OrKa
pip install orka-reasoning

# Start Redis (for memory)
orka-start

# Watch memory in a terminal UI (TUI)
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"

How It Works

1. Agent Types

OrKa provides several agent types you can use in your workflows:

  • memory - Read from or write to persistent memory
  • local_llm - Use local models (Ollama, LM Studio)
  • openai-* - Use OpenAI models
  • search - Web search
  • router - Conditional branching
  • fork/join - Parallel processing
  • loop - Iterative workflows
  • plan_validator - Validate and critique proposed execution paths
  • graph_scout - [BETA] Find best path for workflow execution
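Each type: string in the YAML maps to an agent implementation. A hedged sketch of how such a type registry might be wired up (the class names and registry here are invented for illustration, not OrKa's internals):

```python
from typing import Callable, Dict

# Hypothetical registry: maps YAML `type:` values to agent classes.
AGENT_REGISTRY: Dict[str, Callable] = {}

def register(type_name: str):
    """Class decorator that records an agent class under its YAML type name."""
    def decorator(cls):
        AGENT_REGISTRY[type_name] = cls
        return cls
    return decorator

class Agent:
    def __init__(self, agent_id: str, **config):
        self.agent_id = agent_id
        self.config = config  # remaining YAML keys (operation, prompt, ...)

@register("memory")
class MemoryAgent(Agent):
    pass

@register("router")
class RouterAgent(Agent):
    pass

def build_agent(spec: dict) -> Agent:
    """Instantiate an agent from a YAML-derived dict like {'id': ..., 'type': ...}."""
    spec = dict(spec)  # copy so we can pop without mutating the caller's dict
    cls = AGENT_REGISTRY[spec.pop("type")]
    return cls(spec.pop("id"), **spec)

agent = build_agent({"id": "memory_search", "type": "memory", "operation": "read"})
```

The same lookup pattern extends naturally to the other types in the list above.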

2. Memory System

OrKa includes a memory system that:

  • Stores conversations and facts
  • Searches semantically (finds related content, not just exact matches)
  • Automatically forgets old, unimportant information
  • Uses Redis for fast retrieval
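A toy sketch of how retrieval with decay can combine semantic similarity with recency (the exponential weighting and half-life below are illustrative assumptions; OrKa's actual scoring and HNSW-backed search differ):

```python
import time

def decayed_score(similarity: float, stored_at: float,
                  now: float, half_life_s: float = 3600.0) -> float:
    """Weight a semantic similarity score by an exponential recency decay."""
    age = max(0.0, now - stored_at)
    return similarity * 0.5 ** (age / half_life_s)

now = time.time()
fresh = decayed_score(0.9, stored_at=now, now=now)          # just stored: full weight
stale = decayed_score(0.9, stored_at=now - 7200, now=now)   # two half-lives old
```

Under this scheme an old memory with the same similarity ranks lower, which is one simple way "forgetting" can fall out of scoring rather than deletion.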

3. Workflow Execution

When you run orka run workflow.yml "input", OrKa:

  1. Reads your YAML configuration
  2. Creates the agents you defined
  3. Runs them in the order you specified
  4. Passes outputs between agents
  5. Returns the final result
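Those steps amount to a simple sequential runner. A minimal sketch, with agent behavior stubbed as plain functions (the real orchestrator also handles routing, forking, and error recovery):

```python
def run_workflow(agents, user_input):
    """Run agents in declared order, passing accumulated outputs to each one."""
    previous_outputs = {}
    result = None
    for agent_id, agent_fn in agents:
        result = agent_fn(user_input, previous_outputs)
        previous_outputs[agent_id] = result  # later agents can read this by id
    return result

# Stub agents standing in for memory/LLM/search implementations.
agents = [
    ("memory_search", lambda q, prev: f"memory hits for {q}"),
    ("answer", lambda q, prev: f"answer using {prev['memory_search']}"),
]
final = run_workflow(agents, "What is ML?")
# final == "answer using memory hits for What is ML?"
```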

4. Local LLM Support

OrKa works with local models through:

  • Ollama - ollama pull llama3.2 then use provider: ollama
  • LM Studio - Point to your local API endpoint
  • Any OpenAI-compatible API endpoint
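For example, a non-streaming request against a local Ollama server typically targets its documented /api/generate route with a JSON body like the one below (the actual HTTP call is left commented out so the sketch stays self-contained; the default port 11434 is Ollama's, not OrKa's):

```python
import json
from urllib import request

def ollama_payload(model: str, prompt: str) -> bytes:
    """Build a non-streaming generate request body for a local Ollama server."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

body = ollama_payload("llama3.2", "Explain machine learning simply")
req = request.Request(
    "http://localhost:11434/api/generate",  # Ollama's default local endpoint
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = request.urlopen(req)  # uncomment with a local Ollama running
```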

📚 Complete Agent & Node Reference

🎯 NEW: Comprehensive Documentation for Every Agent, Node & Tool →

Detailed documentation for all agent types, control flow nodes, and tools:

  • 🤖 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
  • 💾 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
  • 🔀 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
  • 🔧 2 Search Tools - DuckDuckGo, RAG

Each with working examples, parameters, best practices, and troubleshooting!


Common Patterns

Memory-First Q&A

# Check memory first, search web if nothing found
agents:
  - id: check_memory
    type: memory
    operation: read

  - id: binary_agent
    type: local_llm
    prompt: |
      Given this memory {{get_agent_response('check_memory')}} and this input {{ input }},
      is a web search required?
      Answer only with 'true' or 'false'.

  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]
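The router's job reduces to a dictionary lookup on the previous agent's output. A sketch of that dispatch (the string normalization is an assumption about how one might harden it against LLM noise, not necessarily what OrKa does):

```python
def route(routing_map, decision_value, default=None):
    """Pick the next agent list based on a prior agent's output string."""
    key = str(decision_value).strip().lower()  # LLMs may add whitespace/case noise
    return routing_map.get(key, default or [])

routing_map = {
    "true": ["web_search", "answer_from_web"],  # a web search is needed
    "false": ["answer_from_memory"],            # memory already has the answer
}
next_agents = route(routing_map, " True ")
# next_agents == ["web_search", "answer_from_web"]
```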

Parallel Processing

# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
      
  - id: combine_results
    group: parallel_analysis
    type: join

Iterative Improvement

# Keep improving until quality threshold met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]
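The loop node's contract is: re-run the inner workflow until a score crosses score_threshold or max_loops is exhausted. A sketch of that control flow with a stubbed improver and scorer:

```python
def loop_until(improve, score, max_loops=5, score_threshold=0.85, draft=""):
    """Re-run improve/score until the threshold is met or loops run out."""
    s = 0.0
    for i in range(max_loops):
        draft = improve(draft)
        s = score(draft)
        if s >= score_threshold:
            return draft, s, i + 1  # converged early
    return draft, s, max_loops      # gave up at the loop cap

# Stubs: each pass appends detail; the score grows with draft length.
improve = lambda d: d + "!"
score = lambda d: 0.3 * len(d)
result, final_score, loops = loop_until(improve, score)
# threshold 0.85 is first reached on the third pass
```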

Comparison to Alternatives

Feature             | OrKa                        | LangChain        | CrewAI
--------------------|-----------------------------|------------------|----------------
Configuration       | YAML files                  | Python code      | Python code
Memory              | Built-in with decay         | External/manual  | External/manual
Local LLMs          | First-class support         | Via adapters     | Limited
Parallel execution  | Native fork/join            | Manual threading | Agent-based
Learning            | Automatic memory management | Manual           | Manual

Quick Start Examples

1. Simple Q&A with Memory

# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"

2. Web Search + Memory

# Copy example  
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"

3. Local LLM Chat

# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"

Documentation

🌟 Agent & Node Reference Index →

Complete 1-to-1 documentation for every agent, node, and tool with examples, parameters, and best practices.

Core Guides

Getting Help

Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

License

Apache 2.0 License - see LICENSE for details.

Project details


Download files

Download the file for your platform.

Source Distribution

orka_reasoning-0.9.5.tar.gz (442.7 kB)

Uploaded Source

Built Distribution


orka_reasoning-0.9.5-py3-none-any.whl (426.0 kB)

Uploaded Python 3

File details

Details for the file orka_reasoning-0.9.5.tar.gz.

File metadata

  • Download URL: orka_reasoning-0.9.5.tar.gz
  • Upload date:
  • Size: 442.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.13

File hashes

Hashes for orka_reasoning-0.9.5.tar.gz

  • SHA256: 50444aad25c349c441eb8a76936e636e35c9b66bf14e972cd17294f2f16bcb7e
  • MD5: 7456deab62c2bc37df2ce6f4f13564ce
  • BLAKE2b-256: d5aceb1e43556b4402727f503e136fb79fb1f3b802eaa7953f1eb99a439edb1c


File details

Details for the file orka_reasoning-0.9.5-py3-none-any.whl.

File metadata

  • Download URL: orka_reasoning-0.9.5-py3-none-any.whl
  • Upload date:
  • Size: 426.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.11.13

File hashes

Hashes for orka_reasoning-0.9.5-py3-none-any.whl

  • SHA256: 3a3da731a5067e314bd56ec22052ce51b01d109070624cd1da5d450c01eda400
  • MD5: 4dec6ca61aff07ecfa62239a6528b8c1
  • BLAKE2b-256: aa3c9f6e9b2a10ca51dc98d8d68f13fd35bb3d37496304f2b047edcb0d3b745f

