Modular agent orchestrator for reasoning pipelines
# OrKa - AI Agent Orchestration

## What OrKa Does
OrKa lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want - like "search memory, then ask an AI, then save the result" - and OrKa handles the execution.
Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:
- YAML configuration instead of code
- Built-in memory that remembers and forgets intelligently
- Local LLM support for privacy
- Simple setup with Docker
## Basic Example

Instead of writing Python code like this:

```python
# Complex Python orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)
```
You write a YAML file like this:

```yaml
orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
  - id: web_search
    type: search
    prompt: "Search: {{ input }}"
  - id: answer
    type: local_llm
    model: llama3.2
    prompt: "Answer based on: {{ previous_outputs }}"
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
```
## Installation

```shell
# Install OrKa
pip install orka-reasoning

# Start Redis (for memory)
orka-start

# Watch memory in a terminal UI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"
```
## How It Works

### 1. Agent Types

OrKa provides several agent types you can use in your workflows:

- `memory` - Read from or write to persistent memory
- `local_llm` - Use local models (Ollama, LM Studio)
- `openai-*` - Use OpenAI models
- `search` - Web search
- `router` - Conditional branching
- `fork`/`join` - Parallel processing
- `loop` - Iterative workflows
- `GraphScout` - [BETA] Find the best path for workflow execution
### 2. Memory System
OrKa includes a memory system that:
- Stores conversations and facts
- Searches semantically (finds related content, not just exact matches)
- Automatically forgets old, unimportant information
- Uses Redis for fast retrieval
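The "forgets old, unimportant information" behavior can be pictured as a retention score that decays with age. The exponential rule and half-life below are assumptions for illustration, not OrKa's actual decay policy:

```python
import math

def retention_score(importance, age_hours, half_life_hours=24.0):
    """Importance fades exponentially: it halves every half_life_hours.

    Entries whose score drops below some cutoff would be forgotten.
    (Illustrative formula only; not OrKa's internals.)
    """
    return importance * 0.5 ** (age_hours / half_life_hours)

fresh = retention_score(0.8, age_hours=1)   # barely decayed
stale = retention_score(0.8, age_hours=72)  # three half-lives old
assert fresh > stale
```

Under this rule, a fact that was important yesterday still outranks a trivial note from an hour ago only if its decayed score remains higher.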
### 3. Workflow Execution

When you run `orka run workflow.yml "input"`, OrKa:
- Reads your YAML configuration
- Creates the agents you defined
- Runs them in the order you specified
- Passes outputs between agents
- Returns the final result
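The steps above can be sketched as a minimal sequential runner. The agent representation and function signatures here are hypothetical, chosen only to show how outputs flow from one agent to the next:

```python
def run_workflow(agents, user_input):
    """Run agents in order, accumulating each agent's output by id."""
    previous_outputs = {}
    for agent in agents:
        # Each agent sees the original input plus everything produced so far
        result = agent["fn"](user_input, previous_outputs)
        previous_outputs[agent["id"]] = result
    # The last agent's output is the workflow result
    return previous_outputs[agents[-1]["id"]]

# Toy agents standing in for memory/LLM/search steps
agents = [
    {"id": "upper", "fn": lambda inp, prev: inp.upper()},
    {"id": "wrap",  "fn": lambda inp, prev: f"answer: {prev['upper']}"},
]
print(run_workflow(agents, "hello"))  # answer: HELLO
```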
### 4. Local LLM Support

OrKa works with local models through:

- Ollama - `ollama pull llama3.2`, then use `provider: ollama`
- LM Studio - Point to your local API endpoint
- Any OpenAI-compatible API
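Both Ollama and LM Studio expose OpenAI-compatible HTTP endpoints, which is what makes "point to your local API endpoint" work. A minimal sketch of constructing such a request (the port and model name are assumptions; LM Studio defaults to port 1234, Ollama to 11434):

```python
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    """Build (but do not send) an OpenAI-style chat completion request."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# No request is sent here; urllib.request.urlopen(req) would send it.
req = build_chat_request("http://localhost:1234", "llama3.2", "Hello")
print(req.full_url)
```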
## Common Patterns

### Memory-First Q&A

```yaml
# Check memory first, search the web if nothing is found
agents:
  - id: check_memory
    type: memory
    operation: read
  - id: binary_agent
    type: local_llm
    prompt: |
      Given this memory: {{ get_agent_response('check_memory') }}
      and this input: {{ input }}
      Can the question be answered from memory alone?
      Answer only 'true' or 'false'.
  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [answer_from_memory]
      "false": [web_search, answer_from_web]
```
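In plain Python, the router's job amounts to a dictionary lookup: read a previous agent's output under `decision_key`, normalize it, and return the matching branch. This sketch is illustrative, not OrKa's implementation:

```python
def route(decision_key, routing_map, previous_outputs):
    """Pick the next agents based on an earlier agent's answer."""
    decision = str(previous_outputs[decision_key]).strip().lower()
    # Unknown decisions fall through to an empty branch
    return routing_map.get(decision, [])

routing_map = {
    "true": ["answer_from_memory"],
    "false": ["web_search", "answer_from_web"],
}
print(route("binary_agent", routing_map, {"binary_agent": "true"}))
# ['answer_from_memory']
```

Normalizing the decision (strip + lowercase) matters in practice, since LLM outputs often arrive with stray whitespace or capitalization.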
### Parallel Processing

```yaml
# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
  - id: combine_results
    type: join
    group: parallel_analysis
```
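The fork/join pattern can be sketched with Python's standard thread pool: fork submits every branch, join collects all their results. The analyzer functions are toy stand-ins, not OrKa agents:

```python
from concurrent.futures import ThreadPoolExecutor

def sentiment_analyzer(text):
    return "positive" if "good" in text else "neutral"

def toxicity_checker(text):
    return "toxic" if "hate" in text else "clean"

def fork_join(text, branches):
    """Run every branch concurrently, then join all results by name."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, text) for name, fn in branches.items()}
        # .result() blocks until each branch finishes - the "join" step
        return {name: f.result() for name, f in futures.items()}

result = fork_join("a good day", {
    "sentiment": sentiment_analyzer,
    "toxicity": toxicity_checker,
})
print(result)  # {'sentiment': 'positive', 'toxicity': 'clean'}
```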
### Iterative Improvement

```yaml
# Keep improving until the quality threshold is met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]
```
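The loop agent's control flow boils down to: rerun the internal workflow until the score clears the threshold or `max_loops` is exhausted. A sketch with stand-in improve/score functions (not OrKa's internals):

```python
def improvement_loop(draft, improve, score, max_loops=5, threshold=0.85):
    """Iterate improve() until score() meets the threshold or loops run out."""
    for _ in range(max_loops):
        if score(draft) >= threshold:
            break  # quality threshold met
        draft = improve(draft)
    return draft

# Toy stand-ins: "quality" is just progress toward a target length of 5
improve = lambda d: d + "!"
score = lambda d: min(len(d) / 5, 1.0)
print(improvement_loop("ok", improve, score))  # ok!!!
```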
## Comparison to Alternatives
| Feature | OrKa | LangChain | CrewAI |
|---|---|---|---|
| Configuration | YAML files | Python code | Python code |
| Memory | Built-in with decay | External/manual | External/manual |
| Local LLMs | First-class support | Via adapters | Limited |
| Parallel execution | Native fork/join | Manual threading | Agent-based |
| Learning | Automatic memory management | Manual | Manual |
## Quick Start Examples

### 1. Simple Q&A with Memory

```shell
# Copy the example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"
```
### 2. Web Search + Memory

```shell
# Copy the example
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"
```
### 3. Local LLM Chat

```shell
# Start Ollama
ollama pull llama3.2

# Copy the example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"
```
## Documentation
- Getting Started Guide - Detailed setup and first workflows
- Agent Types - All available agent types and configurations
- Memory System - How memory works and configuration
- YAML Configuration - Complete YAML reference
- Examples - 15+ ready-to-use workflow templates
## Getting Help
- GitHub Issues - Bug reports and feature requests
- Documentation - Full documentation
- Examples - Working examples you can copy and modify
## Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
## License
Apache 2.0 License - see LICENSE for details.
## File details

Details for the file `orka_reasoning-0.9.4.tar.gz`.

### File metadata

- Download URL: orka_reasoning-0.9.4.tar.gz
- Size: 440.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | edd30328a83267d1bc9191d08b7b9e9b515e2cc55328d2f677d2026084e02c8d |
| MD5 | c3fd4f442dacc3960eeb35ff20301d0d |
| BLAKE2b-256 | d3c2d02ebed33447b72117c86b6cc14df22c0b43296690142236a69ace7e9f7d |
## File details

Details for the file `orka_reasoning-0.9.4-py3-none-any.whl`.

### File metadata

- Download URL: orka_reasoning-0.9.4-py3-none-any.whl
- Size: 422.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.13

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | 25a27cf86d804f0cc1b782b31ff5736ce0af676e7cd59e29630aeda54e18d396 |
| MD5 | 617b56fd6b3f4a88f05a01174e1e17c5 |
| BLAKE2b-256 | 20e86c935a70edee9a9f2c7170e425c9b3a2d52996e27c7dc5a91895f806a350 |