Modular agent orchestrator for reasoning pipelines
What OrKa Does
OrKa is a local-first solution that lets you define AI workflows in YAML files instead of writing complex Python code. You describe what you want, such as "search memory, then ask an AI, then save the result", and OrKa handles the execution.
Think of it as a streamlined, open-source alternative to CrewAI or LangChain, but with a focus on:
- Vendor-agnostic model support
- YAML configuration instead of code
- Built-in memory that remembers and forgets intelligently
- Local LLM support for privacy
- Simple setup with Docker
Basic Example
Instead of writing Python code like this:
```python
# Hand-written orchestration code
memory_results = search_memory(query)
if not memory_results:
    web_results = search_web(query)
    answer = llm.generate(web_results + query)
else:
    answer = llm.generate(memory_results + query)
save_to_memory(query, answer)
```
You write a YAML file like this:
```yaml
orchestrator:
  id: simple-qa
  agents: [memory_search, web_search, answer, memory_store]

agents:
  - id: memory_search
    type: memory
    operation: read
    prompt: "Find: {{ input }}"
  - id: web_search
    type: search
    prompt: "Search: {{ input }}"
  - id: answer
    type: local_llm
    model: llama3.2:3b
    prompt: "Answer based on: {{ previous_outputs }}"
  - id: memory_store
    type: memory
    operation: write
    prompt: "Store: {{ input }} -> {{ previous_outputs.answer }}"
```
Installation
```bash
# Install OrKa
pip install orka-reasoning

# Start RedisStack + Backend + UI
# Automatically tries native RedisStack first, then Docker
# UI available at http://localhost:8080
orka-start

# Memory TUI
orka memory watch

# Run a workflow
orka run my-workflow.yml "What is machine learning?"
```
Visual Workflow Builder (OrKa UI)
Don't want to write YAML by hand? Use OrKa UI - a drag-and-drop visual editor:
Automatic Start (Recommended)
```bash
# UI automatically starts with orka-start
orka-start
# Access at http://localhost:8080
```
Manual Start (Alternative)
```bash
# Pull and run the UI manually
docker pull marcosomma/orka-ui:latest
docker run -d -p 8080:80 --name orka-ui \
  -e VITE_API_URL_LOCAL=http://localhost:8000/api/run@dist \
  marcosomma/orka-ui:latest
# Access at http://localhost:8080
```
Configuration
```bash
# Skip UI (Redis + Backend only)
export ORKA_DISABLE_UI=true
orka-start

# Use cached Docker image (faster startup)
export ORKA_UI_SKIP_PULL=true
orka-start
```
🆕 JSON Input Support
OrKa now supports JSON input for advanced workflows and structured use cases!
Why use JSON input?
- Pass complex data (objects, arrays, clinical records, etc.) as input to your workflow.
- Enable dynamic prompts and agents that access specific input fields via `{{ input.field }}`.
- Perfect for use cases like medical assistants, document automation, and multi-step workflows with structured data.
How to use
1. Prepare a JSON file, e.g. `input.json`:

```json
{
  "patient": {
    "name": "Fido",
    "species": "dog",
    "symptoms": ["vomiting", "lethargy"],
    "age": 7
  },
  "history": "No previous major illnesses."
}
```

2. Pass the file as input to the workflow using the `--json-input` flag before the `run` command:

```bash
orka --json-input run my-workflow.yml input.json
```

Or pass inline JSON:

```bash
orka --json-input run my-workflow.yml '{"foo": 123, "bar": "baz"}'
```

3. In your YAML, access fields using Jinja2 syntax:

```yaml
prompt: "Patient: {{ input.patient.name }}, Symptoms: {{ input.patient.symptoms }}"
```

Note: If you use the `--json-input` flag, any plain text input is ignored and only the JSON is used.
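To make the dotted-path lookup concrete, here is a toy renderer showing how a placeholder like `{{ input.patient.name }}` resolves against nested JSON. This is an illustrative sketch only; OrKa itself uses real Jinja2 templates.

```python
import re

def render(template: str, context: dict) -> str:
    """Resolve {{ dotted.path }} placeholders against a nested dict."""
    def lookup(match: re.Match) -> str:
        value = context
        for part in match.group(1).strip().split("."):
            value = value[part]  # descend one level per dotted segment
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

payload = {"input": {"patient": {"name": "Fido", "species": "dog"}}}
print(render("Patient: {{ input.patient.name }} ({{ input.patient.species }})", payload))
# → Patient: Fido (dog)
```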
For real-world examples, see the examples/ folder and the documentation.
OrKa UI features:
- 🎨 Drag-and-drop workflow builder
- 🔧 Visual node configuration
- 📤 One-click YAML export
- 🚀 Built-in workflow execution
- 📚 Example workflow library
📖 Read the full OrKa UI documentation →
What orka-start Provides
When you run `orka-start`, it automatically sets up:
- RedisStack (memory backend) - tries native first, then Docker
- OrKa Backend API (port 8000) - workflow execution engine
- OrKa UI (port 8080) - visual workflow builder (if Docker available)
RedisStack Setup:
- Tries native RedisStack (if installed on your system)
- Falls back to Docker (if Docker is running)
- Shows install instructions (if neither is available)
Choose your preferred method:
- Docker (easiest): just have Docker running; `orka-start` handles everything
- Native (no Docker needed):
  - macOS: `brew install redis-stack`
  - Ubuntu: `sudo apt install redis-stack-server`
  - Windows: download from redis.io
How It Works
1. Agent Types
OrKa provides several agent types you can use in your workflows:
- `memory` - Read from or write to persistent memory
- `local_llm` - Use local models (Ollama, LM Studio)
- `openai-*` - Use OpenAI models
- `search` - Web search
- `router` - Conditional branching
- `fork` / `join` - Parallel processing
- `loop` - Iterative workflows
- `plan_validator` - Validate and critique proposed execution paths
- `graph_scout` - [BETA] Find the best path for workflow execution
2. Memory System
OrKa includes a memory system that:
- Stores conversations and facts
- Searches semantically (finds related content, not just exact matches)
- Automatically forgets old, unimportant information
- Uses Redis for fast retrieval
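OrKa's exact decay parameters are not shown here, but the "remembers and forgets intelligently" behavior can be illustrated with a simple exponential half-life rule (the function name, half-life, and cutoff below are hypothetical, not OrKa's actual configuration):

```python
def retention_score(importance: float, age_hours: float,
                    half_life_hours: float = 24.0) -> float:
    """Importance halves every half_life_hours; entries that decay
    below some cutoff become candidates for forgetting."""
    return importance * 0.5 ** (age_hours / half_life_hours)

# After two days, a minor fact has faded while an important one persists.
print(retention_score(0.2, 48.0))  # low-importance memory, now tiny
print(retention_score(0.9, 48.0))  # high-importance memory, still above cutoff
```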
3. Workflow Execution
When you run `orka run workflow.yml "input"`, OrKa:
- Reads your YAML configuration
- Creates the agents you defined
- Runs them in the order you specified
- Passes outputs between agents
- Returns the final result
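The steps above amount to a sequential loop that threads each agent's output into a shared context. A minimal sketch (the agent callables here are stand-ins, not OrKa's real agent classes):

```python
from typing import Callable

def run_workflow(agents: list[tuple[str, Callable]], user_input: str) -> str:
    """Run agents in order, collecting each output under its id so
    later agents can reference previous_outputs, then return the last result."""
    previous_outputs: dict[str, str] = {}
    result = user_input
    for agent_id, agent_fn in agents:
        result = agent_fn(user_input, previous_outputs)
        previous_outputs[agent_id] = result
    return result

agents = [
    ("memory_search", lambda q, prev: f"memory hits for {q!r}"),
    ("answer", lambda q, prev: f"answer using {prev['memory_search']}"),
]
print(run_workflow(agents, "What is ML?"))
```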
4. Local LLM Support
OrKa works with local models through:
- Ollama - `ollama pull llama3.2`, then use `provider: ollama` in your agent config
- LM Studio - point `provider: lm_studio` at your local API endpoint
- Any OpenAI-compatible API
📚 Complete Agent & Node Reference
🎯 NEW: Comprehensive Documentation for Every Agent, Node & Tool →
Detailed documentation for all agent types, control flow nodes, and tools:
- 🤖 7 LLM Agents - OpenAI, Local LLM, Binary, Classification, Validation, PlanValidator
- 💾 2 Memory Agents - Reader & Writer with 100x faster HNSW indexing
- 🔀 6 Control Flow Nodes - Router, Fork/Join, Loop, Failover, GraphScout
- 🔧 2 Search Tools - DuckDuckGo, RAG
Each with working examples, parameters, best practices, and troubleshooting!
Common Patterns
Memory-First Q&A
```yaml
# Check memory first, search the web if nothing is found
agents:
  - id: check_memory
    type: memory
    operation: read
  - id: binary_agent
    type: local_llm
    prompt: |
      Given this memory {{ get_agent_response('check_memory') }} and this input {{ input }},
      is a web search required?
      Answer only with 'true' or 'false'.
  - id: route_decision
    type: router
    decision_key: 'binary_agent'
    routing_map:
      "true": [web_search, answer_from_web]
      "false": [answer_from_memory]
```
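Router semantics boil down to a dictionary dispatch: the decision agent's output selects which branch of agents runs next. A sketch with a generic routing map (not OrKa's internals):

```python
def route(decision_output: str, routing_map: dict[str, list[str]]) -> list[str]:
    """Normalize the deciding agent's output and look up the branch to run."""
    return routing_map[decision_output.strip().lower()]

routing_map = {"true": ["path_a"], "false": ["path_b", "path_c"]}
print(route("True", routing_map))   # → ['path_a']
print(route("false", routing_map))  # → ['path_b', 'path_c']
```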
Parallel Processing
```yaml
# Analyze sentiment and toxicity simultaneously
agents:
  - id: parallel_analysis
    type: fork
    targets:
      - [sentiment_analyzer]
      - [toxicity_checker]
  - id: combine_results
    group: parallel_analysis
    type: join
```
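The fork/join pattern can be sketched in plain Python: run both branches concurrently, then join their results into one mapping for the next agent. The analyzer functions are stand-ins for real model calls, not OrKa's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def sentiment_analyzer(text: str) -> str:
    return "positive"  # stand-in for a real sentiment model

def toxicity_checker(text: str) -> str:
    return "non-toxic"  # stand-in for a real toxicity model

def fork_join(text: str) -> dict[str, str]:
    """Fork: submit both branches concurrently. Join: gather results by id."""
    with ThreadPoolExecutor() as pool:
        futures = {
            "sentiment_analyzer": pool.submit(sentiment_analyzer, text),
            "toxicity_checker": pool.submit(toxicity_checker, text),
        }
        return {name: f.result() for name, f in futures.items()}

print(fork_join("great product!"))
```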
Iterative Improvement
```yaml
# Keep improving until the quality threshold is met
agents:
  - id: improvement_loop
    type: loop
    max_loops: 5
    score_threshold: 0.85
    internal_workflow:
      agents: [analyzer, scorer]
```
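The loop node's contract can be sketched as: rerun the internal workflow until the score meets the threshold or `max_loops` is exhausted. A toy illustration (the improve/score callables are hypothetical placeholders for the internal workflow):

```python
from typing import Callable

def improvement_loop(draft: str, improve: Callable, score: Callable,
                     max_loops: int = 5, score_threshold: float = 0.85) -> str:
    """Iterate: stop early once score(draft) reaches the threshold."""
    for _ in range(max_loops):
        if score(draft) >= score_threshold:
            break
        draft = improve(draft)
    return draft

# Toy scorer: quality grows with each revision pass.
result = improvement_loop("v0",
                          improve=lambda d: d + "+",
                          score=lambda d: 0.3 + 0.2 * d.count("+"))
print(result)  # → v0+++
```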
Comparison to Alternatives
| Feature | OrKa | LangChain | CrewAI |
|---|---|---|---|
| Configuration | YAML files | Python code | Python code |
| Memory | Built-in with decay | External/manual | External/manual |
| Local LLMs | First-class support | Via adapters | Limited |
| Parallel execution | Native fork/join | Manual threading | Agent-based |
| Learning | Automatic memory management | Manual | Manual |
Quick Start Examples
1. Simple Q&A with Memory
```bash
# Copy example
cp examples/simple_memory_preset_demo.yml my-qa.yml

# Run it
orka run my-qa.yml "What is artificial intelligence?"
```
2. Web Search + Memory
```bash
# Copy example
cp examples/person_routing_with_search.yml web-qa.yml

# Run it
orka run web-qa.yml "Latest news about quantum computing"
```
3. Local LLM Chat
```bash
# Start Ollama
ollama pull llama3.2

# Copy example
cp examples/multi_model_local_llm_evaluation.yml local-chat.yml

# Run it
orka run local-chat.yml "Explain machine learning simply"
```
Documentation
📚 Documentation Index → - Start Here!
Complete documentation hub with organized guides, tutorials, and references for all OrKa features.
Quick links:
- 📘 Quickstart - Get running in 5 minutes
- 🎯 Agent & Node Reference - Every agent, node & tool documented
- 🧠 Memory System - Intelligent memory configuration
- ⚙️ YAML Configuration - Complete workflow reference
- 🧭 GraphScout Agent - Dynamic routing system
- 📋 Examples - 50+ ready-to-use workflow templates
Getting Help
- GitHub Issues - Bug reports and feature requests
- Documentation - Full documentation
- Examples - Working examples you can copy and modify
Contributing
We welcome contributions! See CONTRIBUTING.md for guidelines.
License
Apache 2.0 License - see LICENSE for details.
Download files
Source Distribution
Built Distribution
File details
Details for the file orka_reasoning-0.9.13.tar.gz.
File metadata
- Download URL: orka_reasoning-0.9.13.tar.gz
- Upload date:
- Size: 574.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `40d19f4eb66bd2913814b35532b6e77c03a0275d32d78f3fe99073b0393f3159` |
| MD5 | `6d05d4d041ef2bc770ccff8dbda9bb9d` |
| BLAKE2b-256 | `969a46e78f4aaee98ff01fdee9e9e4b43772666e3a98e5c8e461541a9c4a989c` |
File details
Details for the file orka_reasoning-0.9.13-py3-none-any.whl.
File metadata
- Download URL: orka_reasoning-0.9.13-py3-none-any.whl
- Upload date:
- Size: 600.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.12
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `6c9e59c58a1374c1310f582e5ee0762717523889166f1b543c509b0036070889` |
| MD5 | `3e26197a1c19e66b4f26e9cf45891cc2` |
| BLAKE2b-256 | `1503a70c73d40b345508b6ca78137cf6fe7947da368cb98ddbd5870140bf0d61` |