# KayGraph

A context-graph framework and domain-specific language (DSL) for building context-aware, production-ready AI applications.
## New to KayGraph?

Choose your path:

| You are... | Start here |
|---|---|
| Human developer | Follow the 10-minute quickstart below |
| AI coding agent | Load `LLM_CONTEXT_KAYGRAPH_DSL.md` first |
| Task-focused | "I need to build X" → `workbooks/QUICK_FINDER.md` |
| Exploring all examples | Browse 70 workbooks in 16 categories |
| Debugging/stuck | Check Common Patterns & Errors |
## What is KayGraph?

KayGraph is an opinionated DSL for expressing business problems as AI agent pipelines. Think of it as building blocks for AI workflows: a 500-line core that provides powerful abstractions without the bloat.

**Core philosophy:**

- **DSL-first:** Express complex AI workflows declaratively
- **Zero dependencies:** Pure Python standard library (500 lines of core code)
- **Bring your own tools:** Works with any LLM, database, or service
- **Production-ready:** 70 battle-tested examples across 16 categories
## Quick Start

### Installation

```bash
# Using pip
pip install kaygraph

# Or from source
git clone https://github.com/KayOS-AI/KayGraph.git
cd KayGraph
pip install -e .
```
### Your First KayGraph Workflow

```python
from kaygraph import Node, Graph

class AnalyzeNode(Node):
    def prep(self, shared):
        """Phase 1: Read from shared context"""
        return shared.get("text")

    def exec(self, text):
        """Phase 2: Execute logic (LLM call, API, etc.)"""
        return f"Analyzed: {text}"

    def post(self, shared, prep_res, exec_res):
        """Phase 3: Write results back to shared context"""
        shared["result"] = exec_res
        return None  # End of workflow

# Build and run
analyze = AnalyzeNode()
graph = Graph(analyze)
shared = {"text": "Hello KayGraph!"}
graph.run(shared)
print(shared["result"])  # "Analyzed: Hello KayGraph!"
```

That's it! Three phases: `prep()` → `exec()` → `post()`
## For Humans: Learning Path

### 1. Start with the Basics (10 minutes)

```bash
# Try the simplest example
cd workbooks/01-getting-started/kaygraph-hello-world
python main.py
```
### 2. Explore by Use Case

Use the task-based finder to jump to what you need:

`workbooks/QUICK_FINDER.md` - "I need to build..."

- An AI agent → Examples + patterns
- A chatbot → Chat patterns
- A RAG system → Retrieval patterns
- Batch processing → Data pipeline patterns
- A production API → Deployment examples
### 3. Browse All 70 Examples

`workbooks/WORKBOOK_INDEX_CONSOLIDATED.md` - Complete catalog

**16 categories:**

- Getting Started (1)
- Core Patterns (2)
- Batch Processing (5)
- AI Agents (9)
- Workflows (12)
- AI Reasoning (4)
- Chat & Conversation (4)
- Memory Systems (3)
- RAG & Retrieval (1)
- Code Development (2)
- Data & SQL (4)
- Tools Integration (7)
- Production & Monitoring (8)
- UI/UX (4)
- Streaming & Realtime (2)
- Advanced Patterns (2)
## For Coding Agents: DSL Reference

`LLM_CONTEXT_KAYGRAPH_DSL.md` - Complete DSL specification for AI agents

This document contains everything a coding agent needs to:

- Understand the 3-phase node lifecycle
- Build graphs with proper action routing
- Use all node types (Async, Batch, Parallel, Validated, Metrics)
- Follow production patterns
- Avoid common anti-patterns

> **For AI assistants (Claude, GPT-4, etc.):** Load the `LLM_CONTEXT_KAYGRAPH_DSL.md` file to understand KayGraph's domain-specific language and generate production-ready code.
## Core Concepts (5-Minute Overview)

### The 3-Phase Node Lifecycle

Every node follows this pattern:

```python
class MyNode(Node):
    def prep(self, shared):
        """
        Phase 1: READ from shared store
        - Gather data needed for execution
        - Access shared context
        - Return data for exec()
        """
        return shared.get("input_data")

    def exec(self, prep_res):
        """
        Phase 2: EXECUTE logic (NO shared access!)
        - Process data (LLM calls, APIs, etc.)
        - Pure function - can be retried
        - Return results
        """
        return process_data(prep_res)

    def post(self, shared, prep_res, exec_res):
        """
        Phase 3: WRITE to shared store and route
        - Update shared context with results
        - Return action string for routing
        - Return None for default/end
        """
        shared["output"] = exec_res
        return "next_action"  # or None
```

**Why this matters:**

- `prep()` and `post()` have context; `exec()` is pure
- `exec()` can be retried independently (resilience!)
- Clear separation of concerns
### Graph Composition

```python
# Chain nodes with default flow
node1 >> node2 >> node3

# Named actions for branching
decision_node >> ("approve", approval_node)
decision_node >> ("reject", rejection_node)

# Complex workflows
extract >> transform >> ("validate", validator)
validator >> ("success", loader)
validator >> ("failed", error_handler)
```
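To make action-based routing concrete, here is a toy dispatcher, purely illustrative: the `ToyNode` class, its `successors` dict, and the `run()` loop are hypothetical stand-ins, not KayGraph's internals. It shows how the action string a node emits can select which successor runs next.

```python
# Toy sketch: action strings drive routing. Each node maps action names to
# successor nodes; the run loop follows whichever edge matches the action
# the current node emits. (Hypothetical code, not kaygraph's source.)

class ToyNode:
    def __init__(self, name, action=None):
        self.name = name
        self.action = action      # action string this node will emit
        self.successors = {}      # action name -> next node

    def on(self, action, node):
        self.successors[action] = node
        return node

def run(start, log):
    node = start
    while node is not None:
        log.append(node.name)
        node = node.successors.get(node.action)  # no match -> stop

validator = ToyNode("validator", action="failed")
loader = ToyNode("loader")
error_handler = ToyNode("error_handler")
validator.on("success", loader)
validator.on("failed", error_handler)

trace = []
run(validator, trace)
print(trace)  # ['validator', 'error_handler']
```

Because the validator emitted `"failed"`, execution followed the error edge; emitting `"success"` instead would have routed to the loader.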
### Shared Store Pattern

```python
# Simple dictionary for context
shared = {
    "user_id": "123",
    "input": "Analyze this text",
    "history": []
}

# Nodes read and write to it
graph.run(shared)

# Results available after execution
print(shared["analysis_result"])
```
## Key Features

### Node Types

| Type | Use Case | Example |
|---|---|---|
| `Node` | Standard sync operations | API calls, file I/O |
| `AsyncNode` | I/O-bound async operations | Concurrent API calls |
| `BatchNode` | Process iterables | Process 1000 records |
| `ParallelBatchNode` | Concurrent batch processing | Parallel data transforms |
| `ValidatedNode` | Input/output validation | Production pipelines |
| `MetricsNode` | Performance tracking | Monitoring, profiling |
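As a sketch of the batch idea, in a batch-style node, `exec()` runs once per item of the iterable returned by `prep()`. The `ToyBatchNode` below is an illustrative stand-in under that assumption, not the library's actual `BatchNode` implementation.

```python
# Toy sketch of the BatchNode idea: prep() returns an iterable, exec() is
# applied to each item, and post() receives the list of per-item results.
# (Hypothetical code, not kaygraph's source.)

class ToyBatchNode:
    def prep(self, shared):
        return shared.get("records", [])

    def exec(self, record):
        # Called once per record
        return record.upper()

    def post(self, shared, prep_res, exec_res_list):
        shared["processed"] = exec_res_list
        return None

    def run(self, shared):
        items = self.prep(shared)
        results = [self.exec(item) for item in items]
        return self.post(shared, items, results)

shared = {"records": ["a", "b", "c"]}
ToyBatchNode().run(shared)
print(shared["processed"])  # ['A', 'B', 'C']
```

A parallel variant would map `exec()` over the items concurrently instead of in a list comprehension.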
### Production Features

- **Retry logic:** Built-in with `max_retries` and `wait`
- **Fallback handling:** `exec_fallback()` for graceful degradation
- **Validation:** Input/output type checking
- **Metrics:** Execution time, retry counts, success rates
- **Logging:** Comprehensive debug support
- **Context managers:** Resource cleanup
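A minimal sketch of how retry-with-fallback semantics like these typically work: the `run_with_retries` helper below is hypothetical, not KayGraph's implementation, but it illustrates the interplay of `max_retries`, `wait`, and a fallback handler.

```python
import time

# Illustrative retry loop: try exec_fn up to max_retries times, sleeping
# `wait` seconds between attempts; if all attempts fail, fall back to
# exec_fallback (if provided) instead of raising.
# (Hypothetical helper, not kaygraph's source.)

def run_with_retries(exec_fn, prep_res, max_retries=3, wait=0.0,
                     exec_fallback=None):
    last_exc = None
    for _attempt in range(max_retries):
        try:
            return exec_fn(prep_res)
        except Exception as exc:
            last_exc = exc
            if wait:
                time.sleep(wait)
    if exec_fallback is not None:
        return exec_fallback(prep_res, last_exc)
    raise last_exc

calls = {"n": 0}

def flaky(query):
    calls["n"] += 1
    raise RuntimeError("LLM timeout")

result = run_with_retries(flaky, "query", max_retries=2,
                          exec_fallback=lambda q, exc: f"fallback for {q}")
print(result)  # fallback for query
```

Because `exec()` is pure (no shared-store access), retrying it is safe: each attempt sees the same `prep_res`.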
## Example Patterns

### Agent Pattern

```python
# Decision-making loop
think >> analyze >> ("use_tool", tool_node)
analyze >> ("respond", response_node)
tool_node >> think  # Loop back for reasoning
```

### RAG Pattern

```python
# Offline indexing
extract >> chunk >> embed >> store

# Online retrieval
query >> search >> rerank >> generate
```

### Workflow Pattern

```python
# Human-in-the-loop
process >> review >> ("approve", execute)
review >> ("reject", notify)
review >> ("modify", process)  # Loop back
```
## Common Use Cases

| I want to build... | Start here | Combine with |
|---|---|---|
| ChatGPT clone | `chat-memory` | `streaming-llm` + `chat-guardrail` |
| Research assistant | `agent` | `rag` + `tool-search` + `agent-tools` |
| Data pipeline | `workflow` | `batch` + `validated-pipeline` |
| Multi-agent system | `multi-agent` | `supervisor` + `agent-memory` |
| Production API | `production-ready-api` | `metrics-dashboard` + `fault-tolerant` |

See `workbooks/QUICK_FINDER.md` for the complete list.
## Scaffolding Tool

Generate production-ready boilerplate instantly:

```bash
# Generate a basic node
python scripts/kaygraph_scaffold.py node DataProcessor

# Generate an agent
python scripts/kaygraph_scaffold.py agent ResearchBot

# Generate a RAG system
python scripts/kaygraph_scaffold.py rag DocumentQA

# Generate a chat application
python scripts/kaygraph_scaffold.py chat CustomerSupport

# See all templates
python scripts/kaygraph_scaffold.py --help
```

Each template includes:

- Complete working code
- Documentation with TODOs
- `requirements.txt` with optional dependencies
- README with quickstart
## Documentation

### For Developers

- `CLAUDE.md` - Development guide for human developers and AI assistants
- `COMMON_PATTERNS_AND_ERRORS.md` - Avoid common mistakes and follow best practices
- `docs/` - Architecture, patterns, and best practices
- `CHANGELOG.md` - Version history

### For AI Coding Agents

- `LLM_CONTEXT_KAYGRAPH_DSL.md` - Complete DSL reference
- `workbooks/WORKBOOK_INDEX_CONSOLIDATED.md` - All 70 examples, organized
- `COMMON_PATTERNS_AND_ERRORS.md` - Common errors and how to avoid them

### For Quick Tasks

- `workbooks/QUICK_FINDER.md` - Task-based navigation ("I need to build...")
- `workbooks/guides/LLM_SETUP.md` - Set up local LLMs with Ollama
## Why KayGraph?

### The 500-Line Philosophy

KayGraph's core is intentionally 500 lines. This isn't a limitation; it's a feature.

Why?

- You can read and understand the entire framework in one sitting
- No hidden magic - just Python classes and composition
- Easy to debug - it's just your code
- No vendor lock-in - bring your own LLM, database, tools
- Production-ready patterns without framework bloat

When humans can specify the graph, AI agents can automate it.
### Zero Dependencies

The core framework has zero external dependencies. All examples that use LLMs, databases, or other services provide implementation templates - you bring your own tools.

This means:

- Tiny install footprint
- Total control over your stack
- Only pay for what you use
- No dependency hell
## Project Structure

```
KayGraph/
├── kaygraph/                    # Core framework (500 lines!)
│   └── __init__.py              # All abstractions in one file
│
├── workbooks/                   # 70 production examples
│   ├── 01-getting-started/      # Start here
│   ├── 04-ai-agents/            # Agent patterns
│   ├── 09-rag-retrieval/        # RAG systems
│   └── ...                      # 13 more categories
│
├── scripts/                     # Scaffolding tools
│   └── kaygraph_scaffold.py     # Generate boilerplate
│
├── docs/                        # Comprehensive guides
├── tests/                       # Unit tests
│
├── LLM_CONTEXT_KAYGRAPH_DSL.md  # For coding agents
├── CLAUDE.md                    # For developers
└── README.md                    # You are here
```
## Testing & Quality

All 70 workbooks are validated for:

- Valid structure (README.md + main.py)
- Valid Python syntax
- All imports resolving
- 100% pass rate

Run validation yourself:

```bash
python tasks/workbook-testing/validate_all_workbooks.py
```
## Contributing

We welcome contributions! Please see our Contributing Guide for details.

### Quick Contribution Ideas

- Fix bugs in examples
- Improve documentation
- Add new workbook examples
- Expand test coverage
- Enhance scaffolding templates
## Community & Support

- **Bug reports:** GitHub Issues
- **Discussions:** GitHub Discussions
- **Email:** [Your contact email]
## License

MIT License - see LICENSE for details.
## Quick Reference Card

```python
# Node lifecycle
class MyNode(Node):
    def prep(self, shared):        # 1. Read context
        return data
    def exec(self, prep_res):      # 2. Execute (pure!)
        return result
    def post(self, shared, prep_res, exec_res):  # 3. Write & route
        shared["result"] = exec_res
        return "action"  # or None

# Graph building
node1 >> node2              # Default flow
node1 >> ("action", node2)  # Named action
node1 >> node2 >> node3     # Chain

# Running
graph = Graph(start_node)
shared = {"input": "data"}
graph.run(shared)
print(shared["output"])
```
Built with ❤️ by the KayOS Team

Ready to build? Start with `workbooks/01-getting-started/kaygraph-hello-world`
## Download Files
### Source distribution: kaygraph-0.3.1.tar.gz

**File metadata**

- Download URL: kaygraph-0.3.1.tar.gz
- Size: 1.9 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.10

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `fd0f91e56047813f5af4f0e143af8bbb5804eab933a09b6ed5f634a3828f0d51` |
| MD5 | `94c45d211e6a5056fd88bec7a69de48c` |
| BLAKE2b-256 | `371aaae8bc7b01d2534ae6986f143af2075329ec01633a15df11b9f2ce25e4fd` |
### Built distribution: kaygraph-0.3.1-py3-none-any.whl

**File metadata**

- Download URL: kaygraph-0.3.1-py3-none-any.whl
- Size: 98.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.10

**File hashes**

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f39ccdfb737c2d5e3162045fb8dd4bc63e346987eaf052859ae6eb9d45cfa81f` |
| MD5 | `624712c144e5695dd67901f571b9ee36` |
| BLAKE2b-256 | `dd310a15c9eb3ed8df141df58cabc3b99b48f5cbeaaabea9f347cda5c8a003e7` |