# Redis Agent Kit

Reusable infrastructure for building AI agents with Redis.

**Production-ready agent infrastructure with Redis.** RAK separates agent execution from your API, giving you durable tasks that survive failures, workers that scale independently, and visibility into every step.
## Why RAK?
| Problem | RAK Solution |
|---|---|
| Agent failures are silent and unrecoverable | Persistent task state with automatic retry |
| Scaling agents means scaling your whole API | Decouple APIs from agent workers — scale each independently |
| No visibility into long-running agent work | Real-time progress tracking and status |
| Agent needs human input mid-execution | Pause, collect input, resume seamlessly |
| Agents don't remember or learn | Use conversation history and long-term memory |
## Quick Start

```shell
pip install redis-agent-kit[all]
```
1. **Write your agent** in any framework — a simple async function:

   ```python
   # agent.py
   async def my_agent(task_id, thread_id, message, context):
       # Use any framework: OpenAI, LangChain, LangGraph, etc.
       return {"answer": f"Processed: {message}"}
   ```
2. **Wrap it with AgentKit:**

   ```python
   # server.py
   from redis_agent_kit import AgentKit
   from redis_agent_kit.api import create_app

   from agent import my_agent

   kit = AgentKit(agent_callable=my_agent)  # Uses RAK_REDIS_URL or localhost:6379
   app = create_app(kit=kit)
   ```
3. **Run worker + server:**

   ```shell
   rak worker --name my_agent --tasks agent:my_agent   # Terminal 1
   uvicorn server:app                                  # Terminal 2
   ```
4. **Invoke your agent:**

   ```shell
   curl -X POST http://localhost:8000/tasks \
     -H "Content-Type: application/json" \
     -d '{"message": "What is Redis?"}'
   # Returns: {"task_id": "...", "thread_id": "...", "status": "queued"}

   curl http://localhost:8000/tasks/{task_id}
   # Returns: {"status": "done", "result": {"answer": "..."}}
   ```
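Because task creation returns immediately with status `queued`, clients typically poll `GET /tasks/{task_id}` until the task reaches a terminal state. A minimal sketch of that polling loop, with a `fetch_status` callable standing in for the HTTP call (illustrative, not part of RAK's API):

```python
import time

def poll_task(fetch_status, interval=0.5, timeout=30.0):
    """Poll until the task reaches a terminal state ("done" or "failed").

    fetch_status() should return a dict shaped like the GET /tasks/{task_id}
    response shown above, e.g. {"status": ..., "result": ...}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = fetch_status()
        if task["status"] in ("done", "failed"):
            return task
        time.sleep(interval)
    raise TimeoutError("task did not finish in time")

# Canned status sequence standing in for repeated GET /tasks/{task_id} calls:
responses = iter([
    {"status": "queued"},
    {"status": "in_progress"},
    {"status": "done", "result": {"answer": "Redis is an in-memory data store."}},
])
final = poll_task(lambda: next(responses), interval=0)
print(final["status"])  # done
```

In a real client, `fetch_status` would wrap an HTTP GET (e.g. with `httpx` or `requests`); for lower latency, prefer the SSE stream described below under Real-time streaming.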
## How It Works
Your agent runs inside a task. Each task is one invocation of your agent with:
- Status tracking — queued → in_progress → done/failed
- Progress updates — emit messages as work happens
- Result/error storage — persist outcomes in Redis
- Conversation context — tasks belong to threads for multi-turn chat
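The lifecycle above can be sketched as a small state machine. This is an illustrative model of the described behavior, not RAK's actual storage schema (a real deployment persists this state in Redis):

```python
# Legal status transitions: queued -> in_progress -> done/failed
VALID_TRANSITIONS = {
    "queued": {"in_progress"},
    "in_progress": {"done", "failed"},
}

class Task:
    def __init__(self, task_id, thread_id):
        self.task_id = task_id
        self.thread_id = thread_id  # tasks belong to a thread (conversation)
        self.status = "queued"
        self.progress = []          # progress messages emitted as work happens
        self.result = None
        self.error = None

    def transition(self, new_status):
        if new_status not in VALID_TRANSITIONS.get(self.status, set()):
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

task = Task("t-1", "thread-1")
task.transition("in_progress")
task.progress.append("calling the model...")
task.result = {"answer": "42"}
task.transition("done")
print(task.status)  # done
```

Modeling the terminal states explicitly is what makes failures recoverable: a task found `in_progress` after a worker crash can be retried, while `done`/`failed` tasks are never re-run.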
## Memory
RAK provides conversation history and long-term memory:
```python
async def my_agent(ctx):
    # Add messages to conversation history
    await ctx.memory.add_message("user", ctx.message)

    # Get recent conversation
    messages = await ctx.memory.get_messages(limit=10)

    # Search long-term memories
    relevant = await ctx.memory.search("user preferences")

    # Explicitly store important information
    await ctx.memory.create_memory("User prefers dark mode")

    response = generate_response(messages, relevant)
    await ctx.memory.add_message("assistant", response)
    return {"response": response}
```
Memory is enabled by default; disable it with `RAK_MEMORY__ENABLED=false`. Results support multiple formats: `messages.markdown()`, `messages.json()`, or `messages.dict()`.
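To make the two memory layers concrete, here is a toy, self-contained model of them. RAK's real memory is backed by Redis and its `search` uses semantic retrieval; a simple keyword-overlap score stands in for that here, so every name below is illustrative only:

```python
# Toy model of conversation history (working memory) + long-term memories.
class ToyMemory:
    def __init__(self):
        self.messages = []   # conversation history
        self.memories = []   # explicitly stored long-term facts

    def add_message(self, role, content):
        self.messages.append({"role": role, "content": content})

    def get_messages(self, limit=10):
        return self.messages[-limit:]

    def create_memory(self, text):
        self.memories.append(text)

    def search(self, query, top_k=3):
        # Keyword overlap as a stand-in for vector similarity search.
        q = set(query.lower().split())
        scored = [(len(q & set(m.lower().split())), m) for m in self.memories]
        return [m for score, m in sorted(scored, reverse=True) if score > 0][:top_k]

mem = ToyMemory()
mem.add_message("user", "Please use dark mode")
mem.create_memory("User prefers dark mode")
mem.create_memory("User's favorite database is Redis")
print(mem.search("dark mode preferences"))  # ['User prefers dark mode']
```

The split matters for agents: recent messages give short-term conversational context, while searched memories surface durable facts from past sessions.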
## Real-time streaming
Push task progress and LLM tokens to clients over Server-Sent Events, backed by Redis Pub/Sub:
```python
from redis_agent_kit import AgentKit, StreamConfig
from redis_agent_kit.api import create_app

stream_config = StreamConfig(enabled=True)
kit = AgentKit(agent_callable=my_agent, stream_config=stream_config)
app = create_app(kit=kit, stream_config=stream_config)
```

On the client, consume the stream with `EventSource`:

```js
const es = new EventSource(`/tasks/${taskId}/stream`);
es.addEventListener('token', (e) => process.stdout.write(JSON.parse(e.data).message));
es.addEventListener('done', (e) => { console.log(JSON.parse(e.data).result); es.close(); });
```
Supports per-task, per-session, and global channel scopes. See the Streaming guide for token streaming, event filtering, and replay.
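The named events the client listens for (`token`, `done`) follow the standard `text/event-stream` framing: an `event:` field selects the listener, a `data:` field carries the JSON payload, and a blank line terminates the frame. A minimal sketch of that framing (illustrative, not RAK's server code):

```python
import json

def sse_frame(event, data):
    """Frame a payload as a Server-Sent Event per the text/event-stream format.

    EventSource listeners (es.addEventListener('token', ...)) dispatch on the
    `event:` field; JSON.parse(e.data) reads the `data:` field.
    """
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("token", {"message": "Redis"})
print(frame)
```

A server bridging Redis Pub/Sub to SSE would frame each published message like this before writing it to the open HTTP response.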
## Documentation
- Tutorial — Build a complete agent from scratch
- Tasks — Status, progress, results
- Threads — Conversation management
- Memory — Working and long-term memory
- Streaming — Real-time SSE and token streaming
- Middleware — RAG, thread history, auto-emit
- Protocols — A2A, ACP, MCP exposure
- Input Handling — Pause for user input
- Pipelines — Ingest and vectorize content
- CLI | API — Reference
## License
MIT
## File details

Details for the file `redis_agent_kit-0.1.0.tar.gz`.

### File metadata
- Download URL: redis_agent_kit-0.1.0.tar.gz
- Upload date:
- Size: 467.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.13
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `30fbd480f0ca73a09a1770f6cf007b190e8b108f17a0cacd847bfd56ddc12bdc` |
| MD5 | `98abe956b58bcdadede7b7adeecc6e81` |
| BLAKE2b-256 | `8052b02f07747842b615d4cf74801f3c30a9c5653d098967342d07b1b97ae8ed` |
## File details

Details for the file `redis_agent_kit-0.1.0-py3-none-any.whl`.

### File metadata
- Download URL: redis_agent_kit-0.1.0-py3-none-any.whl
- Upload date:
- Size: 121.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.7.13
### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `0771d52f8876ad7d982c77813939f87b6abb203cdc4edce38ef83cb431244368` |
| MD5 | `240fcf2d9dedac6e4057b538eb71d05f` |
| BLAKE2b-256 | `a90fe6acdd09ed2299590e3598efcf1996f927fffa660150882243f255a4ad23` |