# Langclaw

Multi-channel AI agent framework, the LangChain way. Production-ready, built on LangChain 1.0, LangGraph, and deepagents.

Repository: github.com/tisu19021997/langclaw
Langclaw is a Python framework for building production-grade, multi-channel AI agent systems — with RBAC, scheduled tasks, persistent memory, subagent delegation, and a pluggable tool ecosystem — on top of LangChain, LangGraph, and deepagents.
FastAPI gave web developers a declarative, decorator-driven way to build APIs. Langclaw brings that same feeling to multi-channel agentic systems. Define tools, roles, subagents, and channels on a single app object — langclaw handles the wiring, middleware, message routing, and state persistence so you can focus on what your agent actually does.
## Why Use Langclaw
- Framework, not a fork: `pip install langclaw` and build on top of it — like Flask/FastAPI for agentic systems. No repo cloning, no boilerplate.
- Multi-channel from day one: Telegram, Discord, WebSocket out of the box. Add custom channels with a single `app.add_channel()` call.
- Declarative RBAC: `app.role("analyst", tools=["*"])` — one line to define who can use what. Permissions are enforced as middleware before the LLM sees anything.
- Subagent delegation: Register specialist subagents that run in isolated contexts. The main agent delegates via a built-in `task` tool; results flow back cleanly or stream directly to the channel.
- Scheduled jobs: Users can ask the agent to schedule recurring tasks. Cron jobs publish to the same message bus and flow through the same pipeline as user messages.
- Pluggable everything: Message bus (asyncio / RabbitMQ / Kafka), checkpointer (SQLite / Postgres), LLM providers — swap backends via config, not code changes.
- Middleware pipeline: Content filtering, PII redaction, rate limiting, and RBAC run as composable middleware before every LLM call.
- Built on LangChain + LangGraph: Not a wrapper — langclaw compiles down to a real LangGraph `CompiledStateGraph`. Bring any LangChain tool, model, or integration.
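The decorator-driven registration style described above can be sketched with a tiny stand-alone registry. This is a plain-Python illustration of the pattern, not langclaw's internals; `MiniApp` and its attributes are hypothetical names:

```python
import asyncio
from typing import Awaitable, Callable

class MiniApp:
    """Toy stand-in for the kind of registry a Langclaw-style app keeps."""

    def __init__(self) -> None:
        # Tool name -> async callable, populated by the decorator below.
        self.tools: dict[str, Callable[..., Awaitable[str]]] = {}

    def tool(self):
        def register(fn):
            self.tools[fn.__name__] = fn  # record under the function's name
            return fn                      # leave the function usable as-is
        return register

app = MiniApp()

@app.tool()
async def greet(name: str) -> str:
    return f"Hello, {name}!"

print(asyncio.run(app.tools["greet"]("world")))  # Hello, world!
```

The decorator only records the function in a lookup table; the framework can later expose everything in `app.tools` to the agent without the developer writing any wiring code.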
## Hello World
```python
from langclaw import Langclaw

app = Langclaw()

@app.tool()
async def greet(name: str) -> str:
    """Say hello to someone."""
    return f"Hello, {name}!"

if __name__ == "__main__":
    app.run()
```
That's it. Langclaw wires up the message bus, checkpointer, channels (from your `.env`), and middleware — then starts listening.
## Real-World Example
Here's a research assistant with custom tools, subagent delegation, RBAC, lifecycle hooks, and a slash command — all on one app object:
```python
from langclaw import Langclaw
from langclaw.gateway.commands import CommandContext

app = Langclaw(
    system_prompt=(
        "## Research Assistant\n"
        "You are a financial research assistant.\n"
        "Check stock prices before answering. For complex questions, "
        "delegate to the deep-researcher subagent."
    ),
)

# -- Custom tool: stock price lookup ------------------------------------------
@app.tool()
async def get_stock_price(ticker: str) -> dict:
    """Fetch the latest quote for a US stock ticker."""
    ...  # httpx call to Yahoo Finance
    return {"ticker": ticker, "price": 182.52, "change_pct": "+1.23%"}

# -- Subagent: deep research in isolated context ------------------------------
app.subagent(
    "deep-researcher",
    description="Multi-step research using web search and synthesis",
    system_prompt="You are a thorough researcher. Search, synthesise, cite.",
    tools=["web_search", "web_fetch"],
    output="channel",  # stream results directly to the user
)

# -- RBAC: who can use what ---------------------------------------------------
app.role("analyst", tools=["*"])
app.role("free", tools=["web_search"])

# -- Command: bypasses the LLM entirely ---------------------------------------
@app.command("watchlist", description="show watchlist prices (no AI)")
async def watchlist_cmd(ctx: CommandContext) -> str:
    return "AAPL: $182.52 | MSFT: $441.20 | NVDA: $135.80"

# -- Lifecycle hooks ----------------------------------------------------------
@app.on_startup
async def setup():
    ...  # open DB connections, HTTP clients, etc.

@app.on_shutdown
async def teardown():
    ...  # clean up resources

if __name__ == "__main__":
    app.run()
```
See examples/ for complete, runnable versions.
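The `app.role(...)` calls in the example declare per-role tool allow-lists, with `"*"` granting everything. Conceptually, enforcement is just a filter applied to the tool set before the LLM ever sees it. A minimal stdlib sketch of that check, assuming shell-style wildcard patterns via `fnmatch` (langclaw's actual matching rules may differ):

```python
from fnmatch import fnmatch

def allowed_tools(role_patterns: list[str], all_tools: list[str]) -> list[str]:
    """Keep only the tools whose name matches at least one role pattern."""
    return [t for t in all_tools if any(fnmatch(t, p) for p in role_patterns)]

tools = ["web_search", "web_fetch", "get_stock_price"]

print(allowed_tools(["*"], tools))           # the full "analyst" tool set
print(allowed_tools(["web_search"], tools))  # the restricted "free" tool set
```

Because the filter runs as middleware, a "free" user's request reaches the model with only `web_search` attached; the model cannot even attempt a disallowed tool call.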
## Message Flow
Every message — whether from a user or a cron job — follows the same path:
```
Channel (Telegram / Discord / WebSocket)
  │
  ├── /command ──▶ CommandRouter ──▶ instant response (no LLM)
  │
  └── message ──▶ InboundMessage ──▶ Message Bus
                        │
                  GatewayManager
                        │
                  SessionManager ──▶ Checkpointer
                        │
                 Middleware Pipeline
      (RBAC → Rate Limit → Content Filter → PII)
                        │
                 LangGraph Agent ──▶ Tools / Subagents
                        │
                 OutboundMessage ──▶ Channel
```
Cron jobs publish InboundMessage to the same bus, flowing through the identical pipeline. Commands bypass everything — they're fast system operations handled before the bus.
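The middleware stage in the diagram is a classic onion pipeline: each layer receives the message, may transform or reject it, and passes it to the next layer, with the agent at the core. A self-contained sketch of that composition (illustrative names only, not langclaw's internal API; a real limiter or redactor would do more than these stubs):

```python
import asyncio
from typing import Awaitable, Callable

Handler = Callable[[str], Awaitable[str]]

def rate_limit(next_h: Handler) -> Handler:
    async def run(msg: str) -> str:
        # A real limiter would count calls per user and short-circuit here.
        return await next_h(msg)
    return run

def pii_redact(next_h: Handler) -> Handler:
    async def run(msg: str) -> str:
        # Toy redaction: scrub one known address before the agent sees it.
        return await next_h(msg.replace("secret@example.com", "[redacted]"))
    return run

async def agent(msg: str) -> str:
    return f"agent saw: {msg}"

# Wrap inner-to-outer so rate_limit runs first, then pii_redact, then the agent.
pipeline: Handler = agent
for layer in (pii_redact, rate_limit):
    pipeline = layer(pipeline)

print(asyncio.run(pipeline("mail secret@example.com")))
# agent saw: mail [redacted]
```

Because cron jobs and user messages enter the same bus, both pass through this identical chain; only `/commands` skip it.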
## Installation
```bash
pip install langclaw
```

With channel and backend extras:

```bash
pip install "langclaw[telegram,postgres,rabbitmq]"

# Or install everything:
pip install "langclaw[all]"
```
Available extras: `telegram`, `discord`, `websocket`, `postgres`, `rabbitmq`, `kafka`, `mcp`, `search`, `gmail`.
## Quick Start
1. Install the framework and a channel plugin (e.g., Telegram):

   ```bash
   pip install "langclaw[telegram]"
   ```

2. Set your environment variables in a `.env` file:

   ```bash
   LANGCLAW__PROVIDERS__OPENAI__API_KEY=sk-...
   LANGCLAW__CHANNELS__TELEGRAM__BOT_TOKEN=123456:ABC-DEF...
   ```

3. Create your app (`app.py`):

   ```python
   from langclaw import Langclaw

   app = Langclaw(
       system_prompt="You are a friendly assistant. Keep answers short and helpful."
   )

   @app.tool()
   async def reverse_text(text: str) -> str:
       """Reverse the given text. Useful for word games and puzzles."""
       return text[::-1]

   if __name__ == "__main__":
       app.run()
   ```

4. Run it:

   ```bash
   python app.py
   ```
## Packages

| Package | Purpose |
|---|---|
| `app.py` | `Langclaw` class — the developer's primary interface (decorators, lifecycle, wiring) |
| `agents/` | LangGraph agent construction, tool wiring, subagent delegation |
| `gateway/` | Channel orchestration (`GatewayManager`), command routing, message dispatch |
| `bus/` | Message bus abstraction — asyncio (dev), RabbitMQ / Kafka (prod) |
| `middleware/` | Request pipeline: RBAC, rate limit, content filter, PII redaction |
| `config/` | Pydantic Settings with `LANGCLAW__` env prefix (nested `__` delimiter) |
| `cron/` | Scheduled jobs via APScheduler v4 |
| `session/` | Maps (channel, user, context) to LangGraph thread IDs |
| `checkpointer/` | Conversation state persistence — SQLite (dev), Postgres (prod) |
| `providers/` | LLM model resolution via `init_chat_model` |
| `cli/` | Typer CLI: `langclaw init`, `langclaw gateway`, `langclaw agent`, `langclaw cron`, `langclaw status` |
## Roadmap

### Shipped

- Subagent delegation — `app.subagent()` registers child agents with isolated context and per-subagent model/tool sets
- Channel-routed subagents — subagents can publish results directly to the originating channel (`output="channel"`)
- Guardrails middleware — `ContentFilterMiddleware` (keyword/regex) and `PIIMiddleware` (redaction) in the built-in stack
- Heartbeat / proactive wake-up — event-driven condition checks that fire messages through the agent pipeline
### Planned

- Multi-agent routing — named agents with distinct models, routed by channel or user intent
- More channels — Slack, WhatsApp, REST API gateway
- Plugin ecosystem — `langclaw-*` tool packs installable via pip
- Observability — OpenTelemetry tracing for the full message flow
- Test coverage — comprehensive tests across all modules
## Contributing

```bash
git clone https://github.com/tisu19021997/langclaw.git
cd langclaw
uv sync --group dev
uv run pytest tests/ -v
uv run ruff check . && uv run ruff format .
```
## License
MIT — see LICENSE for details.
## File details: langclaw-0.1.0.tar.gz

- Download URL: langclaw-0.1.0.tar.gz
- Upload date:
- Size: 355.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

| Algorithm | Hash digest |
|---|---|
| SHA256 | `758b9b67d626ab141f918d90ac31552ff38d34d007170bee420745a663c9643e` |
| MD5 | `e7fbd6efbc318e3c821bac839dd81659` |
| BLAKE2b-256 | `f5867789d24822bcc55ed260e4553708516e9fabcf753fc05cad05acd12118b0` |
## File details: langclaw-0.1.0-py3-none-any.whl

- Download URL: langclaw-0.1.0-py3-none-any.whl
- Upload date:
- Size: 113.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.1

| Algorithm | Hash digest |
|---|---|
| SHA256 | `f52d82ad158cedb9fd907c8d59f0462b256965abd2c7b06f5c00445c2627af34` |
| MD5 | `5f93d72b1ace5a2f1589b4957cd5ffcc` |
| BLAKE2b-256 | `e217c1687f2aa4fef725405f88885f51b4e47ee6c0863b093a01707b833c3541` |