# zai-adk-python

English | 简体中文
A Python SDK for building AI agents with LLM integration. Provides a unified interface for multiple LLM providers (OpenAI, Anthropic) with native tool calling, sandboxed execution, skills system, and observability.
## Features
- Unified LLM Interface — Single API for OpenAI and Anthropic with provider-specific optimizations
- Tool System — FastAPI-style dependency injection with automatic JSON schema generation
- Sandboxed Execution — Secure filesystem and command isolation for agent tools
- Skills System — Reusable prompt instructions with hot-reload support
- MCP Integration — First-class support for Model Context Protocol servers
- Event Streaming — Real-time agent events (thinking, tool calls, responses)
- Token Tracking — Built-in cost calculation and usage monitoring
- Observability — Laminar integration for tracing and debugging
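The cost side of token tracking boils down to token counts times per-token prices. A minimal sketch of that arithmetic, with made-up placeholder prices (not real provider pricing, and not the SDK's actual API):

```python
# Illustrative only: prices below are placeholders, not real provider pricing.
PRICES_PER_1M = {"example-model": {"input": 3.00, "output": 15.00}}  # USD per 1M tokens

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES_PER_1M[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = estimate_cost("example-model", 10_000, 2_000)
# 10k input at $3/M plus 2k output at $15/M = $0.03 + $0.03 = $0.06
```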
## Requirements
- Python >= 3.11
- uv for dependency management
## Installation

```bash
# Clone the repository
git clone <repo-url>
cd zai-adk-python

# Install dependencies
uv sync --group dev
```
## Configuration

Create a `.env` file in the project root:

```bash
# Anthropic
ANTHROPIC_BASE_URL=https://open.bigmodel.cn/api/anthropic
ANTHROPIC_API_KEY=your-api-key
ANTHROPIC_MODEL=glm-4.7

# OpenAI (optional)
OPENAI_BASE_URL=https://open.bigmodel.cn/api/paas/v4/
OPENAI_API_KEY=your-api-key
OPENAI_MODEL=glm-4.7
```
## Quick Start

### Basic Agent
```python
import asyncio

from zai_adk import Agent
from zai_adk.llm.anthropic.chat import ChatAnthropic
from zai_adk.tools import tool

@tool("Calculate the sum of two numbers")
async def add(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

async def main():
    agent = Agent(
        llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
        tools=[add],
    )
    response = await agent.query("What is 25 + 17?")
    print(response)

asyncio.run(main())
```
### Streaming Events
```python
from zai_adk.agent import ToolCallEvent, ToolResultEvent, FinalResponseEvent

async for event in agent.query_stream("Help me with a task"):
    match event:
        case ToolCallEvent(tool=name, args=args):
            print(f"[Calling {name}]")
        case ToolResultEvent(tool=name, result=result):
            print(f"[Result from {name}]: {result}")
        case FinalResponseEvent(content=text):
            print(f"[Final]: {text}")
```
### Sandboxed File Operations
```python
from pathlib import Path

from zai_adk.sandbox import SandboxConfig
from zai_adk.tools.builtin.sandbox import bash, fs_read, fs_write

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[bash, fs_read, fs_write],
    sandbox=SandboxConfig(
        kind="local",
        work_dir="./workspace",
        enforce_boundary=True,
    ),
)
```
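Boundary enforcement of this kind is usually path containment: resolve every requested path and reject anything that lands outside `work_dir`. A hypothetical sketch of such a check (the SDK's actual enforcement may differ):

```python
from pathlib import Path

# Hypothetical sketch of sandbox boundary enforcement, not the SDK's code:
# resolve the requested path and refuse anything that escapes work_dir.
def resolve_in_sandbox(work_dir: str, requested: str) -> Path:
    root = Path(work_dir).resolve()
    target = (root / requested).resolve()
    if not target.is_relative_to(root):  # Python 3.9+
        raise PermissionError(f"path escapes sandbox: {requested}")
    return target

safe = resolve_in_sandbox("/tmp/workspace", "notes.txt")
# resolve_in_sandbox("/tmp/workspace", "../../etc/passwd") raises PermissionError
```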
### Skills System
```python
from zai_adk.skills import Skill, SkillsManager
from zai_adk.tools.builtin import skills

skills_manager = SkillsManager()
skills_manager.register(
    Skill(
        name="code-review",
        description="Review code for bugs and security issues",
        instructions="""You are a code reviewer...""",
    )
)

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[skills],
    skills_manager=skills_manager,
)
```
Or use filesystem-based skills with auto-discovery:
```python
agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    tools=[skills],
    skills_dir="./skills",  # Scans for SKILL.md files
)
```
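Auto-discovery can be pictured as a recursive scan for `SKILL.md` files under `skills_dir`. A hypothetical sketch (using the folder name as the skill name is an assumption here, not necessarily how the SDK parses skills):

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of SKILL.md auto-discovery (not the SDK's parser):
# each folder containing a SKILL.md becomes one skill, keyed by folder name.
def discover_skills(skills_dir: str) -> dict[str, str]:
    return {
        f.parent.name: f.read_text()
        for f in Path(skills_dir).rglob("SKILL.md")
    }

# Demo: a skills tree with a single skill folder
with tempfile.TemporaryDirectory() as d:
    skill_dir = Path(d) / "code-review"
    skill_dir.mkdir()
    (skill_dir / "SKILL.md").write_text("You are a code reviewer...")
    found = discover_skills(d)
# found == {"code-review": "You are a code reviewer..."}
```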
## Tool Creation

Use the `@tool` decorator with type hints for automatic schema generation:
```python
from typing import Annotated

from zai_adk.tools import tool, Depends

# Simple tool
@tool("Get current weather")
async def get_weather(location: str, unit: str = "celsius") -> str:
    """Get the weather for a location."""
    return f"Weather in {location}: 22° {unit}"

# With dependency injection
@tool("Search the database")
async def search_db(
    query: str,
    db_conn: Annotated[DbConnection, Depends(get_db)],
) -> list[dict]:
    """Search the database for matching records."""
    return await db_conn.execute(query)
```
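To make "automatic schema generation" concrete, here is a rough sketch of how a signature like `get_weather`'s could map to a JSON schema. This is illustrative plain Python, not the SDK's implementation:

```python
import inspect
from typing import get_type_hints

# Rough illustration of signature -> JSON schema (not the SDK's code).
PY_TO_JSON = {int: "integer", float: "number", str: "string", bool: "boolean"}

def build_schema(func) -> dict:
    hints = get_type_hints(func)
    properties, required = {}, []
    for name, param in inspect.signature(func).parameters.items():
        properties[name] = {"type": PY_TO_JSON.get(hints.get(name), "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default value => required argument
    return {"type": "object", "properties": properties, "required": required}

async def get_weather(location: str, unit: str = "celsius") -> str:
    return f"Weather in {location}: 22° {unit}"

schema = build_schema(get_weather)
# {'type': 'object',
#  'properties': {'location': {'type': 'string'}, 'unit': {'type': 'string'}},
#  'required': ['location']}
```

Because `unit` has a default, it is optional in the schema, while `location` is required.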
### Dependency Injection
```python
from zai_adk.sandbox import get_sandbox
from zai_adk.sandbox.base import Sandbox

@tool("Execute a shell command")
async def bash(
    command: str,
    sandbox: Annotated[Sandbox, Depends(get_sandbox)],
) -> str:
    """Run a command in the sandboxed environment."""
    result = await sandbox.exec(command)
    return result.stdout
```
### Ephemeral Tools

Keep only the last N tool outputs in message history:
```python
from pathlib import Path

@tool("Read a file", ephemeral=1)
async def read_file(path: str) -> str:
    """Only the most recent result is kept in context."""
    return Path(path).read_text()
```
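The effect of `ephemeral=1` can be pictured as pruning older outputs of that tool from the history. A hypothetical sketch of such pruning (the message shape and field names here are assumptions, not the SDK's internals):

```python
# Hypothetical pruning sketch (not the SDK's internals): keep only the
# most recent `limit` results per ephemeral tool, walking newest-first.
def prune_ephemeral(messages: list[dict], limits: dict[str, int]) -> list[dict]:
    seen: dict[str, int] = {}
    kept = []
    for msg in reversed(messages):
        tool = msg.get("tool")
        if tool in limits:
            seen[tool] = seen.get(tool, 0) + 1
            if seen[tool] > limits[tool]:
                continue  # drop an older ephemeral result
        kept.append(msg)
    kept.reverse()
    return kept

history = [
    {"role": "tool", "tool": "read_file", "content": "old contents"},
    {"role": "assistant", "content": "ok"},
    {"role": "tool", "tool": "read_file", "content": "new contents"},
]
pruned = prune_ephemeral(history, {"read_file": 1})
# the older read_file result is dropped; everything else is kept
```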
## Agent Modes

### CLI Mode (Default)

The agent stops automatically when the LLM returns text without tool calls:
```python
agent = Agent(llm=..., tools=[...], mode="cli")
response = await agent.query("List all Python files")
```
### Autonomous Mode

The agent requires an explicit `done` tool call to signal completion:
```python
@tool("Signal task completion")
async def done(message: str) -> str:
    return f"TASK COMPLETE: {message}"

agent = Agent(llm=..., tools=[..., done], mode="autonomous")
```
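The difference between the two modes is the stop condition. A hypothetical sketch of the two checks (not the SDK's actual agent loop):

```python
# Hypothetical stop conditions for the two modes (not the SDK's loop).
def should_stop(mode: str, tool_calls: list[str], text: str) -> bool:
    if mode == "cli":
        # a plain-text reply with no tool calls ends the run
        return not tool_calls and bool(text)
    if mode == "autonomous":
        # only an explicit `done` call ends the run
        return "done" in tool_calls
    raise ValueError(f"unknown mode: {mode}")

assert should_stop("cli", [], "Here are the files...")
assert not should_stop("autonomous", [], "still working")
assert should_stop("autonomous", ["done"], "")
```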
## MCP Integration
```python
from zai_adk.mcp import MCPServerConfig

agent = Agent(
    llm=ChatAnthropic(model="claude-sonnet-4-5-20250929"),
    mcp_servers=[
        MCPServerConfig(
            name="filesystem",
            command="npx",
            args=["-y", "@modelcontextprotocol/server-filesystem", "./workspace"],
        ),
    ],
)
```
## Protocol Support

The SDK supports multiple output protocols through a composable interface:

### AG-UI Protocol
```python
from zai_adk import Agent
from zai_adk.protocols.agui import AGUIProtocol

agent = Agent(llm=llm, tools=[tools])
agui = AGUIProtocol(agent)

async for event in agui.query_stream("Hello"):
    # Handle AG-UI events
    ...
```
### SSE Streaming for HTTP
```python
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

from zai_adk import Agent
from zai_adk.protocols.agui import AGUIProtocol

app = FastAPI()
agent = Agent(llm=llm, tools=[tools])
agui = AGUIProtocol(agent)

@app.post("/chat")
async def chat(message: str):
    async def stream():
        async for sse in agui.query_stream_sse(message):
            yield sse
    return StreamingResponse(stream(), media_type="text/event-stream")
```
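Each yielded `sse` chunk is a Server-Sent Events frame: a `data:` line carrying a payload, terminated by a blank line. A minimal sketch of that framing with a JSON payload (the AG-UI protocol's actual event shapes may differ):

```python
import json

# Minimal SSE framing sketch; the real AG-UI payload shapes may differ.
def to_sse(event: dict) -> str:
    return f"data: {json.dumps(event)}\n\n"

frame = to_sse({"type": "final", "content": "hi"})
# 'data: {"type": "final", "content": "hi"}\n\n'
```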
See `docs/protocols.md` for more details on implementing custom protocols.
## Testing

```bash
# Run all tests
uv run pytest tests/ -v

# Skip tests requiring real LLM API calls
uv run pytest tests/ -v -m "not llm"

# Run specific test
uv run pytest tests/llm/test_anthropic_chat.py::test_anthropic_basic_chat -v
```
## Examples

| Example | Description |
|---|---|
| `claude_code.py` | Claude Code-style tools with sandboxed filesystem |
| `skills_usage.py` | Skills system patterns and hot-reload |
| `mcp_usage.py` | Model Context Protocol integration |
| `todo_usage.py` | Todo list management example |
Run an example:

```bash
python -m examples.claude_code
```
## Architecture

```
zai_adk/
├── agent/          # Agent loop with event streaming
├── llm/            # LLM provider abstraction (OpenAI, Anthropic)
├── tools/          # Tool system with dependency injection
├── sandbox/        # Secure execution environments
├── skills/         # Reusable prompt instructions
├── mcp/            # Model Context Protocol integration
├── protocols/      # Protocol converter base interface
│   └── agui/       # AG-UI protocol implementation
├── tokens/         # Usage tracking and cost calculation
└── observability/  # Laminar tracing integration
```
## License
MIT License