Python LLM agent framework with multi-model support (OpenAI, Anthropic, Gemini, DeepSeek), tool calling, session management, A2A/MCP/AGUI protocols, and FastAPI server integration
tRPC-Agent-Python
A production-grade Agent framework deeply integrated with the Python AI ecosystem.
tRPC-Agent-Python provides an end-to-end foundation for agent building, orchestration, tool integration, session and long-term memory, service deployment, and observability, so you can ship reliable and extensible AI applications faster.
Why Choose tRPC-Agent-Python
- Multi-paradigm agent orchestration: Built-in orchestration supports ChainAgent / ParallelAgent / CycleAgent / TransferAgent, with GraphAgent for graph-based orchestration.
- Graph orchestration capability (GraphAgent): Use a DSL to orchestrate Agent / Tool / MCP / Knowledge / CodeExecutor in one unified flow.
- Efficient integration with Python AI ecosystems: agent ecosystem extensions (claude-agent-sdk / LangGraph, etc.), tool ecosystem extensions (mcp, etc.), knowledge ecosystem extensions (LangChain, etc.), model ecosystem extensions (LiteLLM, etc.), and memory ecosystem extensions (Mem0, etc.).
- Agent ecosystem extensions: Supports LangGraphAgent / ClaudeAgent / TeamAgent (Agno-like).
- Tool ecosystem extensions: FunctionTool / file tools / MCPToolset / LangChain Tool / Agent-as-Tool.
- Complete memory capability (Session / Memory): Session manages messages and state within a single session, while Memory manages cross-session long-term memory and personalization. Persistence supports InMemory / Redis / SQL; Memory also supports Mem0.
- Production-grade knowledge capability: Built on LangChain components with first-class RAG support.
- CodeExecutor extension capability: Supports local / container executors for code execution and task grounding.
- Skills extension capability: Supports SKILL.md-based skill systems for reusable capabilities and dynamic tooling.
- Connect to multiple LLM providers: OpenAI-compatible / Anthropic / LiteLLM routing.
- Serving and observability: Expose HTTP / A2A / AG-UI services through FastAPI, with built-in OpenTelemetry tracing.
- trpc-claw (OpenClaw-like personal agent): Built on nanobot, tRPC-Agent ships trpc-claw so you can quickly build an OpenClaw-like personal AI agent with Telegram, WeCom, and other channel support.
Use Cases
- Intelligent customer support and knowledge QA (RAG + session memory)
- Code generation and engineering automation (ClaudeAgent)
- Code execution and automated task grounding (CodeExecutor)
- Agent Skills for reusable capabilities
- Multi-role collaborative workflows (TeamAgent / multi-agent)
- Cross-protocol agent service integration (A2A / AG-UI)
- MCP tool protocol integration and tool ecosystem expansion
- Unified gateway access and protocol conversion
- Component-based workflow orchestration using GraphAgent
- Reusing existing LangGraph workflows in this runtime
- Build an OpenClaw-like personal AI agent quickly with trpc-claw
Table of Contents
- tRPC-Agent-Python
- Why Choose tRPC-Agent-Python
- Use Cases
- Table of Contents
- Quick Start
- trpc-claw Usage
- Documentation
- Examples
- 1. Getting Started and Basic Agents
- 2. Preset Multi-Agent Orchestration
- 3. Team Collaboration
- 4. Graph Orchestration
- 5. Agent Ecosystem Extensions
- 6. Tools and MCP
- 7. Skills
- 8. CodeExecutor
- 9. Session, Memory, and Knowledge
- 10. Serving and Protocols
- 11. Filters and Execution Control
- 12. Advanced LlmAgent Capabilities
- 13. LlmAgent Tool Calling and Interaction
- Architecture Overview
- Contributing
- Acknowledgements
Quick Start
Prerequisites
- Python 3.10+ (Python 3.12 recommended)
- An available model API key (OpenAI-compatible / Anthropic, or routed via LiteLLM)
Installation
pip install trpc-agent-py
Install optional capabilities as needed:
pip install 'trpc-agent-py[a2a,ag-ui,knowledge,agent-claude,mem0,langfuse]'
Develop Weather Agent
import asyncio
import os
import uuid

from trpc_agent_sdk.agents import LlmAgent
from trpc_agent_sdk.models import OpenAIModel
from trpc_agent_sdk.runners import Runner
from trpc_agent_sdk.sessions import InMemorySessionService
from trpc_agent_sdk.tools import FunctionTool
from trpc_agent_sdk.types import Content, Part


async def get_weather_report(city: str) -> dict:
    return {"city": city, "temperature": "25°C", "condition": "Sunny", "humidity": "60%"}


async def main():
    model = OpenAIModel(
        model_name=os.environ["TRPC_AGENT_MODEL_NAME"],
        api_key=os.environ["TRPC_AGENT_API_KEY"],
        base_url=os.environ.get("TRPC_AGENT_BASE_URL", ""),
    )
    agent = LlmAgent(
        name="assistant",
        description="A helpful assistant",
        model=model,
        instruction="You are a helpful assistant.",
        tools=[FunctionTool(get_weather_report)],
    )
    session_service = InMemorySessionService()
    runner = Runner(app_name="demo_app", agent=agent, session_service=session_service)

    user_id = "demo_user"
    session_id = str(uuid.uuid4())
    user_content = Content(parts=[Part.from_text(text="What's the weather in Beijing?")])

    async for event in runner.run_async(user_id=user_id, session_id=session_id, new_message=user_content):
        if not event.content or not event.content.parts:
            continue
        for part in event.content.parts:
            if part.text and event.partial:
                print(part.text, end="", flush=True)
            elif part.function_call:
                print(f"\n🔧 [{part.function_call.name}({part.function_call.args})]", flush=True)
            elif part.function_response:
                print(f"📊 [{part.function_response.response}]", flush=True)
    print()


if __name__ == "__main__":
    asyncio.run(main())
Run the Agent
export TRPC_AGENT_API_KEY=xxx
export TRPC_AGENT_BASE_URL=xxxx
export TRPC_AGENT_MODEL_NAME=xxxx
python quickstart.py
trpc-claw Usage
tRPC-Agent ships trpc-claw (trpc_agent_cmd openclaw), built on nanobot, so you can quickly build an OpenClaw-like personal AI agent. Start it with a single command and it runs 24/7 — chat through Telegram, WeCom, or any other IM, or use it locally via CLI / UI.
For full configuration and advanced features, see: openclaw.md
Quick Start
1. Generate config
mkdir -p ~/.trpc_claw
trpc_agent_cmd openclaw conf_temp > ~/.trpc_claw/config.yaml
2. Set environment variables
export TRPC_AGENT_API_KEY=your_api_key
export TRPC_AGENT_BASE_URL=your_base_url
export TRPC_AGENT_MODEL_NAME=your_model
3. Run locally
# Force local CLI mode
trpc_agent_cmd openclaw chat -c ~/.trpc_claw/config.yaml
# Local UI
trpc_agent_cmd openclaw ui -c ~/.trpc_claw/config.yaml
4. Connect WeCom / Telegram
Enable the channel in config.yaml, then launch with run:
channels:
  wecom:
    enabled: true
    bot_id: ${WECOM_BOT_ID}
    secret: ${WECOM_BOT_SECRET}
  # or Telegram:
  # telegram:
  #   enabled: true
  #   token: ${TELEGRAM_BOT_TOKEN}
trpc_agent_cmd openclaw run -c ~/.trpc_claw/config.yaml
If no channel is available, run automatically falls back to local CLI for easy debugging.
Documentation
- See the docs/mkdocs/en directory
Examples
All examples in the examples directory are runnable. The groups below organize recommended starting points by capability, with short guidance so you can quickly pick what to read first for your scenario.
1. Getting Started and Basic Agents
Recommended first:
- examples/quickstart - Minimal runnable demo
- examples/llmagent - Basic LlmAgent usage
- examples/litellm - LiteLLM backend routing with LiteLLMModel
- examples/llmagent_with_custom_prompt - Custom prompts
- examples/llmagent_with_schema - Structured outputs
Related docs: llm_agent.md / model.md
This group helps you:
- Run a full end-to-end path from user input to tool call to model output
- Understand how to consume function_call / function_response events in streaming output
- Learn baseline patterns for prompts and structured responses
Start with this snippet (Runner + streaming events):
runner = Runner(app_name=app_name, agent=root_agent, session_service=session_service)
async for event in runner.run_async(user_id=user_id, session_id=current_session_id, new_message=user_content):
    if event.partial and event.content:
        ...
2. Preset Multi-Agent Orchestration
Recommended first:
- examples/multi_agent_chain - ChainAgent
- examples/multi_agent_parallel - ParallelAgent
- examples/multi_agent_cycle - CycleAgent
- examples/transfer_agent - TransferAgent handoff
- examples/multi_agent_subagent - Sub-agent delegation
- examples/multi_agent_compose - Composed orchestration
- examples/multi_agent_start_from_last - Resume from last agent state
Related docs: multi_agents.md
This group helps you:
- Understand the role differences among Chain / Parallel / Cycle / Transfer
- Pick serial, parallel, loop, or handoff orchestration by task shape
- Learn how to resume and compose flows from existing outputs
Start with this snippet (ChainAgent):
pipeline = ChainAgent(
    name="document_processor",
    sub_agents=[extractor_agent, translator_agent],
)
3. Team Collaboration
Recommended first:
- examples/team - Team coordination mode
- examples/team_parallel_execution - Team parallel execution
- examples/team_with_skill - Team + Skills
- examples/team_human_in_the_loop - Team with human-in-the-loop
- examples/team_as_sub_agent - Team as a sub-agent
- examples/team_member_message_filter - Team member message filtering
- examples/team_member_agent_claude - Team member using ClaudeAgent
- examples/team_member_agent_langgraph - Team member using LangGraphAgent
- examples/team_member_agent_team - Nested Team members
- examples/team_with_cancel - Team task cancellation
Related docs: team.md / human_in_the_loop.md / cancel.md
This group helps you:
- Understand the Leader / Member collaboration model in Team
- Combine Skills, sub-teams, and external agents in one workflow
- Cover practical concerns like filtering, human approval, and cancellation
Start with this snippet (TeamAgent):
content_team = TeamAgent(
    name="content_team",
    model=model,
    members=[researcher, writer],
    instruction=LEADER_INSTRUCTION,
    share_member_interactions=True,
)
4. Graph Orchestration
Recommended first:
- examples/graph - GraphAgent with function / llm / agent / code / mcp / knowledge nodes
- examples/graph_multi_turns - Multi-turn graph execution
- examples/graph_with_interrupt - Graph execution interruption
- examples/dsl - DSL orchestration basics
- examples/dsl/classifier_mcp - DSL + MCP classification routing
Related docs: graph.md / dsl.md
This group helps you:
- Build explicit, controllable workflows (branching, merging, interruption, resuming)
- Mix Agent / Tool / MCP / CodeExecutor / Knowledge in a single graph
- Use the DSL for workflows that stay readable and maintainable
Start with this snippet (conditional routing):
graph.add_conditional_edges(
    "decide",
    create_route_choice(set(path_map.keys())),
    path_map,
)
5. Agent Ecosystem Extensions
Recommended first:
- examples/langgraph_agent - Integrate pre-built and compiled LangGraph workflows
- examples/langgraph_agent_with_cancel - LangGraphAgent cancellation
- examples/langgraphagent_with_human_in_the_loop - LangGraphAgent human-in-the-loop
- examples/claude_agent - ClaudeAgent basics
- examples/claude_agent_with_streaming_tool - ClaudeAgent streaming tools
- examples/claude_agent_with_skills - ClaudeAgent + Skills
- examples/claude_agent_with_code_writer - ClaudeAgent for code generation
- examples/claude_agent_with_travel_planner - ClaudeAgent task planning
- examples/claude_agent_with_cancel - ClaudeAgent cancellation
Related docs: langgraph_agent.md / claude_agent.md / human_in_the_loop.md / cancel.md
This group helps you:
- Reuse existing LangGraph assets in the current runtime with LangGraphAgent
- Use ClaudeAgent for code generation, engineering automation, and streaming tools
- Cover production-ready patterns like human-in-the-loop and cancellation
Start with this snippet (ClaudeAgent):
root_agent = ClaudeAgent(
    name="claude_weather_agent",
    model=_create_model(),
    instruction=INSTRUCTION,
    tools=[FunctionTool(get_weather)],
    enable_session=True,
)
6. Tools and MCP
Recommended first:
- examples/function_tools - FunctionTool
- examples/file_tools - File tools
- examples/tools - Basic tool combinations
- examples/toolsets - ToolSet composition
- examples/streaming_tools - Streaming tool calling
- examples/mcp_tools - MCPToolset (stdio / sse / streamable-http)
- examples/langchain_tools - LangChain tools integration
- examples/agent_tools - Agent as a Tool
Related docs: tool.md
This group helps you:
- Cover the full tool access path from function tools to MCP to composed toolsets
- Learn advanced modes such as streaming tools and Agent-as-Tool
- Reuse existing tool implementations in multi-agent scenarios
Start with this snippet (MCPToolset):
class StdioMCPToolset(MCPToolset):
    def __init__(self):
        super().__init__()
        self._connection_params = StdioConnectionParams(
            server_params=McpStdioServerParameters(command="python3", args=["mcp_server.py"]),
            timeout=5,
        )
7. Skills
Recommended first:
- examples/skills - SkillToolSet basics
- examples/skills_with_container - Skills in containers
- examples/skills_with_dynamic_tools - Dynamic tool skills
Related docs: skill.md
This group helps you:
- Package reusable capabilities into Skills
- Support scenario-based dynamic tool composition
- Build reusable business skill modules
Start with this snippet (SkillToolSet):
workspace_runtime = create_local_workspace_runtime()
repository = create_default_skill_repository(skill_paths, workspace_runtime=workspace_runtime)
skill_tool_set = SkillToolSet(repository=repository, run_tool_kwargs=tool_kwargs)
8. CodeExecutor
Recommended first:
- examples/code_executors - UnsafeLocalCodeExecutor / ContainerCodeExecutor
Related docs: code_executor.md
This group helps you:
- Choose local or containerized executors by runtime constraints
- Let agents execute code and ground tasks within controlled boundaries
- Combine with Skills/Tools for planning-and-execution loops
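To make the executor contract concrete, here is a stdlib-only sketch of what a local code executor does: run a snippet in an isolated subprocess and return structured output. The function name is hypothetical and this is not the SDK's UnsafeLocalCodeExecutor API; a container-based executor would swap the subprocess call for a container runtime.

```python
import subprocess
import sys

def run_python_snippet(code: str, timeout: float = 10.0) -> dict:
    """Run a Python snippet in a fresh subprocess and capture its output."""
    proc = subprocess.run(
        [sys.executable, "-c", code],  # isolate execution in a child interpreter
        capture_output=True,
        text=True,
        timeout=timeout,
    )
    return {"stdout": proc.stdout, "stderr": proc.stderr, "exit_code": proc.returncode}

result = run_python_snippet("print(2 + 3)")
print(result["stdout"].strip())  # → 5
```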
9. Session, Memory, and Knowledge
Recommended first:
- Session: examples/session_service_with_in_memory / examples/session_service_with_redis / examples/session_service_with_sql / examples/session_summarizer / examples/session_state
- Memory: examples/memory_service_with_in_memory / examples/memory_service_with_redis / examples/memory_service_with_sql / examples/memory_service_with_mem0 / examples/mem_0
- Knowledge: examples/knowledge_with_documentloader / examples/knowledge_with_vectorstore / examples/knowledge_with_rag_agent / examples/knowledge_with_searchtool_rag_agent / examples/knowledge_with_prompt_template / examples/knowledge_with_custom_components
Related docs:
- Session: session.md / session_redis.md / session_sql.md / session_summary.md
- Memory: memory.md
- Knowledge: knowledge.md / knowledge_document_loader.md / knowledge_retrievers.md / knowledge_vectorstore.md / knowledge_prompt_template.md / knowledge_custom_components.md
This group helps you:
- Session: manage per-session messages, summaries, and state
- Memory: manage cross-session long-term memory (including Mem0)
- Knowledge: cover document loading, retrieval, RAG, and prompt templates
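The Session/Memory split above can be illustrated with a toy in-memory store (class and method names are hypothetical, not the SDK's SessionService/MemoryService API): session history is keyed per conversation, while long-term memory aggregates facts per user across conversations.

```python
from collections import defaultdict

class MiniSessionStore:
    """Toy illustration of the Session vs. Memory split."""

    def __init__(self):
        self._sessions = defaultdict(list)  # (user_id, session_id) -> messages
        self._memory = defaultdict(list)    # user_id -> cross-session facts

    def append(self, user_id, session_id, message):
        self._sessions[(user_id, session_id)].append(message)

    def history(self, user_id, session_id):
        return list(self._sessions[(user_id, session_id)])

    def remember(self, user_id, fact):
        self._memory[user_id].append(fact)

    def recall(self, user_id):
        return list(self._memory[user_id])

store = MiniSessionStore()
store.append("u1", "s1", "hi")
store.append("u1", "s2", "hello again")
store.remember("u1", "prefers metric units")
print(store.history("u1", "s1"))  # session-scoped: ['hi']
print(store.recall("u1"))         # crosses sessions: ['prefers metric units']
```

The Redis / SQL / Mem0 backends in the examples swap out where these two maps live without changing this split.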
10. Serving and Protocols
Recommended first:
- examples/fastapi_server - HTTP service (sync + SSE)
- examples/a2a / examples/a2a_with_cancel - A2A service and cancellation
- examples/agui / examples/agui_with_cancel - AG-UI service and cancellation
Related docs: a2a.md / agui.md / cancel.md
This group helps you:
- Expose services through HTTP / A2A / AG-UI
- Integrate streaming responses and cancellation into real applications
- Use minimal templates for production service rollout
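Streaming HTTP responses from the FastAPI server follow the standard Server-Sent Events wire format; this stdlib sketch (the helper name is hypothetical) shows how one agent event maps to one SSE frame, independent of the SDK's actual serialization.

```python
import json

def sse_frame(event_type: str, data: dict) -> str:
    """Format one Server-Sent Events frame: an 'event:' line, a 'data:' line,
    and a blank line terminating the frame."""
    return f"event: {event_type}\ndata: {json.dumps(data)}\n\n"

frame = sse_frame("message", {"delta": "Hello"})
print(frame, end="")
```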
11. Filters and Execution Control
Recommended first:
- examples/filter_with_model - Model-level filters
- examples/filter_with_tool - Tool-level filters
- examples/filter_with_agent - Agent-level filters
- examples/llmagent_with_branch_filtering - Branch filtering
- examples/llmagent_with_timeline_filtering - Timeline filtering
- examples/llmagent_with_cancel - Execution cancellation
Related docs: filter.md / cancel.md
This group helps you:
- Apply control policies at model, tool, and agent layers
- Cover branch filtering, timeline filtering, and cancellation
- Build strong governance and risk-control constraints
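The filter layers above follow a familiar middleware shape: each filter wraps the next handler and can block, rewrite, or pass the call through. This is a minimal sketch of that pattern with hypothetical names, not the SDK's filter API.

```python
from typing import Callable

Handler = Callable[[str], str]

def redact_filter(next_handler: Handler) -> Handler:
    """Rewrite input before it reaches the wrapped handler."""
    def wrapped(text: str) -> str:
        return next_handler(text.replace("SECRET", "[redacted]"))
    return wrapped

def length_filter(next_handler: Handler, limit: int = 100) -> Handler:
    """Block oversized input instead of forwarding it."""
    def wrapped(text: str) -> str:
        if len(text) > limit:
            raise ValueError("input too long")
        return next_handler(text)
    return wrapped

def model_call(text: str) -> str:
    """Stand-in for the real model/tool invocation."""
    return f"echo: {text}"

pipeline = length_filter(redact_filter(model_call), limit=50)
print(pipeline("my SECRET token"))  # → echo: my [redacted] token
```

Model-, tool-, and agent-level filters differ only in which call they wrap.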
12. Advanced LlmAgent Capabilities
Recommended first:
- examples/llmagent_with_tool_prompt - Tool-call prompt enhancement
- examples/llmagent_with_thinking - Thinking mode
- examples/llmagent_with_user_history - User history management
- examples/llmagent_with_max_history_messages - History window limits
- examples/llmagent_with_model_create_fn - Dynamic model factory
- examples/llmagent_with_custom_agent - Custom agent extension
Related docs: llm_agent.md / model.md / custom_agent.md
This group helps you:
- Focus on LlmAgent extension points for context, prompting, and model routing
- Adapt a general-purpose agent to domain-specific business policies
- Build reusable behavior templates for repeated scenarios
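History window limiting (as in examples/llmagent_with_max_history_messages) boils down to a trimming policy. One common policy, sketched here with hypothetical names rather than the SDK's implementation, keeps the first (system) message plus the most recent N-1 entries:

```python
def trim_history(messages: list, max_messages: int) -> list:
    """Keep at most max_messages entries, always preserving the first
    (system) message and the most recent turns."""
    if len(messages) <= max_messages:
        return list(messages)
    return [messages[0]] + messages[-(max_messages - 1):]

history = [{"role": "system", "content": "be brief"}] + [
    {"role": "user", "content": f"msg {i}"} for i in range(10)
]
trimmed = trim_history(history, max_messages=4)
print([m["content"] for m in trimmed])  # → ['be brief', 'msg 7', 'msg 8', 'msg 9']
```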
13. LlmAgent Tool Calling and Interaction
Recommended first:
- examples/llmagent_with_streaming_tool_simple - Simple streaming tool calls
- examples/llmagent_with_streaming_tool_complex - Complex streaming tool calls
- examples/llmagent_with_parallal_tools - Parallel tool calling (the directory name intentionally uses parallal)
- examples/llmagent_with_human_in_the_loop - Human-in-the-loop decisions
Related docs: llm_agent.md / tool.md / human_in_the_loop.md
This group helps you:
- Cover both simple and complex streaming tool interaction patterns
- Orchestrate parallel tool calls with human confirmation nodes
- Combine with filters and cancellation for more reliable execution chains
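When a model emits several independent tool calls in one turn, the runtime can await them concurrently instead of serially. The SDK's scheduling is internal; this stdlib sketch (tool names are hypothetical) shows the underlying asyncio.gather pattern:

```python
import asyncio

async def get_weather(city: str) -> str:
    await asyncio.sleep(0.1)  # simulate I/O latency
    return f"{city}: sunny"

async def get_time(city: str) -> str:
    await asyncio.sleep(0.1)
    return f"{city}: 12:00"

async def main():
    # Both calls run concurrently; total latency is ~0.1s, not ~0.2s.
    return await asyncio.gather(get_weather("Beijing"), get_time("Beijing"))

results = asyncio.run(main())
print(results)  # → ['Beijing: sunny', 'Beijing: 12:00']
```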
For more examples, see each subdirectory README.md under examples.
Architecture Overview
The framework is organized in an event-driven architecture where each layer can evolve independently:
- Agent layer: LlmAgent / ChainAgent / ParallelAgent / CycleAgent / TransferAgent
- Agent ecosystem extension layer: LangGraphAgent / ClaudeAgent / TeamAgent
- Graph capability layer: GraphAgent / trpc_agent_sdk.dsl.graph (DSL-based orchestration)
- Runner layer: Unified execution entry, coordinating Session / Memory / Artifact services
- Tool layer: FunctionTool / file tools / MCPToolset / Skill tools
- Model layer: OpenAIModel / AnthropicModel / LiteLLMModel
- Memory layer: SessionService / MemoryService / SessionSummarizer / Mem0MemoryService
- Knowledge layer: Production-grade LangChain-based knowledge and RAG capability
- Execution and skill layer: CodeExecutor (local / container) / Skills
- Service layer: FastAPI / A2A / AG-UI
- Observability layer: OpenTelemetry tracing/metrics, integrable with platforms like Langfuse
- Ecosystem adapter layer: claude-agent-sdk / mcp / LangChain / LiteLLM / Mem0 plugged into the main chain through model/tool/memory adapters
Key packages:
| Package | Responsibility |
|---|---|
| trpc_agent_sdk.agents | Agent abstractions, multi-agent orchestration, ecosystem extensions (LangGraphAgent / ClaudeAgent / TeamAgent) |
| trpc_agent_sdk.runners | Unified execution and event output |
| trpc_agent_sdk.models | Model adapter layer |
| trpc_agent_sdk.tools | Tooling system and MCP support |
| trpc_agent_sdk.sessions | Session management and summarization |
| trpc_agent_sdk.memory | Long-term memory services |
| trpc_agent_sdk.dsl.graph | DSL graph orchestration engine |
| trpc_agent_sdk.teams | Team collaboration mode |
| trpc_agent_sdk.code_executors | Code execution and workspace runtime |
| trpc_agent_sdk.skills | Skill repository and Skill tools |
| trpc_agent_sdk.server | FastAPI / A2A / AG-UI serving capabilities |
Contributing
We love contributions! Join our growing developer community and help build the future of AI Agents.
Ways to Contribute
- Report bugs or suggest new features through Issues
- Improve documentation to help others onboard faster
- Submit PRs for bug fixes, new features, or examples
- Share your use cases to inspire other builders
Quick Contribution Setup
# Fork and clone the repository
git clone https://github.com/YOUR_USERNAME/trpc-agent-python.git
cd trpc-agent-python
# Install development dependencies and run tests
pip install -e ".[dev]"
pytest
# Make your changes and open a PR!
Please read CONTRIBUTING.md for detailed guidelines and coding standards.
Please follow CODE-OF-CONDUCT.md to keep our community friendly, respectful, and inclusive.
Acknowledgements
Enterprise Validation
We sincerely thank Tencent Licaitong, Tencent Ads, and other business teams for continuous validation and feedback in real production scenarios, which helps us keep improving the framework.
Open-source Inspiration
We are also inspired by outstanding open-source frameworks including ADK, Agno, CrewAI, and AutoGen. We keep moving forward on the shoulders of giants.
If this project helps you, a GitHub Star is always appreciated — it's the most direct encouragement and helps more developers discover this project.