# uni-agent-sdk

Agent development SDK: a minimal framework for building agents on the uni-im platform.

Create a powerful agent with only a few lines of handler code; the SDK automatically takes care of message receiving, LLM inference, response delivery, and the rest of the plumbing.
## 🚀 Features

- Minimal API: create an agent with only a few lines of code
- Stateless architecture: stateless consumer model built on RabbitMQ
- Works out of the box: built-in LLM integration, platform communication, and file handling
- Type safe: full type annotations and Pydantic validation
- High performance: asynchronous processing with concurrent message handling
- Extensible: supports advanced features such as function calling and contextual memory
## 📦 Installation

```bash
pip install uni-agent-sdk
```
## ⚡ Quick Start

### The simplest agent

```python
import asyncio

from uni_agent_sdk import Agent, Message, Response

class SimpleAgent(Agent):
    async def handle_message(self, message: Message, context: dict) -> Response:
        return Response.text(f"Hello! I received: {message.content}")

# Start the agent
agent = SimpleAgent(
    api_key="your_api_key",
    api_secret="your_api_secret",
    openrouter_api_key="your_openrouter_key",
)
asyncio.run(agent.start())
```
### An AI agent (using an LLM)

```python
class AIAgent(Agent):
    async def handle_message(self, message: Message, context: dict) -> Response:
        # Use the built-in LLM service directly
        reply = await self.llm.chat(
            message.content,
            system_prompt="You are a friendly AI assistant",
        )
        return Response.text(reply)
```
### A context-aware agent

```python
class ContextualAgent(Agent):
    async def handle_message(self, message: Message, context: dict) -> Response:
        # Fetch the message history
        history = context.get("messages", [])

        # Build the conversation history
        conversation = []
        for hist_msg in history[-5:]:  # last 5 messages
            conversation.append({
                "role": "user" if hist_msg["from_uid"] == message.from_uid else "assistant",
                "content": hist_msg["body"],
            })

        # Append the current message
        conversation.append({"role": "user", "content": message.content})

        # Generate a reply
        reply = await self.llm.chat(conversation)
        return Response.text(reply)
```
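The history-to-conversation step can also be factored out into a small helper. This is an illustrative sketch, not part of the SDK; `build_conversation` is a hypothetical name, and it assumes each history record is a dict with `from_uid` and `body` keys, as in the handler above:

```python
def build_conversation(history, current_uid, current_content, window=5):
    """Convert stored history records into OpenAI-style chat messages.

    Messages from the current user become "user" turns; everything else
    (i.e. the agent's own replies) becomes "assistant" turns.
    """
    conversation = [
        {
            "role": "user" if msg["from_uid"] == current_uid else "assistant",
            "content": msg["body"],
        }
        for msg in history[-window:]  # keep only the most recent `window` records
    ]
    # The current message always goes last, as the newest "user" turn
    conversation.append({"role": "user", "content": current_content})
    return conversation

history = [
    {"from_uid": "u1", "body": "Hi"},
    {"from_uid": "bot", "body": "Hello!"},
    {"from_uid": "u1", "body": "How are you?"},
]
conversation = build_conversation(history, "u1", "Tell me a joke")
```

Keeping only the last few turns bounds the prompt size, at the cost of the model forgetting older context.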
## 🔧 Configuration

### Environment variables

```bash
# Required
export OPENROUTER_API_KEY="sk-or-v1-xxx"
export RABBITMQ_HOST="your-rabbitmq-host"
export RABBITMQ_USER="your-username"
export RABBITMQ_PASSWORD="your-password"

# Optional
export PLATFORM_BASE_URL="https://uni-im.dcloud.net.cn"
export DEFAULT_MODEL="openai/gpt-3.5-turbo"
export LOG_LEVEL="INFO"
```
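If you want to assemble the configuration yourself, the same variables can be read with a small loader. This is a hedged sketch, not the SDK's internal Config class; `load_config` and the returned key names are illustrative:

```python
import os

def load_config() -> dict:
    """Read SDK settings from environment variables.

    The required variables raise an error when missing; the optional
    ones fall back to the defaults shown above.
    """
    required = ["OPENROUTER_API_KEY", "RABBITMQ_HOST", "RABBITMQ_USER", "RABBITMQ_PASSWORD"]
    missing = [name for name in required if not os.environ.get(name)]
    if missing:
        raise RuntimeError(f"Missing required environment variables: {', '.join(missing)}")
    return {
        "openrouter_api_key": os.environ["OPENROUTER_API_KEY"],
        "rabbitmq_host": os.environ["RABBITMQ_HOST"],
        "rabbitmq_user": os.environ["RABBITMQ_USER"],
        "rabbitmq_password": os.environ["RABBITMQ_PASSWORD"],
        # Optional settings with defaults
        "platform_base_url": os.environ.get("PLATFORM_BASE_URL", "https://uni-im.dcloud.net.cn"),
        "default_model": os.environ.get("DEFAULT_MODEL", "openai/gpt-3.5-turbo"),
        "log_level": os.environ.get("LOG_LEVEL", "INFO"),
    }
```

Failing fast on missing credentials keeps misconfiguration errors at startup rather than at the first message.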
### Configuration in code

```python
agent = MyAgent(
    api_key="robot_xxx",
    api_secret="xxx",
    # OpenRouter
    openrouter_api_key="sk-or-v1-xxx",
    # RabbitMQ
    rabbitmq_host="localhost",
    rabbitmq_port=5673,
    rabbitmq_user="guest",
    rabbitmq_password="guest",
    # LLM
    default_model="openai/gpt-3.5-turbo",
    default_temperature=0.7,
    # Miscellaneous
    log_level="INFO",
)
```
## 📚 Examples

See the examples/ directory for more:

- simple_agent.py: the simplest echo agent
- ai_agent.py: an AI agent backed by an LLM
- contextual_agent.py: an agent with contextual memory
- function_agent.py: an agent with function calling
## 🏗️ Architecture

```
User message → uni-im platform → RabbitMQ → SDK consumer → your agent
                                                               ↓
User gets reply ← uni-im platform ← HTTP API ← SDK response ← LLM inference
```

### Core components

- Agent: agent base class; handles message receipt and responses
- LLMService: LLM service with support for multiple models
- PlatformAPI: platform communication service
- FileService: file upload/download service
- Config: configuration management
## 🧪 Testing

```bash
# Run all tests
python run_tests.py

# Run a specific test
python -m unittest tests.test_crypto

# Run the integration tests
python -m pytest tests/test_integration.py -v
```
## 📖 API Reference

### The Agent class

```python
class Agent(ABC):
    """Abstract base class for agents."""

    @abstractmethod
    async def handle_message(self, message: Message, context: dict) -> Response:
        """Handle an incoming message; subclasses must implement this."""

    async def start(self):
        """Start the agent."""

    async def stop(self):
        """Stop the agent."""
```
### Message types

```python
class Message(BaseModel):
    """Model for incoming messages."""

    id: str
    from_uid: str
    to_uid: str
    conversation_id: str
    content: str
    message_type: str = "text"
    create_time: int


class Response(BaseModel):
    """Model for outgoing responses."""

    content: str
    response_type: str = "text"
    metadata: Dict[str, Any] = {}

    @classmethod
    def text(cls, content: str) -> "Response":
        """Create a text response."""

    @classmethod
    def image(cls, image_url: str, caption: str = "") -> "Response":
        """Create an image response."""

    @classmethod
    def file(cls, file_url: str, filename: str) -> "Response":
        """Create a file response."""
```
### The LLM service

```python
class LLMService:
    """LLM inference service."""

    async def chat(self, messages, model=None, temperature=None) -> str:
        """Chat interface; returns the reply text."""

    async def chat_stream(self, messages, model=None) -> AsyncIterator[str]:
        """Streaming chat interface; yields reply chunks."""

    async def function_call(self, messages, functions) -> Dict:
        """Function-calling interface."""
```
## 🔒 Security

- API requests are signed and verified with HMAC-SHA256
- Configurable signature expiry window
- Sensitive values are automatically masked in output
- Configuration validation and type checking
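A signing scheme of this kind can be sketched with the standard library. This illustrates HMAC-SHA256 signing with an expiry window, not the SDK's exact wire format; the header names and the field order inside the signed string are assumptions:

```python
import hashlib
import hmac
import time

def sign_request(api_key: str, api_secret: str, payload: str) -> dict:
    """Build signed headers for a request (hypothetical scheme):
    HMAC-SHA256 over "{api_key}{timestamp}{payload}" keyed by the secret."""
    timestamp = str(int(time.time()))
    message = f"{api_key}{timestamp}{payload}".encode()
    signature = hmac.new(api_secret.encode(), message, hashlib.sha256).hexdigest()
    return {"x-api-key": api_key, "x-timestamp": timestamp, "x-signature": signature}

def verify_signature(headers: dict, api_secret: str, payload: str, max_age: int = 300) -> bool:
    """Verify a signed request, rejecting signatures older than max_age seconds."""
    if int(time.time()) - int(headers["x-timestamp"]) > max_age:
        return False  # expired signature
    expected = hmac.new(
        api_secret.encode(),
        f'{headers["x-api-key"]}{headers["x-timestamp"]}{payload}'.encode(),
        hashlib.sha256,
    ).hexdigest()
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, headers["x-signature"])
```

Including the timestamp in the signed string is what makes the expiry check tamper-proof: changing the timestamp invalidates the signature.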
## 🚀 Deployment

### Single process

```python
# main.py
import asyncio

from my_agent import MyAgent

async def main():
    agent = MyAgent(
        api_key="xxx",
        api_secret="xxx",
        openrouter_api_key="xxx",
    )
    await agent.start()

if __name__ == "__main__":
    asyncio.run(main())
```

### Docker

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```
### Multi-instance load balancing

The same agent can run as multiple process instances; RabbitMQ distributes messages across them automatically.
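With the Dockerfile above, scaling is just running more replicas of one service. A hypothetical docker-compose.yml sketch (the service and host names are placeholders); each replica becomes another competing consumer on the same queue:

```yaml
services:
  agent:
    build: .
    environment:
      OPENROUTER_API_KEY: ${OPENROUTER_API_KEY}
      RABBITMQ_HOST: rabbitmq   # assumes a reachable RabbitMQ service/host
      RABBITMQ_USER: guest
      RABBITMQ_PASSWORD: guest
```

Start several instances with `docker compose up --scale agent=3`; no extra load-balancer is needed because the queue itself distributes the messages.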
## 🤝 Contributing

Issues and pull requests are welcome!

- Fork the project
- Create a feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add AmazingFeature'`)
- Push the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
## 📄 License

This project is released under the MIT License; see the LICENSE file for details.

## 🆘 Support

- 📧 Email: support@uni-im.com
- 💬 QQ group: 123456789
- 📖 Docs: https://uni-im.dcloud.net.cn/docs
- 🐛 Issues: https://github.com/uni-im/uni-agent-sdk/issues

## 🏆 Acknowledgments

Thanks to the following open-source projects: