Enterprise-grade AI framework for seamless LLM integration


ThinkAi - Enterprise AI Framework

An enterprise-grade LLM integration framework built on FastAPI - works out of the box, easy to use, and feature-complete.

Features

  • Multi-model support - Ollama, OpenAI, Qwen, DeepSeek, Claude, Gemini, and other mainstream LLMs
  • Unified interface - configure once, switch freely between models
  • Works out of the box - minimal configuration required
  • OpenAI-compatible - follows the OpenAI API format
  • Streaming responses - SSE streaming output
  • Session management - built-in multi-turn conversation context
  • RAG support - retrieval-augmented generation in 3 lines of code
  • Agent system - built-in ReAct agent with tool calling
  • Middleware pipeline - logging, retries, caching, rate limiting
  • Enterprise-grade performance - async architecture, connection pooling, automatic retries

Installation

# Base install
pip install thinkai

# Install a specific provider (in zsh, quote the extras: pip install 'thinkai[ollama]')
pip install thinkai[ollama]
pip install thinkai[openai]
pip install thinkai[qwen]

# Install all optional dependencies
pip install thinkai[all]

Quick Start

1. Minimal usage (3 lines of code)

from thinkai import ThinkAI

ai = ThinkAI(provider="ollama", model="llama3")
response = await ai.chat("Hello")
print(response.content)
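Note that `await` is only valid inside a coroutine, so a standalone script wraps the call with `asyncio.run`. A runnable sketch of that pattern, with a stub coroutine standing in for the real client:

```python
import asyncio

async def fake_chat(message: str) -> str:
    # Stand-in for ai.chat(); the real call awaits the provider's HTTP API.
    return f"echo: {message}"

async def main() -> None:
    response = await fake_chat("Hello")
    print(response)

asyncio.run(main())
```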

2. FastAPI integration

from fastapi import FastAPI
from thinkai import ThinkAI

app = FastAPI()
ai = ThinkAI(provider="ollama", model="llama3")

@app.post("/chat")
async def chat(message: str):
    response = await ai.chat(message)
    return {"content": response.content}

# Run with: uvicorn main:app --reload

3. Configuring and switching between multiple models

from thinkai import ThinkAI

ai = ThinkAI(provider="ollama", model="llama3")

# Register additional models
ai.register_model("qwen", provider="qwen", model="qwen-turbo")
ai.register_model("deepseek", provider="deepseek", model="deepseek-chat")
ai.register_model("gpt4", provider="openai", model="gpt-4")

# Switch freely per call
response1 = await ai.chat("Hello", model="llama3")
response2 = await ai.chat("Hello", model="qwen")
response3 = await ai.chat("Hello", model="deepseek")
response4 = await ai.chat("Hello", model="gpt4")

4. Multi-turn conversation (session management)

ai = ThinkAI()

async with ai.session() as session:
    response1 = await session.chat("Hi, I want to learn Python")
    response2 = await session.chat("What's a good learning path?")
    response3 = await session.chat("Can you recommend some resources?")
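Multi-turn context works by resending the accumulated message history with every request; a minimal sketch of that bookkeeping (illustrative only, not ThinkAI's actual session class):

```python
class ChatHistory:
    """Accumulates messages so each turn sees the full conversation."""

    def __init__(self) -> None:
        self.messages: list[dict] = []

    def user(self, content: str) -> list[dict]:
        # Append the new user turn and return everything sent to the model.
        self.messages.append({"role": "user", "content": content})
        return list(self.messages)

    def assistant(self, content: str) -> None:
        self.messages.append({"role": "assistant", "content": content})

history = ChatHistory()
history.user("Hi, I want to learn Python")
history.assistant("Great! Start with the basics.")
print(len(history.user("What's a good learning path?")))  # 3 messages sent
```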

5. Streaming responses

ai = ThinkAI()

async for chunk in ai.chat_stream("Tell me a story"):
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

6. RAG (retrieval-augmented generation)

from thinkai import ThinkAI
from thinkai.rag import RAGPipeline

ai = ThinkAI()

# RAG in 3 lines of code
rag = RAGPipeline(
    documents=["./docs", "./knowledge"],
    ai_client=ai,
    chunk_size=500,
)

# Query
answer = await rag.query("Which AI models does the ThinkAi framework support?")
print(answer)
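Under the hood, a RAG pipeline chunks documents, retrieves the chunks most relevant to the query, and prepends them to the prompt. A minimal keyword-overlap retriever sketches the idea (embedding similarity replaces the overlap score in real pipelines; this is not ThinkAI's implementation):

```python
def chunk_text(text: str, chunk_size: int = 500) -> list[str]:
    # Fixed-size character windows; real pipelines split on sentences or tokens.
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    # Rank chunks by word overlap with the query (a stand-in for embeddings).
    query_words = set(query.lower().split())
    ranked = sorted(chunks,
                    key=lambda c: len(query_words & set(c.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = "ThinkAI supports Ollama and OpenAI. It also ships a ReAct agent for tool calling."
chunks = chunk_text(docs, chunk_size=40)
top = retrieve("which providers does it support", chunks, k=1)
print(top[0])
```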

7. Agents

from thinkai import ThinkAI
from thinkai.agent import ReActAgent, Tool

# Define tools
@Tool(name="calculator", description="Evaluate a math expression")
def calculator(expression: str) -> str:
    # Demo only: eval() on untrusted input is unsafe.
    return str(eval(expression))

@Tool(name="search", description="Search for information")
async def search(query: str) -> str:
    # Implement search logic here
    return "search results"

ai = ThinkAI()

# Create the agent
agent = ReActAgent(
    tools=[calculator, search],
    ai_client=ai,
    verbose=True,
)

# Run a task
result = await agent.run("Calculate 25*48, then search for information about Python")
print(result)
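A ReAct agent alternates Reason → Act → Observe until the model emits a final answer. The control loop can be sketched with scripted "model" replies standing in for real LLM calls (a sketch of the pattern, not ThinkAI's internals):

```python
import re

def calculator(expression: str) -> str:
    # Demo tool; eval() on untrusted input is unsafe in real deployments.
    return str(eval(expression))

TOOLS = {"calculator": calculator}

def react_loop(scripted_replies: list[str], max_steps: int = 5) -> str:
    """Reason -> Act -> Observe until the model emits a final answer."""
    replies = iter(scripted_replies)
    observation = ""
    for _ in range(max_steps):
        reply = next(replies)  # a real agent would await ai.chat(...) here
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\[(.+?)\]", reply)
        if match:
            observation = TOOLS[match.group(1)](match.group(2))
    return observation

answer = react_loop([
    "Thought: I need the product.\nAction: calculator[25*48]",
    "Thought: I have the result.\nFinal Answer: 1200",
])
print(answer)  # 1200
```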

Supported AI Models

Provider    Models                            Type    Configuration
Ollama      llama3, mistral, qwen, etc.       Local   provider="ollama"
OpenAI      gpt-4, gpt-3.5-turbo, gpt-4o      Cloud   provider="openai"
Qwen        qwen-turbo, qwen-plus, qwen-max   Cloud   provider="qwen"
DeepSeek    deepseek-chat, deepseek-coder     Cloud   provider="deepseek"
Anthropic   claude-3-opus/sonnet/haiku        Cloud   provider="claude"
Google      gemini-pro, gemini-ultra          Cloud   provider="gemini"

Project Structure

thinkai/
├── thinkai/
│   ├── __init__.py
│   ├── core/           # Core modules
│   │   ├── client.py   # Unified client
│   │   ├── config.py   # Configuration management
│   │   └── models.py   # Data models
│   ├── providers/      # Provider implementations
│   │   ├── base.py     # Provider base class
│   │   ├── registry.py # Provider registry
│   │   ├── ollama.py   # Ollama
│   │   ├── openai.py   # OpenAI
│   │   ├── qwen.py     # Qwen
│   │   └── deepseek.py # DeepSeek
│   ├── session/        # Session management
│   ├── prompt/         # Prompt templates
│   ├── middleware/     # Middleware
│   ├── rag/            # RAG module
│   ├── agent/          # Agent module
│   ├── streaming.py    # Streaming
│   └── exceptions.py   # Exception definitions
├── examples/           # Example code
├── config.example.yaml # Example configuration
├── .env.example        # Example environment variables
└── pyproject.toml      # Project configuration

Configuration

Option 1: In code

ai = ThinkAI(
    provider="ollama",
    model="llama3",
    temperature=0.7,
    max_tokens=2048,
    timeout=60,
)

Option 2: Environment variables

export THINKAI_DEFAULT_PROVIDER=ollama
export THINKAI_DEFAULT_MODEL=llama3
export OPENAI_API_KEY=your_key_here

Option 3: YAML configuration file

default_provider: "ollama"
default_model: "llama3"

providers:
  openai:
    api_key: "${OPENAI_API_KEY}"
    api_base: "https://api.openai.com/v1"

models:
  llama3:
    provider: "ollama"
    model: "llama3"
    temperature: 0.7

Then load it in code:

from thinkai import ThinkAI
from thinkai.core.config import Settings

config = Settings.from_file("config.yaml")
ai = ThinkAI(config=config)
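Placeholders like `${OPENAI_API_KEY}` are resolved from the environment. One stdlib way to implement that interpolation is `os.path.expandvars` (a sketch of the mechanism, not necessarily how ThinkAI does it):

```python
import os

os.environ["OPENAI_API_KEY"] = "sk-demo"  # illustrative value
raw_line = 'api_key: "${OPENAI_API_KEY}"'
print(os.path.expandvars(raw_line))  # api_key: "sk-demo"
```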

Advanced Features

Middleware system

from thinkai.middleware import LoggingMiddleware, RetryMiddleware

ai = ThinkAI()
ai.add_middleware(LoggingMiddleware())
ai.add_middleware(RetryMiddleware(max_retries=3))
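Retry middleware typically wraps the underlying call with exponential backoff; a minimal, self-contained sketch of the pattern (independent of ThinkAI's middleware API):

```python
import asyncio

async def with_retry(coro_fn, max_retries: int = 3, base_delay: float = 0.01):
    # Retry with exponential backoff (delay doubles each attempt);
    # re-raise the exception after the final attempt.
    for attempt in range(max_retries + 1):
        try:
            return await coro_fn()
        except Exception:
            if attempt == max_retries:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)

calls = {"n": 0}

async def flaky():
    # Fails twice, then succeeds - simulates a transient provider error.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

print(asyncio.run(with_retry(flaky)))  # ok
```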

Prompt templates

from thinkai.prompt.template import PromptTemplate, prompt_manager

# Use a built-in template
template = prompt_manager.get("system_code")
prompt = template.format()

# Custom template
custom = PromptTemplate("将以下代码转换为$type: $code")
result = custom.format(type="Python", code="...")
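The `$type`/`$code` placeholders match the syntax of Python's stdlib `string.Template`, so the substitution step can be reproduced without the library (an equivalent sketch, not ThinkAI's `PromptTemplate`):

```python
from string import Template

# Same $placeholder syntax as the custom template above.
template = Template("Convert the following code to $type: $code")
result = template.substitute(type="Python", code="print('hi')")
print(result)  # Convert the following code to Python: print('hi')
```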

Custom providers

from thinkai.providers.base import BaseProvider
from thinkai.providers.registry import register_provider

@register_provider("custom")
class CustomProvider(BaseProvider):
    name = "custom"
    default_model = "custom-model"
    
    async def chat(self, request):
# Implement chat logic
        pass
    
    async def chat_stream(self, request):
# Implement streaming chat logic
        pass

Enterprise Features

  • Async architecture - async/await throughout for high throughput
  • Connection pooling - HTTP connection reuse
  • Automatic retries - failed requests retried with exponential backoff
  • Error handling - a complete exception hierarchy
  • Type safety - full type hints throughout
  • Logging - structured logging support
  • Metrics - Prometheus integration (planned)
  • Load balancing - multi-model routing (planned)

Documentation

Full documentation: https://thinkai.readthedocs.io

Examples

Run the example code:

# Basic usage
python examples/basic_usage.py

# FastAPI integration
python examples/fastapi_demo.py

# RAG example
python examples/rag_example.py

# Agent example
python examples/agent_example.py

Contributing

Issues and pull requests are welcome!

License

MIT License
