
Think LLM Client

A flexible SDK for interacting with LLM and VLM models, supporting basic model interaction and a CLI interface.

Features

  • Supports multiple model types (LLM, VLM)
  • Supports multiple providers and models
  • Provides a basic model-interaction API
  • Provides a rich CLI interface
  • Supports image analysis and comparison
  • Supports streaming output and chain-of-thought reasoning
  • Supports conversation history management
  • Complete type hints and documentation

Installation

Install with uv (recommended):

uv pip install think-llm-client

Or with traditional pip:

pip install think-llm-client

Quick Start

Basic Usage

import asyncio
from think_llm_client import LLMClient

async def main():
    # Create a client
    client = LLMClient()

    # Select the model type, provider, and model
    client.set_model("llm", "openai", "gpt-4")

    # Basic chat
    reasoning, response = await client.chat("What are decorators in Python?")
    print(f"Answer: {response}")

    # Image analysis requires a vision model; switch to the VLM entry
    # shown in the Configuration section below
    client.set_model("vlm", "openai", "gpt-4-vision-preview")
    reasoning, response = await client.analyze_image(
        "image.jpg",
        "Analyze this product's strengths and weaknesses"
    )
    print(f"Image analysis: {response}")

if __name__ == "__main__":
    asyncio.run(main())

CLI Interface

# Start an interactive chat session
python -m think_llm_client.cli chat

# Analyze an image
python -m think_llm_client.cli analyze image.jpg "Describe this image"

Configuration

Create a config.json file in the project root:

{
  "model_types": {
    "llm": {
      "providers": {
        "openai": {
          "api_key": "your-api-key",
          "model": {
            "gpt-4": {
              "max_tokens": 2000
            },
            "gpt-3.5-turbo": {
              "max_tokens": 1000
            }
          }
        }
      }
    },
    "vlm": {
      "providers": {
        "openai": {
          "api_key": "your-api-key",
          "model": {
            "gpt-4-vision-preview": {
              "max_tokens": 1000
            }
          }
        }
      }
    }
  }
}

Or use environment variables:

export OPENAI_API_KEY=your-api-key
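
For reproducible setups, the same file can also be generated programmatically instead of hand-edited. A minimal sketch using only the standard library; the keys mirror the example above, and reading the API key from the environment (rather than hard-coding it) is a suggestion, not a library requirement:

import json
import os
from pathlib import Path

# Mirror the config.json structure shown above, but pull the
# API key from the environment so it never lands in version control
config = {
    "model_types": {
        "llm": {
            "providers": {
                "openai": {
                    "api_key": os.environ["OPENAI_API_KEY"],
                    "model": {
                        "gpt-4": {"max_tokens": 2000},
                        "gpt-3.5-turbo": {"max_tokens": 1000},
                    },
                }
            }
        }
    }
}

Path("config.json").write_text(json.dumps(config, indent=2))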

Advanced Features

Streaming Output

import asyncio
from think_llm_client import LLMClient

async def main():
    client = LLMClient()
    client.set_model("llm", "openai", "gpt-4")

    # Stream the response; chunks arrive incrementally, so print each
    # piece as it comes rather than prefixing every chunk with a label
    async for chunk_type, chunk, full_content in client.chat_stream("Explain quantum computing"):
        if chunk_type == "reasoning":
            print(chunk, end="", flush=True)  # chain-of-thought stream
        elif chunk_type == "content":
            print(chunk, end="", flush=True)  # answer stream

if __name__ == "__main__":
    asyncio.run(main())

Conversation History Management

# Save conversation history
client.save_history("chat_history.json")

# Load conversation history
client.load_history_from_file("chat_history.json")
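
A complete round trip using only the methods shown in this README: chat, persist the transcript, then restore it in a fresh client and continue the conversation (a sketch; the on-disk format of chat_history.json is internal to the library):

import asyncio
from think_llm_client import LLMClient

async def main():
    client = LLMClient()
    client.set_model("llm", "openai", "gpt-4")
    await client.chat("What are decorators in Python?")
    client.save_history("chat_history.json")

    # Restore the transcript in a new client and continue the conversation
    restored = LLMClient()
    restored.set_model("llm", "openai", "gpt-4")
    restored.load_history_from_file("chat_history.json")
    reasoning, response = await restored.chat("Show me a concrete example.")
    print(response)

if __name__ == "__main__":
    asyncio.run(main())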

Development

Install development dependencies with uv (recommended):

uv pip install -e ".[dev]"

Or with traditional pip:

pip install -e ".[dev]"

Run tests and code checks:

# Run tests
pytest

# Formatting, linting, and type checking
black .
ruff check .
mypy .

License

MIT License
