
A lightweight LLM response formatting tool, supporting both standard and streaming responses

Project description

LLMRF

PyPI version · License: MIT · Python 3.6+

LLMRF (LLM Response Formatter) is a lightweight Python library that formats LLM output into the standard OpenAI API response format.

✨ Features

  • 🚀 Simple, easy-to-use API
  • 📦 Supports both standard and streaming response formats
  • 🔧 Fully customizable parameters
  • 🎯 Compatible with the OpenAI API format

🛠️ Installation

pip install llmrf

📖 Usage examples

Basic usage

from llmrf import RF
import json

rf = RF()
# Standard response
response = rf.f_r("Hello, World!")
print(json.dumps(response, indent=2))  # pretty-print the output

# Streaming response
stream = rf.f_r("Hello, World!", stream=True)
print(stream)

Example output

Standard response:

{
  "id": "chatcmpl-123e4567-e89b-12d3-a456-426614174000",
  "object": "chat.completion",
  "created": 1707139200,
  "model": "gpt-3.5-turbo",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello, World!"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 12,
    "total_tokens": 24
  }
}
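The structure above can be reproduced with a small self-contained sketch. This does not use llmrf itself; the field names follow the OpenAI chat.completion schema shown in the example, and the usage block is left out here because the token-counting logic is llmrf-internal:

```python
import json
import time
import uuid

def format_response(content, model="gpt-3.5-turbo", id=None, created=None):
    """Build an OpenAI-style chat.completion dict (illustrative sketch)."""
    return {
        "id": id or f"chatcmpl-{uuid.uuid4()}",
        "object": "chat.completion",
        "created": created or int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
        # llmrf also fills in a usage block with token counts;
        # omitted in this sketch.
    }

print(json.dumps(format_response("Hello, World!"), indent=2))
```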

Streaming response:

data: {"id": null, "object": "chat.completion.chunk", "created": null, "model": "gpt-3.5-turbo", "choices": [{"index": 0, "delta": {"content": "Hello, World!", "role": null}, "finish_reason": null}], "usage": null}
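The streaming format wraps each piece of content in a chat.completion.chunk object and prefixes the serialized JSON with "data: ", matching the server-sent-events convention. A hedged sketch of how such lines could be produced (the helper name is illustrative, not part of llmrf's API):

```python
import json

def sse_chunks(pieces, model="gpt-3.5-turbo"):
    """Yield SSE 'data:' lines in chat.completion.chunk format."""
    for piece in pieces:
        chunk = {
            "id": None,
            "object": "chat.completion.chunk",
            "created": None,
            "model": model,
            "choices": [
                {
                    "index": 0,
                    "delta": {"content": piece, "role": None},
                    "finish_reason": None,
                }
            ],
            "usage": None,
        }
        yield f"data: {json.dumps(chunk)}"

for line in sse_chunks(["Hello, ", "World!"]):
    print(line)
```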

Custom parameters

response = rf.f_r(
    content="Hello, World!",
    model="custom-model",
    id="custom-id"
)

Formatting tip

For better readability, format the output with json.dumps():

import json

response = rf.f_r("Hello, World!")
# use the indent parameter to pretty-print the output
print(json.dumps(response, indent=2, ensure_ascii=False))

📚 API documentation

RF.f_r()

The main formatting method. It accepts the following parameters:

Parameter  Type  Required  Default            Description
content    str   Yes       -                  Text content to format
stream     bool  No        False              Whether to return a streaming response
model      str   No        "gpt-3.5-turbo"    Model name
id         str   No        auto-generated     Response ID
created    int   No        current timestamp  Creation time (Unix timestamp)

🎯 Use cases

  • Standardizing the interface of a custom LLM service
  • Converting API response formats
  • Formatting streaming output
  • Mocking LLM responses in tests
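On the consuming side, a formatted stream line can be read the same way OpenAI SSE output is: strip the "data: " prefix and pull the content out of the delta. A minimal, self-contained sketch using the chunk line shown earlier (the helper name is illustrative):

```python
import json

line = ('data: {"id": null, "object": "chat.completion.chunk", "created": null, '
        '"model": "gpt-3.5-turbo", "choices": [{"index": 0, "delta": '
        '{"content": "Hello, World!", "role": null}, "finish_reason": null}], '
        '"usage": null}')

def extract_delta(sse_line):
    """Parse one SSE line and return the delta content, if any."""
    payload = json.loads(sse_line[len("data: "):])
    return payload["choices"][0]["delta"].get("content")

print(extract_delta(line))  # Hello, World!
```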

🤝 Contributing

Pull requests are welcome! For major changes, please open an issue first to discuss what you would like to change.

License

MIT License

Feedback

If you find a bug or have a suggestion for improvement, please open an issue on GitHub.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

llmrf-0.1.4.tar.gz (3.4 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

llmrf-0.1.4-py3-none-any.whl (3.5 kB view details)

Uploaded Python 3

File details

Details for the file llmrf-0.1.4.tar.gz.

File metadata

  • Download URL: llmrf-0.1.4.tar.gz
  • Upload date:
  • Size: 3.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.5

File hashes

Hashes for llmrf-0.1.4.tar.gz
Algorithm Hash digest
SHA256 0a3fead73a5f19a3a5403729404ddf53b9e224bb56de6916229fbced380924b1
MD5 31a1787a4ea781c34fe3cdcbc4c91a31
BLAKE2b-256 40fb05307747bb14cac7319c938228c32d630d31c1df0a0cfb95c0e81191f624

See more details on using hashes here.

File details

Details for the file llmrf-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: llmrf-0.1.4-py3-none-any.whl
  • Upload date:
  • Size: 3.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.12.5

File hashes

Hashes for llmrf-0.1.4-py3-none-any.whl
Algorithm Hash digest
SHA256 3384159dc9812e2d4171825484b3d4180154eb606e183231985e748aa6482b01
MD5 d7eb5810ff0f68c490fa477cef9352ef
BLAKE2b-256 2cb0132d52059c0a42f2560b7c6ec6eb3c7488ce3346391a2e23bee5a2613146

See more details on using hashes here.
