A lightweight LLM inference framework

Project description

light-llm-hp - Lightweight LLM Inference Framework

A simplified inference framework that runs on CPU, with REST API serving support.

🚀 Apple Silicon optimized: supports the MLX backend, 2-5x faster than PyTorch MPS

Quick Start

from hllm import HLLM

# Automatically picks the best backend (MLX is used automatically on Apple Silicon)
model = HLLM(model_path="microsoft/Phi-3-mini-4k-instruct")

# Generate text
result = model.generate("Write a short story about a robot.")
print(result)
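
Note: judging from the examples in this README, model_path accepts either a Hugging Face repo id (as above) or a local directory:

# Assumed from the server example below, which loads a local checkout
model = HLLM(model_path="./TinyLlama-1.1B-Chat-v1.0")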

Apple Silicon Optimization (MLX)

On M1/M2/M3 Macs, the MLX backend gives the best performance:

# Install MLX support
pip install light-llm-hp[mlx]

from hllm import HLLM

# Explicitly select the MLX backend (recommended)
model = HLLM(model_path="mlx-community/Llama-3.2-1B-Instruct-4bit", backend="mlx")

# Or use PyTorch with MPS
model = HLLM(model_path="microsoft/Phi-3-mini-4k-instruct", backend="pytorch", device="mps")

# Inspect backend info
print(model.get_info())
# {'name': 'mlx', 'device': 'mlx', ...}

Performance Comparison

Typical performance on an M1 MacBook Pro (Llama-3.2-1B, 100 tokens):

Backend        First-token latency   Throughput   Memory usage
MLX            ~50ms                 ~45 tok/s    ~800MB
PyTorch MPS    ~150ms                ~15 tok/s    ~1200MB
PyTorch CPU    ~500ms                ~5 tok/s     ~1200MB

Run the benchmark:

python examples/benchmark.py
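
The contents of examples/benchmark.py are not shown here; as a rough sketch, an equivalent throughput measurement can be written against the generate() call from the Quick Start (the whitespace token count below is only a stand-in for the model's tokenizer):

import time

from hllm import HLLM

# A rough timing sketch; the bundled benchmark may measure
# first-token latency separately (e.g. via a streaming API).
model = HLLM(model_path="mlx-community/Llama-3.2-1B-Instruct-4bit", backend="mlx")

start = time.perf_counter()
result = model.generate("Write a short story about a robot.")
elapsed = time.perf_counter() - start

# Whitespace split is only an approximate token count.
n_tokens = len(result.split())
print(f"{n_tokens} tokens in {elapsed:.2f}s -> {n_tokens / elapsed:.1f} tok/s")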

REST API Server (OpenAI-Compatible)

Install the API dependencies

pip install light-llm-hp[api]

Start the server

python -m hllm.server --model ./TinyLlama-1.1B-Chat-v1.0 --port 8000
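
Once the server is running, a quick smoke test against the /v1/models endpoint (documented below) confirms it is reachable:

import httpx

# List the models served by the local endpoint.
resp = httpx.get("http://localhost:8000/v1/models")
resp.raise_for_status()
print(resp.json())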

Using the official OpenAI client

import httpx
from openai import OpenAI

# Disable proxies to avoid 502 errors
http_client = httpx.Client(trust_env=False)

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed",
    http_client=http_client
)

# Chat
response = client.chat.completions.create(
    model="hllm-model",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)

Full example: examples/test_openai_client.py

OpenAI-Compatible Endpoints

Endpoint              Method   Description
/v1/models            GET      List models
/v1/chat/completions  POST     Chat completion (streaming supported)
/v1/completions       POST     Text completion (streaming supported)

See docs/api.md for the detailed API documentation.
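
Both completion endpoints support streaming. A sketch using the standard OpenAI client streaming interface, with the same client configuration as above:

import httpx
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="not-needed",
    http_client=httpx.Client(trust_env=False),
)

# Stream a chat completion; the server sends incremental deltas.
stream = client.chat.completions.create(
    model="hllm-model",
    messages=[{"role": "user", "content": "Tell me a joke."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()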

Directory Structure

hllm/
├── hllm/              # Core modules
│   ├── __init__.py
│   ├── model.py       # Model loading and inference
│   ├── tokenizer.py   # Tokenizer wrapper
│   ├── generate.py    # Generation logic
│   ├── server.py      # REST API server
│   └── client.py      # REST API client
├── tests/             # Tests
├── examples/          # Examples
└── docs/              # Documentation

Download files

Download the file for your platform.

Source Distribution

light_llm_hp-0.4.2.tar.gz (19.2 kB)

Uploaded Source

Built Distribution


light_llm_hp-0.4.2-py3-none-any.whl (19.0 kB)

Uploaded Python 3

File details

Details for the file light_llm_hp-0.4.2.tar.gz.

File metadata

  • Download URL: light_llm_hp-0.4.2.tar.gz
  • Upload date:
  • Size: 19.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for light_llm_hp-0.4.2.tar.gz
Algorithm     Hash digest
SHA256        a197525ea1abf82bc71e4ca514f22acf9eeb28acd5c28258c262355f3ada211e
MD5           c7f5eb549057a3adc0b722a3dc2b66f8
BLAKE2b-256   600847da5a4494090578df2a82b4c15263ce8694cef768beb048ab11a3b16313


File details

Details for the file light_llm_hp-0.4.2-py3-none-any.whl.

File metadata

  • Download URL: light_llm_hp-0.4.2-py3-none-any.whl
  • Upload date:
  • Size: 19.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for light_llm_hp-0.4.2-py3-none-any.whl
Algorithm     Hash digest
SHA256        3025047be3494a78143be36d3095f170c87440425549a1f7710622928016bd63
MD5           445488bd436e26b5af34842b4cb0ea01
BLAKE2b-256   82036def131b8a9e1b071c9131c2b3cb4dbca6aa3c89ffcfdde132ee9a5ff52e

