Trillim

The fastest inference framework to run BitNet models on CPUs.

What is Trillim?

Quick Start

Installation

  • Python 3.12+ required
  • Install with uv (recommended) or pip

Pick your platform for full instructions:

Note: The rest of this README shows bare trillim commands. If you're using uv, prefix each command with uv run (e.g. uv run trillim chat ...).

Quantize your own model

If you have a HuggingFace BitNet model with safetensors weights:

# Quantize model weights → qmodel.tensors + rope.cache
trillim quantize <path-to-model> --model

# Optionally extract a PEFT LoRA adapter → qmodel.lora
trillim quantize <path-to-model> --adapter <path-to-adapter>

Chat

Start an interactive conversation in your terminal:

trillim chat Trillim/BitNet-TRNQ

Multi-turn conversations are supported with automatic prompt caching for fast follow-ups. Use /new to start a fresh conversation, or q to quit.

See the Chat guide for details on LoRA adapters, sampling parameters, and performance tips.

Search-Augmented Chat

Trillim supports pluggable inference harnesses. For web-search-enabled models, use:

trillim chat Trillim/BitNet-Search-TRNQ --harness search

By default, search uses DuckDuckGo (ddgs). To use Brave:

export SEARCH_API_KEY=<your_api_key>
trillim chat Trillim/BitNet-Search-TRNQ --harness search --search-provider brave

The search harness emits status markers while it runs search and synthesis steps. See Chat for full behavior and troubleshooting.

API Server

Trillim includes an OpenAI-compatible API server:

# Start the server
trillim serve Trillim/BitNet-TRNQ

# With voice pipeline (speech-to-text + text-to-speech)
# Requires optional `voice` dependencies:
# docs/server.md -> "Voice Optional Dependencies"
trillim serve Trillim/BitNet-TRNQ --voice

Endpoints:

  • POST /v1/chat/completions — chat completions (streaming supported)
  • POST /v1/completions — text completions
  • GET /v1/models — list loaded models
  • POST /v1/models/load — hot-swap models, LoRA adapters, and harness/search settings at runtime
  • POST /v1/audio/transcriptions — speech-to-text (with --voice)
  • POST /v1/audio/speech — text-to-speech (with --voice)
  • GET /v1/voices — list available TTS voices
  • POST /v1/voices — register a custom voice from audio (see Voice Cloning Setup)

To use the search harness server-side, start the server normally, then set "harness": "search" (plus an optional "search_provider") via POST /v1/models/load.
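As a sketch, the /v1/models/load call is a plain JSON POST. The "harness" and "search_provider" keys are the ones named above; any other fields would be assumptions, so only those two are shown (the send itself is commented out because it assumes a server on localhost:8000):

```python
import json
from urllib import request

# Settings for POST /v1/models/load. "harness" and "search_provider"
# are the keys documented in this README.
payload = {"harness": "search", "search_provider": "brave"}

req = request.Request(
    "http://localhost:8000/v1/models/load",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running:
# with request.urlopen(req) as resp:
#     print(resp.status)
```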

Works with the OpenAI Python client out of the box:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="unused")
response = client.chat.completions.create(
    model="BitNet-TRNQ",
    messages=[{"role": "user", "content": "Hello!"}],
)
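Since the server speaks the standard wire format, any HTTP client works. A minimal sketch of the same request using only the Python standard library (the send is commented out because it assumes a server on localhost:8000):

```python
import json
from urllib import request

# The same chat completion expressed as a raw POST to /v1/chat/completions.
body = {
    "model": "BitNet-TRNQ",
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = request.Request(
    "http://localhost:8000/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With the server running:
# with request.urlopen(req) as resp:
#     reply = json.load(resp)
#     print(reply["choices"][0]["message"]["content"])
```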

See the Server guide for full endpoint documentation, request/response schemas, the Python SDK, and voice pipeline usage.

LoRA Adapters

Trillim supports PEFT LoRA adapters as bf16 corrections on top of the ternary base model. The adapter lives in its own directory (separate from the base model) and must be quantized first:

# Quantize a PEFT adapter into Trillim's format
trillim quantize <path-to-base-model> --adapter <path-to-adapter>

# Chat with the base model + adapter
trillim chat Trillim/BitNet-TRNQ --lora <adapter-dir>

# Or pull a pre-quantized adapter and use it by ID
trillim pull Trillim/BitNet-GenZ-LoRA-TRNQ
trillim chat Trillim/BitNet-TRNQ --lora Trillim/BitNet-GenZ-LoRA-TRNQ

Adapters can also be hot-swapped at runtime via the API server's POST /v1/models/load endpoint. See the Server guide for details.

Runtime Quantization

Separately from the offline trillim quantize step (which converts model weights to ternary), Trillim can quantize specific layers at inference time to reduce memory usage. This is controlled with two flags available on both chat and serve:

  • --lora-quant <type> — quantize LoRA adapter layers. Options: none, int8, q4_0, q5_0, q6_k, q8_0. Only applies when using --lora.
  • --unembed-quant <type> — quantize the unembedding (output projection) layer. Options: int8, q4_0, q5_0, q6_k, q8_0.

# Quantize LoRA layers to int8 for lower memory
trillim chat Trillim/BitNet-TRNQ --lora <adapter-dir> --lora-quant int8

# Quantize the unembed layer to q4_0
trillim chat Trillim/BitNet-TRNQ --unembed-quant q4_0

# Both at once
trillim serve Trillim/BitNet-TRNQ --lora-quant q8_0 --unembed-quant q4_0

Lower quantization levels (e.g. q4_0) use less memory at a small quality cost. These options can also be set per-request when hot-swapping models via POST /v1/models/load. See the CLI reference for the full flag list.

Voice Cloning Setup

The voice pipeline (--voice) includes 8 predefined voices that work out of the box: alba, marius, javert, jean, fantine, cosette, eponine, azelma.

To register custom voices (voice cloning via POST /v1/voices), you need to accept the PocketTTS model terms and authenticate with HuggingFace:

  1. Go to kyutai/pocket-tts on HuggingFace and accept the model's terms.
  2. Create a token on HuggingFace (under Access Tokens) with Read permissions.
  3. Log in locally so the token is available to download the voice cloning weights:
hf auth login

This only needs to be done once. After that, custom voice registration works automatically. If you skip this step, you'll get an error when trying to register a custom voice — predefined voices will still work fine.
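Once the server is running with --voice, text-to-speech goes through POST /v1/audio/speech. A hedged sketch: the "input" and "voice" field names follow the OpenAI audio API convention and are an assumption here, so check the Server guide for the exact request body (the send is commented out because it assumes a running server):

```python
import json
from urllib import request

# Synthesize speech with one of the predefined voices. The field names
# ("input", "voice") are assumed from the OpenAI /v1/audio/speech schema.
payload = {"input": "Hello from Trillim!", "voice": "alba"}
req = request.Request(
    "http://localhost:8000/v1/audio/speech",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a --voice server running, write the returned audio to a file:
# with request.urlopen(req) as resp, open("hello.wav", "wb") as f:
#     f.write(resp.read())
```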

Supported Architectures

  • BitnetForCausalLM — BitNet with ternary weights and ReLU² activation
  • LlamaForCausalLM — Llama-style with SiLU activation

Platform Support

  • x86_64 (AVX2): Supported
  • ARM64 (NEON): Supported

Thread count is auto-detected as num_cores - 2. Override it with the --threads N CLI flag.
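The auto-detected default can be reproduced in a line of Python (a sketch of the rule as stated; the clamp to at least one thread is an assumption for very small machines):

```python
import os

# num_cores - 2, never below 1
default_threads = max(1, (os.cpu_count() or 1) - 2)
print(default_threads)
```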

Documentation

License

The Trillim Python SDK source code is MIT-licensed. The C++ inference engine binaries (inference, trillim-quantize) bundled in the pip package are proprietary — you may use them as part of Trillim but may not reverse-engineer or redistribute them separately. See LICENSE for full terms.



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions


  • trillim-0.5.0-py3-none-win_arm64.whl (1.6 MB): Python 3, Windows ARM64
  • trillim-0.5.0-py3-none-win_amd64.whl (1.8 MB): Python 3, Windows x86-64
  • trillim-0.5.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (538.7 kB): Python 3, manylinux: glibc 2.17+ x86-64
  • trillim-0.5.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl (553.3 kB): Python 3, manylinux: glibc 2.17+ ARM64
  • trillim-0.5.0-py3-none-macosx_11_0_x86_64.whl (1.4 MB): Python 3, macOS 11.0+ x86-64
  • trillim-0.5.0-py3-none-macosx_11_0_arm64.whl (1.5 MB): Python 3, macOS 11.0+ ARM64

File details

trillim-0.5.0-py3-none-win_arm64.whl

  • Size: 1.6 MB
  • Tags: Python 3, Windows ARM64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.0
  • SHA256: fdb1bb6e4964cb2d61a207357622b8ea4fd6daba156d0fbc40553392a94b20a4
  • MD5: dce1ba7cf330950b8c0a3d3ff29202d8
  • BLAKE2b-256: a62cc35cd65cf8463e58d682492197aad94f4b621ca704b433a976b812828241

trillim-0.5.0-py3-none-win_amd64.whl

  • Size: 1.8 MB
  • Tags: Python 3, Windows x86-64
  • Uploaded using Trusted Publishing? No
  • Uploaded via: uv/0.9.0
  • SHA256: 0cdb479ab7a8c49cdbbc2a1c70c77311560d248462654915aa6a826a0c2a8520
  • MD5: ff0fadf6901e3c6852ccb4bad57fe32d
  • BLAKE2b-256: fd75802cdf47decee399039b2c1ffd71ab1ba6ee746f635e3d2977ceba7c1f39

trillim-0.5.0-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

  • SHA256: 25aa505fbe55db14d1f7570ead3540b91c22059bd26ace7df8ba74cfd66f6a3a
  • MD5: a6452adb56e5e5c0cacc6f55e961763d
  • BLAKE2b-256: 354813337260a569ae1df330d14c2a9bbcbc54d43d8006e730275e51b7054572

trillim-0.5.0-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl

  • SHA256: 3cb00276d1c6f3ec652e4af34d4a4e2647d08964c26556debdb81029a5097f27
  • MD5: db53d4b2c562c1d0fa046eb56f465c0f
  • BLAKE2b-256: b29b093caedd069974c81b19713cb600efff863d72d4a16d9b17a41106752666

trillim-0.5.0-py3-none-macosx_11_0_x86_64.whl

  • SHA256: 25079d600613137f9b12f4405252d9cb7626aa98e4028a56e4ae9cf8ba9c3f20
  • MD5: 3d0744f790ed7b6585d85e54524ba416
  • BLAKE2b-256: 388b7e72b207727420527f39fd928cff3e2d77c5f9b3bf74e1031535d450af47

trillim-0.5.0-py3-none-macosx_11_0_arm64.whl

  • SHA256: 97d0c0434443264c17a0cc17edc35af0c12dc2bcf6bdfc9d281219a0767a6a84
  • MD5: 3766ef7a2a422513818dba53f14066e8
  • BLAKE2b-256: ad031de60b5e94b5285d4025ad0137a97ea8af7a349435b0103578ad859e8818
