Python bindings for encoderfile.


🚀 Overview

Encoderfile packages transformer encoders—optionally with classification heads—into a single, self-contained executable. No Python runtime, no dependencies, no network calls. Just a fast, portable binary that runs anywhere.

While Llamafile focuses on generative models, Encoderfile is purpose-built for encoder architectures with optional classification heads. It supports embedding, sequence classification, and token classification models—covering most encoder-based NLP tasks, from text similarity to classification and tagging—all within one compact binary.

Under the hood, Encoderfile uses ONNX Runtime for inference, ensuring compatibility with a wide range of transformer architectures.

Why?

  • Smaller footprint: a single binary measured in tens-to-hundreds of megabytes, not gigabytes of runtime and packages
  • Compliance-friendly: deterministic, offline, and safe to run inside strict security boundaries
  • Integration-ready: drop into existing systems as a CLI, microservice, or API without refactoring your stack

Encoderfiles can run as:

  • REST API
  • gRPC microservice
  • CLI for batch processing
  • MCP server (Model Context Protocol)

[Architecture diagram]

Supported Architectures

Encoderfile supports the following Hugging Face model classes (and their ONNX-exported equivalents):

| Task | Supported classes | Example models |
|------|-------------------|----------------|
| Embeddings / Feature Extraction | AutoModel, AutoModelForMaskedLM | bert-base-uncased, distilbert-base-uncased |
| Sequence Classification | AutoModelForSequenceClassification | distilbert-base-uncased-finetuned-sst-2-english, roberta-large-mnli |
| Token Classification | AutoModelForTokenClassification | dslim/bert-base-NER, bert-base-cased-finetuned-conll03-english |
  • ✅ All architectures must be encoder-only transformers — no decoders, no encoder–decoder hybrids (so no T5, no BART).
  • ⚙️ Models must have ONNX-exported weights (path/to/your/model/model.onnx).
  • 🧠 The ONNX graph's inputs must include input_ids; attention_mask is optional (a quick check is sketched after this list).
  • 🚫 Models relying on generation heads (AutoModelForSeq2SeqLM, AutoModelForCausalLM, etc.) are not supported.
  • XLNet, Transformer-XL, and derivative architectures are not yet supported.
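
You can verify these requirements on an exported model by inspecting its graph inputs. A minimal sketch using the onnx Python package (assumes pip install onnx; substitute your own model path):

# Inspect the graph inputs of an exported model
import onnx

model = onnx.load("path/to/your/model/model.onnx")
input_names = [inp.name for inp in model.graph.input]
print(input_names)

# input_ids is required; attention_mask is optional
assert "input_ids" in input_names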

📦 Installation

Option 1: Download Pre-built CLI Tool (Recommended)

Download the encoderfile CLI tool to build your own model binaries:

curl -fsSL https://raw.githubusercontent.com/mozilla-ai/encoderfile/main/install.sh | sh

Note for Windows users: pre-built binaries are not available for Windows. Please see our guide on building from source for instructions.

Move the binary to a location in your PATH:

# Linux/macOS
sudo mv encoderfile /usr/local/bin/

# Or add to your user bin
mkdir -p ~/.local/bin
mv encoderfile ~/.local/bin/

Option 2: Build CLI Tool from Source

See our guide on building from source for detailed instructions on building the CLI tool.

Quick build:

cargo build --bin encoderfile --release
./target/release/encoderfile --help

🚀 Quick Start

Step 1: Prepare Your Model

First, you need an ONNX-exported model. Export any HuggingFace model:

Requires Python 3.13+ for ONNX export

# Install optimum for ONNX export
pip install optimum[onnx]

# Export a sentiment analysis model
optimum-cli export onnx \
  --model distilbert-base-uncased-finetuned-sst-2-english \
  --task text-classification \
  ./sentiment-model
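
Optionally, smoke-test the export before packaging it. A minimal sketch that runs the exported graph directly with ONNX Runtime and the exported tokenizer (assumes pip install onnxruntime transformers):

# Load the exported tokenizer and ONNX graph
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./sentiment-model")
session = ort.InferenceSession("./sentiment-model/model.onnx")

# Tokenize to NumPy int64 arrays and run the graph directly
enc = tokenizer("This is the cutest cat ever!", return_tensors="np")
(logits,) = session.run(None, {"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})
print(logits)  # one row of logits over the model's labels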

Step 2: Create Configuration File

Create sentiment-config.yml:

encoderfile:
  name: sentiment-analyzer
  path: ./sentiment-model
  model_type: sequence_classification
  output_path: ./build/sentiment-analyzer.encoderfile

Step 3: Build Your Encoderfile

Use the downloaded encoderfile CLI tool:

encoderfile build -f sentiment-config.yml

This creates a self-contained binary at ./build/sentiment-analyzer.encoderfile.

Step 4: Run Your Model

Start the server:

./build/sentiment-analyzer.encoderfile serve

The server will start on http://localhost:8080 by default.

Making Predictions

Sentiment Analysis:

curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": [
      "This is the cutest cat ever!",
      "Boring video, waste of time",
      "These cats are so funny!"
    ]
  }'

Response:

{
  "results": [
    {
      "logits": [0.00021549065, 0.9997845],
      "scores": [0.00021549074, 0.9997845],
      "predicted_index": 1,
      "predicted_label": "POSITIVE"
    },
    {
      "logits": [0.9998148, 0.00018516644],
      "scores": [0.9998148, 0.0001851664],
      "predicted_index": 0,
      "predicted_label": "NEGATIVE"
    },
    {
      "logits": [0.00014975034, 0.9998503],
      "scores": [0.00014975043, 0.9998503],
      "predicted_index": 1,
      "predicted_label": "POSITIVE"
    }
  ],
  "model_id": "sentiment-analyzer"
}
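
The same endpoint works from any HTTP client. A minimal Python client sketch mirroring the curl call above (assumes pip install requests):

# POST to the running encoderfile server and print predictions
import requests

resp = requests.post(
    "http://localhost:8080/predict",
    json={"inputs": ["This is the cutest cat ever!", "Boring video, waste of time"]},
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["predicted_label"], max(result["scores"]))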

Embeddings:

curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": ["Hello world"],
    "normalize": true
  }'
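
Here normalize: true requests normalized embedding vectors; by the usual convention this means L2-normalized outputs, so cosine similarity between two embeddings reduces to a dot product.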

Token Classification (NER):

curl -X POST http://localhost:8080/predict \
  -H "Content-Type: application/json" \
  -d '{
    "inputs": ["Apple Inc. is located in Cupertino, California"]
  }'

🎯 Usage Modes

| Mode | Command | Default |
|------|---------|---------|
| REST API | ./my-model.encoderfile serve | http://localhost:8080 |
| gRPC | ./my-model.encoderfile serve | localhost:50051 |
| CLI | ./my-model.encoderfile infer "text" | stdout |
| MCP Server | ./my-model.encoderfile mcp | |

Both HTTP and gRPC servers start by default. Use --disable-grpc or --disable-http to run only one.

See the CLI Reference for all server options, port configuration, and output formats.

📚 Documentation

🛠️ Building Custom Encoderfiles

Once you have the encoderfile CLI tool installed, you can build binaries from any compatible HuggingFace model.

See our guide on building from source for detailed instructions including:

  • How to export models to ONNX format
  • Configuration file options
  • Advanced features (Lua transforms, custom paths, etc.)
  • Troubleshooting tips

Quick workflow:

  1. Export your model to ONNX: optimum-cli export onnx ...
  2. Create a config file: config.yml
  3. Build the binary: encoderfile build -f config.yml
  4. Deploy anywhere: ./build/my-model.encoderfile serve

🤝 Contributing

We welcome contributions! See CONTRIBUTING.md for guidelines.

Development Setup

# Clone the repository
git clone https://github.com/mozilla-ai/encoderfile.git
cd encoderfile

# Set up development environment
make setup

# Run tests
make test

# Build documentation 
make docs

📄 License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
