
Modern ONNX CLIP

A modern, lightweight, and robust ONNX runtime for OpenAI CLIP and OpenCLIP models.

This library allows you to run OpenAI CLIP and OpenCLIP models in production environments without installing PyTorch. It provides a simple CLI to convert models from the massive OpenCLIP model zoo and a pure-Python inference engine powered by onnxruntime, numpy, and pillow.

🚀 Features

  • Zero PyTorch Dependency in Production: Run inference with just numpy, pillow, and onnxruntime. Drastically reduces Docker image size and memory usage.
  • Easy Conversion: Convert any model from OpenCLIP (ViT-B-32, ViT-L-14, SigLIP, etc.) with a single command.
  • Modern Tooling: Built with uv, ruff, and strictly typed with pyright.
  • Fast: Leverages ONNX Runtime (CPU or CUDA) for high-performance inference.
  • Drop-in Replacement: Designed to replace the unmaintained onnx_clip package with better model support.

📦 Installation

For Production (Inference Only)

If you only need to run models, install the package with the cpu or gpu extra. This does not install PyTorch.

uv add "modern-onnx-clip[cpu]"
# or for GPU support
uv add "modern-onnx-clip[gpu]"

For Development & Exporting

To convert models, you need the export dependencies (PyTorch, OpenCLIP).

pip install "modern-onnx-clip[export]"

🛠️ Usage

1. Convert a Model

First, convert a model from the OpenCLIP registry. You need the [export] extras installed for this step.

# Syntax: onnx-clip convert --model <ARCH> --pretrained <TAG> --output <DIR>

# Example: Standard ViT-B-32
onnx-clip convert --model ViT-B-32 --pretrained laion2b_s34b_b79k --output ./models/vit-b-32

# Example: ViT-L-14 (Higher accuracy)
onnx-clip convert --model ViT-L-14 --pretrained openai --output ./models/vit-l-14

This will create a folder containing visual.onnx, textual.onnx, and configuration files.
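
If you want to sanity-check the export without installing anything beyond onnxruntime, you can open the graphs directly and inspect their inputs and outputs. A minimal sketch (the paths assume the ViT-B-32 export from the example above):

import onnxruntime as ort

# Open the exported image encoder and print its I/O signature.
# "visual.onnx" is the file name the conversion step writes.
session = ort.InferenceSession(
    "./models/vit-b-32/visual.onnx", providers=["CPUExecutionProvider"]
)
for inp in session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in session.get_outputs():
    print("output:", out.name, out.shape, out.type)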

2. Run Inference (Python)

Now you can use the model in your application. This step works without PyTorch.

from onnx_clip import OnnxClip
from PIL import Image

# 1. Load the model (Provide the directory where you exported the model)
model = OnnxClip(model_dir="./models/vit-b-32", device="cpu")  # use 'cuda' for GPU

# 2. Get Image Embeddings
image = Image.open("cat.jpg")
image_features = model.get_image_embedding(image)
# shape: (1, 512)

# 3. Get Text Embeddings
text_features = model.get_text_embedding(["a photo of a cat", "a photo of a dog"])
# shape: (2, 512)

# 4. Calculate Similarity
# (The embeddings are already normalized)
similarity = image_features @ text_features.T
print(similarity)
# [[0.28, 0.15]]
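
Because the embeddings are L2-normalized, the matrix product above is a cosine similarity. To turn the scores into per-label probabilities, as in CLIP-style zero-shot classification, you can scale by CLIP's usual logit scale (about 100) and apply a softmax. A small numpy sketch continuing the example above; the logit scale is an assumption, since the exact trained value varies per checkpoint:

import numpy as np

# Scale cosine similarities and softmax over the text candidates.
logits = 100.0 * similarity
logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(probs)  # the "cat" label should dominate for cat.jpg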

3. CLI Inference (Testing)

You can also test a model directly from the CLI:

onnx-clip run --model-dir ./models/vit-b-32 --image cat.jpg --text "a cute cat"

⚙️ GPU Support

To run on NVIDIA GPUs, simply install with the [gpu] extra:

uv add "modern-onnx-clip[gpu]"

Then initialize the model with device="cuda".
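
Note that onnxruntime only exposes CUDA when a GPU-enabled build and matching drivers are present, so it can be worth guarding the device choice rather than failing at session creation. A small sketch using the constructor shown earlier:

import onnxruntime as ort

from onnx_clip import OnnxClip

# Fall back to CPU if this onnxruntime build has no CUDA provider.
providers = ort.get_available_providers()
device = "cuda" if "CUDAExecutionProvider" in providers else "cpu"
model = OnnxClip(model_dir="./models/vit-b-32", device=device)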

🏗️ Project Structure

  • exporter.py: Handles loading PyTorch models and exporting them to ONNX graphs.
  • model.py: The lightweight inference engine. Abstraction over ONNX Runtime sessions.
  • preprocessor.py: Reimplementation of CLIP's image preprocessing using only NumPy and Pillow (a sketch of this transform follows the list).
  • tokenizer.py: Handles text tokenization (BPE) without heavy external dependencies.
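
For reference, the classic CLIP image transform (bicubic resize of the short side, center crop, per-channel normalization with CLIP's published constants) is small enough to sketch with only Pillow and NumPy. This is an illustration of the transform, not the library's exact code:

import numpy as np
from PIL import Image

# CLIP's published normalization constants.
MEAN = np.array([0.48145466, 0.4578275, 0.40821073], dtype=np.float32)
STD = np.array([0.26862954, 0.26130258, 0.27577711], dtype=np.float32)

def preprocess(image: Image.Image, size: int = 224) -> np.ndarray:
    # Resize so the short side equals `size`, using bicubic interpolation.
    w, h = image.size
    scale = size / min(w, h)
    image = image.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    # Center-crop to a size x size square.
    left = (image.width - size) // 2
    top = (image.height - size) // 2
    image = image.crop((left, top, left + size, top + size))
    # Scale to [0, 1], normalize per channel, return as (1, 3, H, W).
    arr = np.asarray(image.convert("RGB"), dtype=np.float32) / 255.0
    arr = (arr - MEAN) / STD
    return arr.transpose(2, 0, 1)[None]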

🧪 Development & Testing

We use pytest for testing.

Standard Tests

Run the standard test suite (ensure you have installed the [cpu] or [gpu] extra):

uv run --extra cpu pytest

Manual Verification Tests

The tests/manual/ directory contains scripts that verify numerical consistency between this library (ONNX) and the original PyTorch CLIP; a sketch of such a comparison follows the steps below. These tests are skipped by default if dependencies are missing. To run them:

  1. Install the clip library manually (it cannot be a package dependency due to PyPI restrictions):

    pip install git+https://github.com/openai/CLIP.git
    
  2. Export a model to a local directory (e.g., ../models/ViT-B-32):

    onnx-clip convert --model ViT-B-32 --pretrained laion2b_s34b_b79k --output ../models/ViT-B-32
    
  3. Set the environment variable and run:

    # Linux/Mac
    export ONNX_CLIP_MODEL_DIR="../models/ViT-B-32"
    pytest tests/manual/
    
    # Windows (PowerShell)
    $env:ONNX_CLIP_MODEL_DIR="../models/ViT-B-32"
    pytest tests/manual/
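
The comparison itself can be as simple as encoding the same image on both sides and checking that the embeddings agree. A hedged sketch (the tolerance is illustrative, and the two sides only match if the exported weights are the same checkpoint the reference loads, e.g. --pretrained openai for ViT-B-32):

import os

import clip  # openai/CLIP, installed in step 1
import numpy as np
import torch
from PIL import Image

from onnx_clip import OnnxClip

model_dir = os.environ["ONNX_CLIP_MODEL_DIR"]
image = Image.open("cat.jpg")

# ONNX side.
onnx_emb = OnnxClip(model_dir=model_dir, device="cpu").get_image_embedding(image)

# PyTorch reference side.
ref_model, ref_preprocess = clip.load("ViT-B/32", device="cpu")
with torch.no_grad():
    ref = ref_model.encode_image(ref_preprocess(image).unsqueeze(0))
    ref = ref / ref.norm(dim=-1, keepdim=True)

# The embeddings should agree up to export/runtime noise.
assert np.allclose(onnx_emb, ref.numpy(), atol=1e-3)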
    

License

MIT License.

Acknowledgements

Built on top of the incredible work by OpenAI and OpenCLIP. Inspired by the original onnx_clip package.

Download files

Download the file for your platform.

Source Distribution

modern_onnx_clip-0.1.5.tar.gz (135.1 kB)

Built Distribution

modern_onnx_clip-0.1.5-py3-none-any.whl (14.7 kB)

File details

Details for the file modern_onnx_clip-0.1.5.tar.gz.

File metadata

  • Size: 135.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

  • SHA256: 08ab319341f8ed206c65c485e970a2635aebe9bd5d69e446d3ac487571ea7847
  • MD5: f3052e2d7f0a8fc07776a2aa39cc7528
  • BLAKE2b-256: eaf94f797c8962b9dd37c2f280c0e0e1398536222f2ba515a0a4e977d2197e30

Provenance

The following attestation bundles were made for modern_onnx_clip-0.1.5.tar.gz:

Publisher: publish.yml on Neizvestnyj/modern-onnx-clip

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file modern_onnx_clip-0.1.5-py3-none-any.whl.

File hashes

  • SHA256: ed9f5606bf9373d6fad99bd9b0898889a6e4c812e5f4458c443f3e4b53b5cea7
  • MD5: 4166dab2d52092741f32240025802469
  • BLAKE2b-256: d0cb60007c2d0d49d58c93fd3c9e2b763196eef48e43234a20ee82fee06a0904

Provenance

The following attestation bundles were made for modern_onnx_clip-0.1.5-py3-none-any.whl:

Publisher: publish.yml on Neizvestnyj/modern-onnx-clip

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
