# all_clip

Load any CLIP model with a standardized interface.
## Install

```bash
pip install all_clip
```
## Python examples

```python
from all_clip import load_clip
import torch
from PIL import Image
import pathlib

model, preprocess, tokenizer = load_clip("open_clip:ViT-B-32/laion2b_s34b_b79k", device="cpu", use_jit=False)

image = preprocess(Image.open(str(pathlib.Path(__file__).parent.resolve()) + "/CLIP.png")).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)  # prints: [[1., 0., 0.]]
```
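The last line of the example scores the image against each label: because both embeddings are L2-normalized, the dot product is a cosine similarity, which is scaled by 100 and turned into a probability distribution with a softmax. As a minimal pure-Python sketch of that scoring step (the `clip_label_probs` helper is hypothetical, for illustration only — it is not part of all_clip):

```python
import math


def clip_label_probs(image_feature, text_features, scale=100.0):
    """Score one image embedding against several text embeddings.

    Both inputs are assumed to be L2-normalized, so each dot product is a
    cosine similarity; the scaled similarities are converted into a
    probability distribution with a softmax.
    """
    logits = [
        scale * sum(i * t for i, t in zip(image_feature, text_feature))
        for text_feature in text_features
    ]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]


# Toy 2-d normalized embeddings: the image matches the first label best,
# so nearly all the probability mass lands on it.
image = [1.0, 0.0]
texts = [[1.0, 0.0], [0.0, 1.0], [0.7071, 0.7071]]
print(clip_label_probs(image, texts))
```

The large scale factor (the CLIP logit scale) makes the softmax sharply peaked, which is why the README example prints probabilities close to `[[1., 0., 0.]]`.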
Check out these examples to call this as a library.
## API

This module exposes a single function, `load_clip`:

- `clip_model`: CLIP model to load (default `"ViT-B/32"`). Specify it as `"open_clip:ViT-B-32/laion2b_s34b_b79k"` to use an OpenCLIP model, or as `"hf_clip:patrickjohncyh/fashion-clip"` to use a Hugging Face CLIP model.
- `use_jit`: use JIT for the CLIP model (default `True`)
- `warmup_batch_size`: warmup batch size (default `1`)
- `clip_cache_path`: cache path for CLIP (default `None`)
- `device`: device (default `None`)
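The `clip_model` string follows a `backend:model` naming convention, with no prefix meaning the OpenAI backend. Purely as an illustration of that convention, here is a minimal sketch of how such a string could be split; the `parse_clip_model` helper is hypothetical and not part of all_clip's API:

```python
def parse_clip_model(clip_model: str):
    """Split a clip_model string into (backend, model_name).

    No prefix selects the OpenAI CLIP backend; "open_clip:", "hf_clip:"
    and "nm:" select the OpenCLIP, Hugging Face and DeepSparse backends.
    """
    known_backends = {"open_clip", "hf_clip", "nm"}
    if ":" in clip_model:
        prefix, rest = clip_model.split(":", 1)
        if prefix in known_backends:
            return prefix, rest
    return "openai", clip_model


print(parse_clip_model("open_clip:ViT-B-32/laion2b_s34b_b79k"))  # ('open_clip', 'ViT-B-32/laion2b_s34b_b79k')
print(parse_clip_model("ViT-B/32"))  # ('openai', 'ViT-B/32')
```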
## Related projects

- clip-retrieval: use CLIP for inference and retrieval
- open_clip: train CLIP models
- CLIP_benchmark: evaluate CLIP models
## Supported models

### OpenAI

Specify the model as `"ViT-B-32"`.

### OpenCLIP

Specify the model as `"open_clip:ViT-B-32/laion2b_s34b_b79k"` to use OpenCLIP.

### HF CLIP

Specify the model as `"hf_clip:patrickjohncyh/fashion-clip"` to use a Hugging Face CLIP model.

### DeepSparse backend

DeepSparse is an inference runtime for fast sparse model inference on CPUs. There is a backend available within clip-retrieval: install it with `pip install deepsparse-nightly[clip]`, and specify a `clip_model` with a prepended `"nm:"`, such as `"nm:neuralmagic/CLIP-ViT-B-32-256x256-DataComp-s34B-b86K-quant-ds"` or `"nm:mgoin/CLIP-ViT-B-32-laion2b_s34b_b79k-ds"`.
## For development

Either locally, or in gitpod (do `export PIP_USER=false` there).

Setup a virtualenv:

```bash
python3 -m venv .env
source .env/bin/activate
pip install -e .
```

To run tests:

```bash
pip install -r requirements-test.txt
```

Then:

```bash
make lint
make test
```

You can use `make black` to reformat the code.

Run a specific test with `python -m pytest -x -s -v tests -k "dummy"`.