OpenAI CLIP text encoders for multiple languages!

Project description


Multilingual-CLIP

OpenAI CLIP text encoders for any language

Live Demo · Pre-trained Models · Report Bug

Open In Colab pypi

Overview

OpenAI recently released the paper Learning Transferable Visual Models From Natural Language Supervision, in which they present the CLIP (Contrastive Language–Image Pre-training) model. This model is trained to connect text and images by matching their corresponding vector representations using a contrastive learning objective. CLIP consists of two separate models, a visual encoder and a text encoder, which were trained on a whopping 400 million images and corresponding captions. OpenAI has since released a set of their smaller CLIP models, which can be found on the official CLIP GitHub.

Demo

A live demonstration of multilingual text-image retrieval with M-CLIP can be found here! The demo, created by Rom1504, lets you search the LAION-400M dataset in various languages using M-CLIP.

This repository contains

  • Pre-trained CLIP-Text encoders for multiple languages
  • PyTorch & TensorFlow inference code
  • TensorFlow training code

Requirements

While other versions may work equally well, we have worked with the following:

  • Python = 3.6.9
  • Transformers = 4.8.1

Install

pip install multilingual-clip torch

You can also choose to pip install tensorflow instead of torch.

Inference Usage

Inference code for TensorFlow is also available in inference_example.py.

from multilingual_clip import pt_multilingual_clip
import transformers

texts = [
    'Three blind horses listening to Mozart.',
    'Älgen är skogens konung!',
    'Wie leben Eisbären in der Antarktis?',
    'Вы знали, что все белые медведи левши?'
]
model_name = 'M-CLIP/XLM-Roberta-Large-Vit-L-14'

# Load Model & Tokenizer
model = pt_multilingual_clip.MultilingualCLIP.from_pretrained(model_name)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)

embeddings = model.forward(texts, tokenizer)
print(embeddings.shape)  # torch.Size([4, 768]) for this model
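
To use these text embeddings for retrieval, they can be compared against image embeddings from the matching OpenAI vision encoder (ViT-L/14 for the model above). The snippet below is a minimal sketch rather than part of the official examples: it assumes the OpenAI clip package is installed (pip install git+https://github.com/openai/CLIP.git) and that a local image file example.jpg exists (hypothetical path).

import clip
import torch
from PIL import Image

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Load the vision encoder that matches the 768-dimensional text embeddings above
image_model, preprocess = clip.load('ViT-L/14', device=device)

image = preprocess(Image.open('example.jpg')).unsqueeze(0).to(device)
with torch.no_grad():
    image_embedding = image_model.encode_image(image).float().cpu()

# Cosine similarity between each text and the image
text_emb = embeddings.detach()
text_norm = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_norm = image_embedding / image_embedding.norm(dim=-1, keepdim=True)
print((text_norm @ image_norm.T).squeeze(-1))  # one similarity score per sentence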

Install for development

Set up a virtualenv:

python3 -m venv .env
source .env/bin/activate
pip install -e .

Pre-trained Models

Every text encoder is a transformer available on Hugging Face, with an additional linear layer on top. For more information about a specific model, click its name in the table below to see the model card. A short sketch of loading the matching vision encoder follows the table.

| Name | Model Base | Vision Model | Vision Dimensions | Pre-trained Languages | #Parameters |
|---|---|---|---|---|---|
| LABSE Vit-L/14 | LaBSE | OpenAI ViT-L/14 | 768 | 109 Languages | 110 M |
| XLM-R Large Vit-B/32 | XLM-Roberta-Large | OpenAI ViT-B/32 | 512 | 100 Languages | 344 M |
| XLM-R Large Vit-L/14 | XLM-Roberta-Large | OpenAI ViT-L/14 | 768 | 100 Languages | 344 M |
| XLM-R Large Vit-B/16+ | XLM-Roberta-Large | OpenCLIP ViT-B-16-plus-240 | 640 | 100 Languages | 344 M |
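
The table pairs each text encoder with a specific vision model and embedding dimension. The sketch below shows one way to load the matching vision encoders; the exact OpenCLIP checkpoint tag is an assumption and should be checked against the model card of the text encoder you use.

import clip       # OpenAI CLIP: pip install git+https://github.com/openai/CLIP.git
import open_clip  # OpenCLIP:    pip install open_clip_torch

# OpenAI backbones (LABSE Vit-L/14, XLM-R Large Vit-B/32, XLM-R Large Vit-L/14)
openai_vision, openai_preprocess = clip.load('ViT-L/14')

# OpenCLIP backbone for XLM-R Large Vit-B/16+ (assumed LAION-400M checkpoint tag)
oc_vision, _, oc_preprocess = open_clip.create_model_and_transforms(
    'ViT-B-16-plus-240', pretrained='laion400m_e32')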

Validation & Training Curves

The following table reports Txt2Img Recall@10 on the human-translated MS-COCO test set.

| Name | En | De | Es | Fr | Zh | It | Pl | Ko | Ru | Tr | Jp |
|---|---|---|---|---|---|---|---|---|---|---|---|
| OpenAI CLIP Vit-B/32 | 90.3 | - | - | - | - | - | - | - | - | - | - |
| OpenAI CLIP Vit-L/14 | 91.8 | - | - | - | - | - | - | - | - | - | - |
| OpenCLIP ViT-B-16+ | 94.3 | - | - | - | - | - | - | - | - | - | - |
| LABSE Vit-L/14 | 91.6 | 89.6 | 89.5 | 89.9 | 88.9 | 90.1 | 89.8 | 80.8 | 85.5 | 89.8 | 73.9 |
| XLM-R Large Vit-B/32 | 91.8 | 88.7 | 89.1 | 89.4 | 89.3 | 89.8 | 91.4 | 82.1 | 86.1 | 88.8 | 81.0 |
| XLM-R Large Vit-L/14 | 92.4 | 90.6 | 91.0 | 90.0 | 89.7 | 91.1 | 91.3 | 85.2 | 85.8 | 90.3 | 81.9 |
| XLM-R Large Vit-B/16+ | 95.0 | 93.0 | 93.6 | 93.1 | 94.0 | 93.1 | 94.4 | 89.0 | 90.0 | 93.0 | 84.2 |

The training curves for these models are available in this Weights and Biases Report; the results for other unsuccessful and ongoing experiments can be found in the Weights and Biases Project.

Legacy Usage and Models

Older versions of M-CLIP stored the linear weights separately from Hugging Face, whereas the new models have them incorporated directly in the Hugging Face repository. More information about these older models can be found in this section.

Click for more information
Download CLIP Model
$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git

Replace cudatoolkit=11.0 above with the appropriate CUDA version for your machine, or with cpuonly when installing on a machine without a GPU. For more information, please see the official CLIP repository.

Download Linear Weights
# Linear Model Weights
$ bash legacy_get-weights.sh

Inference

from multilingual_clip import multilingual_clip

print(multilingual_clip.AVAILABLE_MODELS.keys())

model = multilingual_clip.load_model('M-BERT-Distil-40')

embeddings = model(['Älgen är skogens konung!', 'Wie leben Eisbären in der Antarktis?', 'Вы знали, что все белые медведи левши?'])
print(embeddings.shape)
# Yields: torch.Size([3, 640])

For a more elaborate example comparing the textual embeddings to the CLIP image embeddings, see this Colab notebook.
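
As a rough illustration of what that notebook covers, the 640-dimensional legacy text embeddings can be compared against image embeddings from the matching RN50x4 vision encoder. This is a hedged sketch, not the notebook itself; example.jpg is a hypothetical local file and the OpenAI clip package is assumed to be installed.

import clip
import torch
from PIL import Image

# RN50x4 produces 640-dimensional image embeddings, matching M-BERT-Distil-40
vision_model, preprocess = clip.load('RN50x4')

image = preprocess(Image.open('example.jpg')).unsqueeze(0)
with torch.no_grad():
    image_embedding = vision_model.encode_image(image).float()

text_emb = embeddings.detach()
text_norm = text_emb / text_emb.norm(dim=-1, keepdim=True)
image_norm = image_embedding / image_embedding.norm(dim=-1, keepdim=True)
print((text_norm @ image_norm.T).squeeze(-1))  # one similarity score per sentence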

Legacy Pre-trained Models

Every text encoder is a transformer available on Hugging Face, with an additional linear layer on top. None of these models have been extensively tested, but for more information and qualitative test results for a specific model, click its name to see the model card.

*** Make sure to update to the most recent version of the repository when downloading a new model, and re-run the shell script to download the linear weights. ***

| Name | Model Base | Vision Model | Pre-trained Languages | Target Languages | #Parameters |
|---|---|---|---|---|---|
| Multilingual | | | | | |
| M-BERT Distil 40 | M-BERT Distil | RN50x4 | 101 Languages | 40 Languages | 66 M |
| M-BERT Base 69 | M-BERT Base | RN50x4 | 101 Languages | 68 Languages | 110 M |
| M-BERT Base ViT-B | M-BERT Base | ViT-B/32 | 101 Languages | 68 Languages | 110 M |
| Monolingual | | | | | |
| Swe-CLIP 500k | KB-BERT | RN50x4 | Swedish | Swedish | 110 M |
| Swe-CLIP 2M | KB-BERT | RN50x4 | Swedish | Swedish | 110 M |

Training a new model

This folder contains the code used for training the above models. If you wish to train your own model, you need to do the following (a minimal sketch of the teacher-learning step follows the list):

  • Prepare a set of translated sentence pairs from English to your target language(s).
  • Compute regular CLIP-Text embeddings for the English sentences.
  • Edit Training.py to load your data.
  • Train a new CLIP-Text encoder via Teacher Learning.
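
The sketch below illustrates the teacher-learning objective in PyTorch; the repository's own training code (Training.py) is written in TensorFlow, and the student backbone, pooling, output dimension, and hyperparameters here are illustrative assumptions rather than the exact training setup.

import torch
import transformers

# Student: a multilingual transformer plus a linear projection to the CLIP text dimension.
student_name = 'xlm-roberta-large'  # assumed student backbone
tokenizer = transformers.AutoTokenizer.from_pretrained(student_name)
student = transformers.AutoModel.from_pretrained(student_name)
head = torch.nn.Linear(student.config.hidden_size, 768)  # 768 = ViT-L/14 text dimension

optimizer = torch.optim.Adam(list(student.parameters()) + list(head.parameters()), lr=1e-5)
loss_fn = torch.nn.MSELoss()

def training_step(translated_sentences, teacher_embeddings):
    """Regress the student embedding of each translated sentence onto the
    pre-computed CLIP embedding of its English source sentence."""
    batch = tokenizer(translated_sentences, padding=True, truncation=True, return_tensors='pt')
    hidden = student(**batch).last_hidden_state            # [batch, tokens, hidden]
    mask = batch['attention_mask'].unsqueeze(-1)
    pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # masked mean pooling
    pred = head(pooled)                                     # [batch, 768]
    loss = loss_fn(pred, teacher_embeddings)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()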

Pre-computed CLIP Embeddings & Translation Data

This Google Drive folder contains pre-computed CLIP-Text embeddings for a large portion of the image captions of GCC + MSCOCO + VizWiz.

The Google Drive folder also contains the translation data used to train the currently available models. Good luck!

Contribution

If you have trained a CLIP text encoder specific to your language, or another model covering a language not supported here, please feel free to contact us and we will either upload your model and credit you, or simply link to your already uploaded model.

Contact

If you have questions regarding the code or anything else related to this GitHub page, please open an issue.

For other purposes, feel free to contact me directly at: Fredrik.Carlsson@ri.se

Acknowledgements

License

Distributed under the MIT License. See LICENSE for more information.

Citing

If you found this repository useful, please consider citing:

@InProceedings{carlsson-EtAl:2022:LREC,
  author    = {Carlsson, Fredrik  and  Eisen, Philipp  and  Rekathati, Faton  and  Sahlgren, Magnus},
  title     = {Cross-lingual and Multilingual CLIP},
  booktitle      = {Proceedings of the Language Resources and Evaluation Conference},
  month          = {June},
  year           = {2022},
  address        = {Marseille, France},
  publisher      = {European Language Resources Association},
  pages     = {6848--6854},
  abstract  = {The long-standing endeavor of relating the textual and the visual domain recently underwent a pivotal breakthrough, as OpenAI released CLIP. This model distinguishes how well an English text corresponds with a given image with unprecedented accuracy. Trained via a contrastive learning objective over a huge dataset of 400M of images and captions, it is a work that is not easily replicated, especially for low resource languages. Capitalizing on the modularization of the CLIP architecture, we propose to use cross-lingual teacher learning to re-train the textual encoder for various non-English languages. Our method requires no image data and relies entirely on machine translation which removes the need for data in the target language. We find that our method can efficiently train a new textual encoder with relatively low computational cost, whilst still outperforming previous baselines on multilingual image-text retrieval.},
  url       = {https://aclanthology.org/2022.lrec-1.739}
}


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mclip_anytorch-1.0.13.tar.gz (15.1 kB)

Uploaded Source

Built Distribution

mclip_anytorch-1.0.13-py3-none-any.whl (22.5 kB)

Uploaded Python 3

File details

Details for the file mclip_anytorch-1.0.13.tar.gz.

File metadata

  • Download URL: mclip_anytorch-1.0.13.tar.gz
  • Upload date:
  • Size: 15.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.3 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.12 tqdm/4.64.0 importlib-metadata/4.8.3 keyring/23.4.1 rfc3986/1.5.0 colorama/0.4.5 CPython/3.6.15

File hashes

Hashes for mclip_anytorch-1.0.13.tar.gz
Algorithm Hash digest
SHA256 233931055030829da1f425921c2a2641abe18747c340c7e61e98f4e4ee69d28b
MD5 8e7fa8b38d839b09c189e46a7c92db2b
BLAKE2b-256 d111eb55dbdeaaa172d2644f62f4e0bb83cd99dbe3a3a231aff0c58e9fa62a43

See more details on using hashes here.

File details

Details for the file mclip_anytorch-1.0.13-py3-none-any.whl.

File metadata

  • Download URL: mclip_anytorch-1.0.13-py3-none-any.whl
  • Upload date:
  • Size: 22.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.3 readme-renderer/34.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.12 tqdm/4.64.0 importlib-metadata/4.8.3 keyring/23.4.1 rfc3986/1.5.0 colorama/0.4.5 CPython/3.6.15

File hashes

Hashes for mclip_anytorch-1.0.13-py3-none-any.whl
Algorithm Hash digest
SHA256 25e600bf199753c595d468b510a3e6f6e644f22e28e589c7d3c9ef4848413137
MD5 dd437af4816edd2f3603a6a15ce49a21
BLAKE2b-256 c232e47d088846db0fcbb006f52fb1ee71aa68f1ea148491a4f9ae3470213da6

See more details on using hashes here.
