Convert LaBSE model from TensorFlow to PyTorch.
Project description
LaBSE
Project
This project implements the conversion of LaBSE from TensorFlow to PyTorch.
Model description
Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained for sentence embedding for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for getting multilingual sentence embeddings and for bi-text retrieval.
- Model: HuggingFace's model hub (https://huggingface.co/setu4993/LaBSE).
- Paper: arXiv (https://arxiv.org/abs/2007.01852).
- Original model: TensorFlow Hub.
- Blog post: Google AI Blog.
Usage
Using the model:
```python
import torch
from transformers import BertModel, BertTokenizerFast

# Load the converted LaBSE weights from the Hugging Face model hub.
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE")
model = BertModel.from_pretrained("setu4993/LaBSE")
model = model.eval()  # inference mode: disables dropout

english_sentences = [
    "dog",
    "Puppies are nice.",
    "I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)

with torch.no_grad():
    english_outputs = model(**english_inputs)
```
To get the sentence embeddings, use the pooler output:
```python
english_embeddings = english_outputs.pooler_output
```
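As a quick sanity check, the pooled output should contain one fixed-size vector per input sentence; a sketch assuming the variables above (LaBSE uses a BERT-base-sized encoder, so each embedding should have 768 dimensions):

```python
# One fixed-size vector per input sentence.
print(english_embeddings.shape)  # expected: torch.Size([3, 768])
```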
Embeddings for other languages are computed the same way:
```python
italian_sentences = [
    "cane",
    "I cuccioli sono carini.",
    "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]

italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)

with torch.no_grad():
    italian_outputs = model(**italian_inputs)
    japanese_outputs = model(**japanese_inputs)

italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
```
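If a GPU is available, inference can be moved to it with standard PyTorch device handling; this is a minimal sketch using the model and tokenizer loaded above, not anything specific to this project:

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# BatchEncoding supports .to(device), so inputs can be moved in one call.
inputs = tokenizer(english_sentences, return_tensors="pt", padding=True).to(device)
with torch.no_grad():
    embeddings = model(**inputs).pooler_output  # tensor lives on `device`
```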
For similarity between sentences, L2-normalize the embeddings before computing the similarity; the dot product of L2-normalized vectors is their cosine similarity:
```python
import torch.nn.functional as F


def similarity(embeddings_1, embeddings_2):
    # L2-normalize each embedding so the matrix product below
    # yields pairwise cosine similarities.
    normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
    normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
    return torch.matmul(
        normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
    )
```
```python
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
```
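Each call prints a 3×3 matrix of pairwise cosine similarities. Because the three sentence lists are mutual translations in the same order, the diagonal entries (matched translation pairs) should clearly dominate their rows; a small sketch to verify that, assuming the variables defined above:

```python
sim = similarity(english_embeddings, italian_embeddings)
print(torch.diag(sim))    # similarities of matched translation pairs
print(sim.argmax(dim=1))  # expected: tensor([0, 1, 2])
```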
Details
Details about data, training, evaluation and performance metrics are available in the original paper.
BibTeX entry and citation info
```bibtex
@misc{feng2020languageagnostic,
    title={Language-agnostic BERT Sentence Embedding},
    author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
    year={2020},
    eprint={2007.01852},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```
License
This repository and the conversion code are licensed under the MIT license, but the model is distributed under an Apache-2.0 license.
Download files
Source Distribution

Hashes for convert-labse-tf-pt-1.1.0.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 6c80d2b0bf83e4cbe1229017101483713fd1cef8947683728788a2efbe971fe6
MD5 | dd702e5001badefa08032b76206c0405
BLAKE2b-256 | 57320a1be26eb3a11b1e1adf8dd78cffc637b750c56557c46ecd24e6ea6c8d8a

Built Distribution

Hashes for convert_labse_tf_pt-1.1.0-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 638e071c4cdb78c87713af838d5f96c627af949e0aac37831dc898672524f7d0
MD5 | 98f4da66a0efd75104bcea194ff591de
BLAKE2b-256 | 3aaed4e23450980bb4b9aab444c076cae03edbb8fc7a6053320120ce40fe28a5