Convert LaBSE model from TensorFlow to PyTorch.
Project description
LaBSE
Project
This project provides code to convert the LaBSE model from TensorFlow to PyTorch.
Model description
Language-agnostic BERT Sentence Encoder (LaBSE) is a BERT-based model trained to produce sentence embeddings for 109 languages. The pre-training process combines masked language modeling with translation language modeling. The model is useful for obtaining multilingual sentence embeddings and for bi-text retrieval.
- Model: HuggingFace's model hub.
- Paper: arXiv.
- Original model: TensorFlow Hub.
- Blog post: Google AI Blog.
Usage
Using the model:
import torch
from transformers import BertModel, BertTokenizerFast
tokenizer = BertTokenizerFast.from_pretrained("setu4993/LaBSE")
model = BertModel.from_pretrained("setu4993/LaBSE")
model = model.eval()
english_sentences = [
    "dog",
    "Puppies are nice.",
    "I enjoy taking long walks along the beach with my dog.",
]
english_inputs = tokenizer(english_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
    english_outputs = model(**english_inputs)
To get the sentence embeddings, use the pooler output:
english_embeddings = english_outputs.pooler_output
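As a quick sanity check (a minimal sketch; the 768-dimensional pooled output assumes the default LaBSE configuration), each sentence should map to a single fixed-size vector:
# One row per input sentence; LaBSE's pooled output is 768-dimensional.
print(english_embeddings.shape)  # expected: torch.Size([3, 768])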
The same approach works for other languages:
italian_sentences = [
    "cane",
    "I cuccioli sono carini.",
    "Mi piace fare lunghe passeggiate lungo la spiaggia con il mio cane.",
]
japanese_sentences = ["犬", "子犬はいいです", "私は犬と一緒にビーチを散歩するのが好きです"]
italian_inputs = tokenizer(italian_sentences, return_tensors="pt", padding=True)
japanese_inputs = tokenizer(japanese_sentences, return_tensors="pt", padding=True)
with torch.no_grad():
    italian_outputs = model(**italian_inputs)
    japanese_outputs = model(**japanese_inputs)
italian_embeddings = italian_outputs.pooler_output
japanese_embeddings = japanese_outputs.pooler_output
To compare sentences, L2-normalize the embeddings before computing their similarity:
import torch.nn.functional as F

def similarity(embeddings_1, embeddings_2):
    # L2-normalize each embedding, then compute pairwise cosine similarity.
    normalized_embeddings_1 = F.normalize(embeddings_1, p=2)
    normalized_embeddings_2 = F.normalize(embeddings_2, p=2)
    return torch.matmul(
        normalized_embeddings_1, normalized_embeddings_2.transpose(0, 1)
    )
print(similarity(english_embeddings, italian_embeddings))
print(similarity(english_embeddings, japanese_embeddings))
print(similarity(italian_embeddings, japanese_embeddings))
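Since the model is described above as useful for bi-text retrieval, here is a minimal sketch of matching each English sentence to its closest Italian counterpart using the similarity matrix; the argmax-based matching is an illustrative assumption, not part of the original model card, and it reuses the variables defined above:
# For each English sentence, pick the Italian sentence with the highest
# cosine similarity; for these parallel lists the matches should line up.
best_matches = similarity(english_embeddings, italian_embeddings).argmax(dim=1)
for i, j in enumerate(best_matches.tolist()):
    print(english_sentences[i], "->", italian_sentences[j])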
Details
Details about data, training, evaluation and performance metrics are available in the original paper.
BibTeX entry and citation info
@misc{feng2020languageagnostic,
    title={Language-agnostic BERT Sentence Embedding},
    author={Fangxiaoyu Feng and Yinfei Yang and Daniel Cer and Naveen Arivazhagan and Wei Wang},
    year={2020},
    eprint={2007.01852},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
License
This repository and the conversion code are licensed under the MIT license, while the model itself is distributed under the Apache-2.0 license.
Project details
Hashes for convert-labse-tf-pt-1.0.1.tar.gz
Algorithm | Hash digest
---|---
SHA256 | d757efee356cc58b7ba08c569306e9360ff120dd9d8bad7401da44f179b996c2
MD5 | 20943d23866a6280eed62102f3d0c28e
BLAKE2b-256 | c1b5dfc731cc183c303f8ad4c350ec387f130d763c48ab8c7090404b234314db
Hashes for convert_labse_tf_pt-1.0.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | 812d2b7cae8b45a18b8cc2fc3ecde405661fb46dc30527525c799f1dcbf47c0e
MD5 | a88511ce84c8fdb9ee23717cc5076486
BLAKE2b-256 | f51fbad6e7b4e0882578f684baf3560d97976684ee30065d37714034b4ae60d4