wechsel
Code for WECHSEL: Effective initialization of subword embeddings for cross-lingual transfer of monolingual language models.
Example usage
Transferring English roberta-base to Swahili:
import torch
from transformers import AutoModel, AutoTokenizer
from datasets import load_dataset
from wechsel import WECHSEL, load_embeddings

source_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

# Train a Swahili tokenizer with the same vocabulary size on the OSCAR corpus.
target_tokenizer = source_tokenizer.train_new_from_iterator(
    load_dataset("oscar", "unshuffled_deduplicated_sw", split="train")["text"],
    vocab_size=len(source_tokenizer),
)

# Set up WECHSEL with word embeddings for the source and target languages
# and a bilingual dictionary to align them.
wechsel = WECHSEL(
    load_embeddings("en"),
    load_embeddings("sw"),
    bilingual_dictionary="swahili",
)

# Compute embeddings for the target tokenizer from the source embedding matrix.
target_embeddings, info = wechsel.apply(
    source_tokenizer,
    target_tokenizer,
    model.get_input_embeddings().weight.detach().numpy(),
)

# Replace the model's input embeddings with the transferred ones.
model.get_input_embeddings().weight.data = torch.from_numpy(target_embeddings)

# use `model` and `target_tokenizer` to continue training in Swahili!
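The example ends where language-adaptive pretraining would begin. As a rough sketch (not part of the WECHSEL package), the transferred embeddings can be loaded into a masked-language-modeling head and trained on the same OSCAR corpus with the Hugging Face Trainer. It continues from the variables defined above; the output path and hyperparameters are placeholders.

from transformers import (
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Load an MLM head and insert the transferred embeddings. RoBERTa ties its
# input and output embeddings, so replacing the input embedding matrix also
# updates the LM head.
mlm_model = AutoModelForMaskedLM.from_pretrained("roberta-base")
mlm_model.get_input_embeddings().weight.data = torch.from_numpy(target_embeddings)

# Tokenize the Swahili OSCAR corpus with the new tokenizer.
dataset = load_dataset("oscar", "unshuffled_deduplicated_sw", split="train")
dataset = dataset.map(
    lambda batch: target_tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=mlm_model,
    args=TrainingArguments(
        output_dir="roberta-base-swahili",  # placeholder output path
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
    train_dataset=dataset,
    # Dynamic masking of 15% of tokens (the collator's default).
    data_collator=DataCollatorForLanguageModeling(tokenizer=target_tokenizer),
)
trainer.train()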
Download files
Source Distribution: wechsel-0.0.1.tar.gz (2.4 kB)