# dfm-sentence-transformers

Code for training sentence transformers for the Danish Foundation Models project.
## Training
Install the package from PyPI:

```bash
pip install dfm-sentence-transformers
```
You have to specify the base model and training parameters, as well as all the tasks/datasets the model should be trained on. Here is an example of a config:
```ini
[model]
name="chcaa/dfm-sentence-encoder-small-v1"
base_model="chcaa/dfm-encoder-small-v1"
device="cpu"

[training]
epochs=5
warmup_steps=100
batch_size=120

[tasks]

[tasks.bornholmsk]
@tasks="multiple_negatives_ranking"
sentence1="da_bornholm"
sentence2="da"

[tasks.bornholmsk.dataset]
@loaders="load_dataset"
path="strombergnlp/bornholmsk_parallel"
```
Then you can train a sentence transformer by using the `finetune` command:

```bash
python3 -m dfm_sentence_trf finetune training.cfg -o "model/"
```
You can push the finetuned model to the HuggingFace Hub:

```bash
python3 -m dfm_sentence_trf push_to_hub training.cfg --model_path "model/"
```
## Evaluation
You can evaluate trained models with the Scandinavian Embedding Benchmark:

```bash
pip install seb
python3 -m seb "model/" "da"
```
## Tasks
You can add an arbitrary number of tasks to the model's config. Each task must have a unique name, but the name itself is ignored during training. Datasets of tasks that share a loss function are mixed together, so the model can learn them simultaneously in mixed batches. The package comes with three built-in tasks for different objectives:
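Conceptually, mixing means pooling the examples of all tasks that share a loss function and shuffling them into joint batches. A minimal sketch of that idea (the function name and batching details are illustrative, not the package's actual code):

```python
import random

def mix_datasets(datasets, batch_size, seed=42):
    """Pool examples from several datasets that share a loss function,
    shuffle them, and cut the pool into batches. Each batch may then
    contain examples from any of the source datasets."""
    pool = [example for dataset in datasets for example in dataset]
    random.Random(seed).shuffle(pool)
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]
```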
### 1. Multiple Negatives Ranking
If you have a parallel corpus of sentences (paraphrase, translation, etc.) use this task. Batches consist of positive sentence pairs, and negative samples are constructed by taking all non-matching pairs in a batch.
Parameters:

| Param | Type | Description | Default |
|---|---|---|---|
| sentence1 | str | Name of the first sentence column in the dataset. | - |
| sentence2 | str | Name of the second sentence column in the dataset. | - |
| scale | float | Output of the similarity function is multiplied by this value. | 20.0 |
```ini
[tasks.faroese]
@tasks="multiple_negatives_ranking"
sentence1="fo"
sentence2="da"

[tasks.faroese.dataset]
@loaders="load_dataset"
path="strombergnlp/itu_faroese_danish"
```
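To illustrate the objective, here is a minimal pure-Python sketch (not the package's implementation, which runs on tensors): similarities between each sentence and every candidate in the batch are scaled, and a cross-entropy loss rewards the matching pair over all in-batch negatives.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def mnr_loss(emb1, emb2, scale=20.0):
    """Multiple negatives ranking: emb1[i] should match emb2[i];
    every other emb2[j] in the batch serves as a negative."""
    total = 0.0
    for i, u in enumerate(emb1):
        logits = [scale * cosine(u, v) for v in emb2]  # u vs. all candidates
        log_z = math.log(sum(math.exp(x) for x in logits))
        total += log_z - logits[i]  # cross-entropy with target index i
    return total / len(emb1)
```

When the pairs are aligned the loss is near zero; when they are shuffled it grows large, which is what pushes matching sentences together in embedding space.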
### 2. Cosine Similarity
Good for STS datasets. Minimizes the mean squared error between estimated and gold-standard sentence cosine similarities.
Parameters:

| Param | Type | Description | Default |
|---|---|---|---|
| sentence1 | str | Name of the first sentence column in the dataset. | - |
| sentence2 | str | Name of the second sentence column in the dataset. | - |
| similarity | str | Name of the gold standard similarity column. | - |
```ini
[tasks.sts]
@tasks="cosine_similarity"
sentence1="sent1"
sentence2="sent2"
similarity="label"

[tasks.sts.dataset]
...
```
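The objective can be sketched in a few lines of pure Python (an illustration of the loss, not the package's actual code):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cosine_similarity_loss(emb1, emb2, gold):
    """Mean squared error between the predicted cosine similarities
    of sentence pairs and the gold-standard similarity scores."""
    preds = [cosine(u, v) for u, v in zip(emb1, emb2)]
    return sum((p - g) ** 2 for p, g in zip(preds, gold)) / len(gold)
```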
### 3. Softmax
Good for NLI datasets. Uses a softmax classification loss over the concatenated embeddings and their difference. Beware that softmax tasks are never joined, due to potentially different labeling schemes.
Parameters:

| Param | Type | Description | Default |
|---|---|---|---|
| sentence1 | str | Name of the first sentence column in the dataset. | - |
| sentence2 | str | Name of the second sentence column in the dataset. | - |
| label | str | Name of the label column in the dataset. | - |
```ini
[tasks.nli]
@tasks="softmax"
sentence1="premise"
sentence2="hypothesis"
label="label"

[tasks.nli.dataset]
...
```
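The classifier input described above — the two embeddings concatenated with their elementwise absolute difference, as in the original Sentence-BERT softmax objective — can be sketched like this (an illustrative helper, not part of the package's API):

```python
def softmax_features(u, v):
    """Build the feature vector fed to the softmax classifier:
    [u; v; |u - v|], i.e. the two sentence embeddings plus their
    elementwise absolute difference."""
    return list(u) + list(v) + [abs(a - b) for a, b in zip(u, v)]
```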
## Datasets
Datasets for each task are loaded with the 🤗 `load_dataset()` function, but only the first argument (the dataset path) and a name are accepted. You can use local or remote datasets, and they can be in any of the canonical file formats (JSON, JSONL, CSV, Parquet...).
```ini
...

[tasks.local.dataset]
@loaders="load_dataset"
path="local/dataset/file.jsonl"

...

[tasks.huggingface_hub.dataset]
@loaders="load_dataset"
path="username/dataset"
```