AnglE📐: Angle-optimized Text Embeddings
It is Angle 📐, not Angel 👼.
🔥 A New SOTA for Semantic Textual Similarity!
🤗 Pretrained Models
| 🤗 HF | Backbone | LLM | Language | Use Prompt | Datasets | Pooling Strategy | Avg. Score |
|---|---|---|---|---|---|---|---|
| SeanLee97/angle-llama-7b-nli-v2 | NousResearch/Llama-2-7b-hf | Y | EN | Y | multi_nli + snli | last token | 85.96 |
| SeanLee97/angle-llama-7b-nli-20231027 | NousResearch/Llama-2-7b-hf | Y | EN | Y | multi_nli + snli | last token | 85.90 |
| SeanLee97/angle-bert-base-uncased-nli-en-v1 | bert-base-uncased | N | EN | N | multi_nli + snli | cls_avg | 82.37 |
💬 The BERT model above was trained using BERT's hyperparameters. We are currently searching for better hyperparameters for AnglE-LLaMA, and we plan to release more advanced pre-trained models that further improve performance. Stay tuned 😉
📝 Training Details:
1) SeanLee97/angle-llama-7b-nli-20231027
We fine-tuned AnglE-LLaMA on 4 × RTX 3090 Ti (24 GB) GPUs. The training script is as follows:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --nproc_per_node=4 --master_port=1234 train_angle.py \
--task NLI-STS --save_dir ckpts/NLI-STS-angle-llama-7b \
--w2 35 --learning_rate 2e-4 --maxlen 45 \
--lora_r 32 --lora_alpha 32 --lora_dropout 0.1 \
--save_steps 200 --batch_size 160 --seed 42 --do_eval 0 --load_kbit 4 --gradient_accumulation_steps 4 --epochs 1
```
The evaluation script is as follows:
```shell
CUDA_VISIBLE_DEVICES=0,1 python eval.py \
--load_kbit 16 \
--model_name_or_path NousResearch/Llama-2-7b-hf \
--lora_weight SeanLee97/angle-llama-7b-nli-20231027
```
Results
English STS Results
| Model | STS12 | STS13 | STS14 | STS15 | STS16 | STSBenchmark | SICKRelatedness | Avg. |
|---|---|---|---|---|---|---|---|---|
| SeanLee97/angle-llama-7b-nli-20231027 | 78.68 | 90.58 | 85.49 | 89.56 | 86.91 | 88.92 | 81.18 | 85.90 |
| SeanLee97/angle-llama-7b-nli-v2 | 79.00 | 90.56 | 85.79 | 89.43 | 87.00 | 88.97 | 80.94 | 85.96 |
| SeanLee97/angle-bert-base-uncased-nli-en-v1 | 75.09 | 85.56 | 80.66 | 86.44 | 82.47 | 85.16 | 81.23 | 82.37 |
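As a quick sanity check, the Avg. column is the arithmetic mean of the seven STS task scores, e.g. for SeanLee97/angle-llama-7b-nli-v2:

```python
# Avg. score = mean of the seven STS task scores for angle-llama-7b-nli-v2
scores = [79.00, 90.56, 85.79, 89.43, 87.00, 88.97, 80.94]
print(round(sum(scores) / len(scores), 2))  # → 85.96
```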
Usage
AnglE supports two APIs: the `transformers` API and the `AnglE` API. To use the `AnglE` API, first install the `angle-emb` package:
```shell
python -m pip install -U angle-emb
```
Angle-LLaMA
- AnglE
```python
from angle_emb import AnglE

# Load the LLaMA-2 backbone and apply the AnglE LoRA weights
angle = AnglE.from_pretrained('NousResearch/Llama-2-7b-hf', pretrained_lora_path='SeanLee97/angle-llama-7b-nli-v2')
angle.set_prompt()
print('prompt:', angle.prompt)
# Encode a single text
vec = angle.encode({'text': 'hello world'}, to_numpy=True)
print(vec)
# Encode a batch of texts
vecs = angle.encode([{'text': 'hello world1'}, {'text': 'hello world2'}], to_numpy=True)
print(vecs)
```
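The embeddings returned by `encode` can then be compared with cosine similarity. A minimal sketch using NumPy (the `cosine_similarity` helper is ours, not part of `angle_emb`; the dummy vectors stand in for the real embeddings so the sketch runs stand-alone):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two 1-D embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# With AnglE you would pass vecs[0] and vecs[1] from angle.encode(...)
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([1.0, 1.0, 0.0])
print(round(cosine_similarity(v1, v2), 6))  # → 0.5
```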
- transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel, PeftConfig

peft_model_id = 'SeanLee97/angle-llama-7b-nli-v2'
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the base model, then apply the LoRA adapter
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path).bfloat16().cuda()
model = PeftModel.from_pretrained(model, peft_model_id).cuda()

def decorate_text(text: str):
    return f'Summarize sentence "{text}" in one word:"'

inputs = 'hello world!'
tok = tokenizer([decorate_text(inputs)], return_tensors='pt')
for k, v in tok.items():
    tok[k] = v.cuda()
# Take the last-layer hidden state of the last token as the embedding
vec = model(output_hidden_states=True, **tok).hidden_states[-1][:, -1].float().detach().cpu().numpy()
print(vec)
```
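The `[:, -1]` indexing above implements the "last token" pooling strategy from the model table: from hidden states of shape (batch, seq_len, hidden_dim), the final position of each sequence is kept as the sentence embedding. A stand-alone NumPy sketch of the same indexing:

```python
import numpy as np

# Dummy hidden states: batch=2, seq_len=3, hidden_dim=4
hidden = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
vec = hidden[:, -1]  # last token of each sequence -> shape (2, 4)
print(vec.shape)  # → (2, 4)
```

Note this assumes the last position is a real token; with right-padded batches you would instead index the last non-padding position using the attention mask.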
Angle-BERT
- AnglE
```python
from angle_emb import AnglE

angle = AnglE.from_pretrained('SeanLee97/angle-bert-base-uncased-nli-en-v1', pooling_strategy='cls_avg').cuda()
# Encode a single text
vec = angle.encode('hello world', to_numpy=True)
print(vec)
# Encode a batch of texts
vecs = angle.encode(['hello world1', 'hello world2'], to_numpy=True)
print(vecs)
```
- transformers
```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = 'SeanLee97/angle-bert-base-uncased-nli-en-v1'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id).cuda()

inputs = 'hello world!'
tok = tokenizer([inputs], return_tensors='pt')
for k, v in tok.items():
    tok[k] = v.cuda()
hidden_state = model(**tok).last_hidden_state
# cls_avg pooling: average of the [CLS] vector and the mean over all tokens
vec = (hidden_state[:, 0] + torch.mean(hidden_state, dim=1)) / 2.0
print(vec)
```
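The last two lines above implement the `cls_avg` pooling strategy listed in the model table: the sentence embedding is the average of the `[CLS]` vector and the mean over all token positions. A stand-alone NumPy sketch with dummy hidden states:

```python
import numpy as np

# Dummy hidden states: batch=1, seq_len=3, hidden_dim=2
hidden_state = np.array([[[2.0, 4.0], [0.0, 0.0], [4.0, 2.0]]])
cls_vec = hidden_state[:, 0]          # [CLS] token vector: [2, 4]
mean_vec = hidden_state.mean(axis=1)  # mean over all positions: [2, 2]
vec = (cls_vec + mean_vec) / 2.0      # cls_avg pooling
print(vec.tolist())  # → [[2.0, 3.0]]
```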
Train Custom AnglE Model
1. Train NLI
- Prepare your GPU environment
- Install python dependencies:
```shell
python -m pip install -r requirements.txt
```
- Download multi_nli + snli:
```shell
$ cd data
$ sh download_data.sh
```
- Download STS datasets:
```shell
$ cd SentEval/data/downstream
$ bash download_dataset.sh
```
2. Train w/ train_angle.py
The training interface is still messy; we are working on improving it. For now, you can modify train_angle.py to train your own models.
3. Custom Train
Coming soon!
Citation
You are welcome to use our code and pre-trained models. If you do, please support us by citing our work as follows:
```
@article{li2023angle,
  title={AnglE-Optimized Text Embeddings},
  author={Li, Xianming and Li, Jing},
  journal={arXiv preprint arXiv:2309.12871},
  year={2023}
}
```
When using our pre-trained LLM-based models with the `Summarize sentence "xxx" in one word:`-style prompt, it is recommended to cite the following work in addition to the citation above:
```
@article{jiang2023scaling,
  title={Scaling Sentence Embeddings with Large Language Models},
  author={Jiang, Ting and Huang, Shaohan and Luan, Zhongzhi and Wang, Deqing and Zhuang, Fuzhen},
  journal={arXiv preprint arXiv:2307.16645},
  year={2023}
}
```