
Sentence Transformers: Embeddings, Retrieval, and Reranking

This framework provides an easy way to access, use, and train state-of-the-art embedding and reranker models. It can be used to compute embeddings using Sentence Transformer models (quickstart), to calculate similarity scores using Cross-Encoder (a.k.a. reranker) models (quickstart), or to generate sparse embeddings using Sparse Encoder models (quickstart). This unlocks a wide range of applications, including semantic search, semantic textual similarity, and paraphrase mining.

A wide selection of over 15,000 pre-trained Sentence Transformers models is available for immediate use on 🤗 Hugging Face, including many of the state-of-the-art models from the Massive Text Embeddings Benchmark (MTEB) leaderboard. Additionally, it is easy to train or finetune your own embedding models, reranker models, or sparse encoder models using Sentence Transformers, enabling you to create custom models for your specific use cases.

For the full documentation, see www.SBERT.net.

Installation

We recommend Python 3.10+, PyTorch 1.11.0+, and transformers v4.34.0+.

Install with pip

pip install -U sentence-transformers

Install with conda

conda install -c conda-forge sentence-transformers

Install from source

Alternatively, you can clone the latest version from the repository and install it directly from the source code:

git clone https://github.com/UKPLab/sentence-transformers
cd sentence-transformers
pip install -e .

PyTorch with CUDA

If you want to use a GPU / CUDA, you must install PyTorch with a matching CUDA version. Follow PyTorch - Get Started for details on how to install PyTorch.
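For example, you can install a CUDA-enabled PyTorch build from the PyTorch wheel index. A minimal sketch, assuming CUDA 12.1 (the cu121 tag is illustrative; pick the tag matching your CUDA version from the PyTorch site):

# Illustrative: install PyTorch built against CUDA 12.1
pip install torch --index-url https://download.pytorch.org/whl/cu121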

Getting Started

See Quickstart in our documentation.

Embedding Models

First download a pretrained embedding model, a.k.a. a Sentence Transformer model.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

Then provide some texts to the model.

sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# => (3, 384)

And that's it. We now have NumPy arrays with the embeddings, one for each text. We can use these to compute similarities.

similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.6660, 0.1046],
#         [0.6660, 1.0000, 0.1411],
#         [0.1046, 0.1411, 1.0000]])
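
The similarity method also supports a simple semantic-search pattern: embed a query and compare it against the corpus embeddings. A minimal sketch (the query string is illustrative):

# Embed an illustrative query and find the most similar sentence
query_embedding = model.encode(["What is the weather like?"])
scores = model.similarity(query_embedding, embeddings)  # tensor of shape (1, 3)
best = scores.argmax().item()
print(sentences[best])
# => likely "The weather is lovely today."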

Reranker Models

First download a pretrained reranker model, a.k.a. a Cross Encoder model.

from sentence_transformers import CrossEncoder

# 1. Load a pretrained CrossEncoder model
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")

Then provide some texts to the model.

# The texts for which to predict similarity scores
query = "How many people live in Berlin?"
passages = [
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "Berlin has a yearly total of about 135 million day visitors, making it one of the most-visited cities in the European Union.",
    "In 2013 around 600,000 Berliners were registered in one of the more than 2,300 sport and fitness clubs.",
]

# 2a. predict scores for pairs of texts
scores = model.predict([(query, passage) for passage in passages])
print(scores)
# => [8.607139 5.506266 6.352977]

And we're good to go. You can also use model.rank to avoid having to sort the scores manually:

# 2b. Rank a list of passages for a query
ranks = model.rank(query, passages, return_documents=True)

print("Query:", query)
for rank in ranks:
    print(f"- #{rank['corpus_id']} ({rank['score']:.2f}): {rank['text']}")
"""
Query: How many people live in Berlin?
- #0 (8.61): Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.
- #2 (6.35): In 2013 around 600,000 Berliners were registered in one of the more than 2,300 sport and fitness clubs.
- #1 (5.51): Berlin has a yearly total of about 135 million day visitors, making it one of the most-visited cities in the European Union.
"""

Sparse Encoder Models

First download a pretrained sparse embedding model, a.k.a. a Sparse Encoder model.

from sentence_transformers import SparseEncoder

# 1. Load a pretrained SparseEncoder model
model = SparseEncoder("naver/splade-cocondenser-ensembledistil")

# The sentences to encode
sentences = [
    "The weather is lovely today.",
    "It's so sunny outside!",
    "He drove to the stadium.",
]

# 2. Calculate sparse embeddings by calling model.encode()
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 30522] - sparse representation with vocabulary size dimensions

# 3. Calculate the embedding similarities
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[   35.629,     9.154,     0.098],
#         [    9.154,    27.478,     0.019],
#         [    0.098,     0.019,    29.553]])

# 4. Check sparsity stats
stats = SparseEncoder.sparsity(embeddings)
print(f"Sparsity: {stats['sparsity_ratio']:.2%}")
# Sparsity: 99.84%
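
Because each dimension corresponds to a token in the model's vocabulary, sparse embeddings are directly interpretable. A short sketch using the decode helper to inspect the strongest tokens (the printed values are illustrative):

# 5. Inspect which vocabulary tokens dominate the first embedding
decoded = model.decode(embeddings[0], top_k=5)
print(decoded)
# e.g. [('weather', 2.6), ('today', 2.3), ('lovely', 1.9), ...]  (illustrative)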

Pre-Trained Models

We provide a large list of pretrained models for more than 100 languages. Some models are general purpose models, while others produce embeddings for specific use cases.
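
For instance, a multilingual model embeds sentences from different languages into a shared space. A minimal sketch using one of the provided multilingual models (similarity values are model-dependent):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(["Hello, world!", "Hallo, Welt!", "Bonjour le monde !"])
print(model.similarity(embeddings, embeddings))
# Translations of each other should score highly against one another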

Training

This framework allows you to fine-tune your own sentence embedding models, so that you get task-specific sentence embeddings. You have various options to choose from in order to tailor the embeddings to your specific task (see the sketch after the list below).

Some highlights across the different types of training are:

  • Support of various transformer networks including BERT, RoBERTa, XLM-R, DistilBERT, Electra, BART, ...
  • Multi-Lingual and multi-task learning
  • Evaluation during training to find optimal model
  • 20+ loss functions for embedding models, 10+ loss functions for reranker models and 10+ loss functions for sparse embedding models, allowing you to tune models specifically for semantic search, paraphrase mining, semantic similarity comparison, clustering, triplet loss, contrastive loss, etc.
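
As an example, here is a minimal fine-tuning sketch using the SentenceTransformerTrainer API; the base model, dataset, and loss are illustrative choices:

from datasets import load_dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import MultipleNegativesRankingLoss

# Load a base model and a small slice of an example (anchor, positive) pair dataset
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_dataset = load_dataset("sentence-transformers/all-nli", "pair", split="train[:1000]")

# Multiple Negatives Ranking Loss suits (anchor, positive) pairs well
loss = MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()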

Application Examples

You can use this framework for:

  • Semantic textual similarity
  • Semantic search
  • Retrieve & re-rank
  • Clustering
  • Paraphrase mining

and many more use-cases.

For all examples, see examples/sentence_transformer/applications.

Development setup

After cloning the repo (or a fork) to your machine, in a virtual environment, run:

python -m pip install -e ".[dev]"

pre-commit install

To test your changes, run:

pytest

Citing & Authors

If you find this repository helpful, feel free to cite our publication Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks:

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

If you use one of the multilingual models, feel free to cite our publication Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation:

@inproceedings{reimers-2020-multilingual-sentence-bert,
    title = "Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2020",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/2004.09813",
}

Please have a look at Publications for the various publications that have been integrated into Sentence Transformers.

Maintainers

Maintainer: Tom Aarsen, 🤗 Hugging Face

Don't hesitate to open an issue if something is broken (and it shouldn't be) or if you have further questions.


This project was originally developed by the Ubiquitous Knowledge Processing (UKP) Lab at TU Darmstadt. We're grateful for their foundational work and continued contributions to the field.

This repository contains experimental software and is published for the sole purpose of giving additional background details on the respective publication.
