Sentence Similarity

Package to calculate the similarity score between two sentences

Examples

Using Transformers

from sentence_similarity import sentence_similarity
sentence_a = "paris is a beautiful city"
sentence_b = "paris is a gorgeous city"

Supported Models

You can access some of the official models through the sentence_similarity class. You can also pass any Hugging Face model name, such as bert-base-uncased or distilbert-base-uncased, when instantiating sentence_similarity.

See all the available models at huggingface.co/models.

model = sentence_similarity(model_name='distilbert-base-uncased', embedding_type='cls_token_embedding')

Because BERT is bidirectional, the [CLS] token is encoded with representative information from all tokens through the multi-layer encoding procedure, and its representation differs from sentence to sentence. Set embedding_type to cls_token_embedding to compute the similarity score between two sentences based on the [CLS] token.

Paper: https://arxiv.org/pdf/1810.04805.pdf
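As an illustration of the idea (toy numbers, not this package's internals): the [CLS] vector is simply the first row of the model's last hidden state, and two such vectors can be compared with cosine similarity.

```python
import numpy as np

# Toy "last hidden state" matrices for two sentences (seq_len x hidden_dim).
# In BERT-style models the [CLS] token sits at position 0.
hidden_a = np.array([[0.2, 0.5, 0.1], [1.0, 0.0, 0.3]])
hidden_b = np.array([[0.3, 0.4, 0.2], [0.9, 0.1, 0.2]])

cls_a = hidden_a[0]  # [CLS] embedding of sentence A
cls_b = hidden_b[0]  # [CLS] embedding of sentence B

# Cosine similarity between the two [CLS] embeddings.
cosine = np.dot(cls_a, cls_b) / (np.linalg.norm(cls_a) * np.linalg.norm(cls_b))
```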

score = model.get_score(sentence_a, sentence_b, metric="cosine")
print(score)

Available metrics are euclidean, manhattan, minkowski, and cosine.
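The four metrics follow their standard definitions. A minimal sketch of how they can be computed between two embedding vectors (illustrative only; distance_scores is a hypothetical helper, not part of this package):

```python
import numpy as np

def distance_scores(a, b, p=3):
    # Standard definitions of the four metrics, assumed to match
    # the names the package accepts for `metric`.
    diff = np.abs(a - b)
    return {
        "euclidean": float(np.sqrt(np.sum(diff ** 2))),
        "manhattan": float(np.sum(diff)),
        "minkowski": float(np.sum(diff ** p) ** (1.0 / p)),
        "cosine": float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))),
    }

a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 1.0, 0.0])
scores = distance_scores(a, b)
```

Note that cosine is a similarity (higher means more alike), while the other three are distances (lower means more alike).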

Using Sentence Transformers

from sentence_similarity import sentence_similarity
sentence_a = "paris is a beautiful city"
sentence_b = "paris is a gorgeous city"

Supported Models

You can access all the pretrained models of Sentence-Transformers.

See all the available models at sbert/models.

model = sentence_similarity(model_name='distilbert-base-uncased', embedding_type='sentence_embedding')

Sentence-BERT (SBERT) is a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. Set embedding_type to sentence_embedding (the default) to compute the similarity score between two sentences based on SBERT.

Paper: https://arxiv.org/pdf/1908.10084.pdf
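SBERT-style models typically pool the token embeddings (commonly mean pooling over the attention mask) to obtain a fixed-size sentence vector. A toy sketch of that pooling step (mean_pool is a hypothetical helper, not this package's code):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    # token_embeddings: (seq_len, dim); attention_mask: (seq_len,) of 0/1.
    # Padding positions (mask == 0) are excluded from the average.
    mask = attention_mask[:, None].astype(float)
    return (token_embeddings * mask).sum(axis=0) / mask.sum()

# Toy token embeddings for a 3-position sequence (last position is padding).
tokens = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]])
mask = np.array([1, 1, 0])
sentence_embedding = mean_pool(tokens, mask)
```

Masking matters: without it, the padding row would dominate the average.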

score = model.get_score(sentence_a, sentence_b, metric="cosine")
print(score)

Available metrics are euclidean, manhattan, minkowski, and cosine.
