
How to encode sentences in a high-dimensional vector space, a.k.a., sentence embedding.

Project description

A Generic Sentence Embedding Library

In natural language processing, we need to encode text data. In the past, we mostly used encoders such as one-hot, term frequency, or TF-IDF (normalized term frequency). These techniques come with many challenges. In recent years, the latest advancements have given us the opportunity to encode sentences and words in more meaningful formats. The word2vec technique and the BERT language model are two important examples.
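For comparison, the classical encoders mentioned above can be sketched with scikit-learn (shown purely as an illustration; scikit-learn is not a dependency of sent2vec, and the toy corpus is invented here):

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the cat sat", "the dog sat"]

# Term frequency: raw word counts per sentence.
counts = CountVectorizer().fit_transform(corpus)

# TF-IDF: counts re-weighted so that words shared by every
# sentence (here "the" and "sat") contribute less.
tfidf = TfidfVectorizer().fit_transform(corpus)

print(counts.toarray())
print(tfidf.toarray().round(2))
```

Each sentence becomes a vector whose length equals the vocabulary size, which is exactly the sparsity problem that dense embeddings such as word2vec and BERT avoid.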

Sentence embedding is an important step in many NLP projects, from sentiment analysis to summarization. We believe a flexible sentence embedding library is needed to build prototypes quickly, which is why we initiated this project. In the early releases, you will have access to the standard encoders; we will add more curated techniques in later releases. We hope you can use this library in your exciting NLP projects.

Library

The package requires the following libraries:

  • gensim
  • numpy
  • spacy
  • transformers
  • torch

The sent2vec package is developed to help you prototype faster, which is why it depends on several other libraries.

Install

It can be installed using pip:

pip3 install sent2vec

Usage

If you want to use the BERT language model (more specifically, distilbert-base-uncased) to encode sentences for downstream applications, use the code below. You can then compute the distance between sentences using their vector representations. In the example, as expected, the distance between vectors[0] and vectors[1] is smaller than the distance between vectors[0] and vectors[2].

# cosine_distance is not provided by sent2vec; here we use SciPy's implementation
from scipy.spatial.distance import cosine as cosine_distance

from sent2vec.vectorizer import Vectorizer

sentences = [
    "This is an awesome book to learn NLP.",
    "DistilBERT is an amazing NLP model.",
    "We can interchangeably use embedding, encoding, or vectorizing.",
]
vectorizer = Vectorizer()
vectors = vectorizer.bert(sentences)

dist_1 = cosine_distance(vectors[0], vectors[1])
dist_2 = cosine_distance(vectors[0], vectors[2])

print('dist_1: {}'.format(dist_1), 'dist_2: {}'.format(dist_2))
dist_1: 0.043, dist_2: 0.192
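For clarity, cosine distance is simply one minus the cosine similarity of the two vectors; a minimal NumPy sketch of what `scipy.spatial.distance.cosine` computes:

```python
import numpy as np

def cosine_distance(u, v):
    # 1 - cos(angle between u and v); 0 means identical direction,
    # 1 means orthogonal, 2 means opposite.
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine_distance([1, 0], [1, 0]))  # identical direction -> 0.0
print(cosine_distance([1, 0], [0, 1]))  # orthogonal -> 1.0
```

Because the distance depends only on the angle between vectors, it ignores their magnitudes, which is usually what you want when comparing embeddings.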

If you want to use a word2vec approach instead, you must first split sentences into lists of words using the sent2words method. At this stage, you can customize the stop-word list by adding words to or removing them from the default list. Once you have extracted the most important words in the sentences, you can compute the sentence embeddings using the w2v method. This method computes the average of the vectors corresponding to the remaining words, as shown in the code below.

import os

from sent2vec.vectorizer import Vectorizer

sentences = [
    "Alice is in the Wonderland.",
    "Alice is not in the Wonderland.",
]
vectorizer = Vectorizer()
words = vectorizer.sent2words(sentences, remove_stop_words=['not'], add_stop_words=[])
model_path = os.path.join(os.path.abspath(os.getcwd()), 'glove-wiki-gigaword-300')
vectors = vectorizer.w2v(words, model_path=model_path)

print(words)
[['alice', 'wonderland'], ['alice', 'not', 'wonderland']]

As seen above, you can use a different word2vec model by passing its path to the w2v method. It can be a pre-trained model or a customized one.
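The averaging step that w2v performs can be sketched with NumPy, using a toy word-vector lookup with made-up 3-dimensional vectors in place of a loaded model (real models such as glove-wiki-gigaword-300 use 300 dimensions):

```python
import numpy as np

# Hypothetical word-vector table standing in for a loaded word2vec model.
toy_model = {
    "alice":      np.array([0.1, 0.2, 0.3]),
    "wonderland": np.array([0.3, 0.0, 0.1]),
}

# After stop-word removal, the first sentence reduces to these words.
words = ["alice", "wonderland"]

# The sentence embedding is the element-wise mean of the word vectors.
sentence_vector = np.mean([toy_model[w] for w in words], axis=0)
print(sentence_vector)  # -> [0.2 0.1 0.2]
```

This is why removing stop-words matters for the word2vec approach: every remaining word pulls the average, so uninformative words dilute the sentence vector.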

And, that's pretty much it!

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

sent2vec-0.1.6.tar.gz (4.5 kB)

Uploaded Source

Built Distribution

sent2vec-0.1.6-py3-none-any.whl (4.9 kB)

Uploaded Python 3

File details

Details for the file sent2vec-0.1.6.tar.gz.

File metadata

  • Download URL: sent2vec-0.1.6.tar.gz
  • Upload date:
  • Size: 4.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.8.3

File hashes

Hashes for sent2vec-0.1.6.tar.gz
Algorithm Hash digest
SHA256 b39f0c1c5728879f9475357a79ad0f98a37653352c838286dcf87e74c9af8c7f
MD5 c83c288c9d549f800a0221df2fd5ba1e
BLAKE2b-256 327d4a985c52c584ec6f5b0f625369ad4f17e36b3244f48adf75ff4077611b14

See more details on using hashes here.

File details

Details for the file sent2vec-0.1.6-py3-none-any.whl.

File metadata

  • Download URL: sent2vec-0.1.6-py3-none-any.whl
  • Upload date:
  • Size: 4.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.8.3

File hashes

Hashes for sent2vec-0.1.6-py3-none-any.whl
Algorithm Hash digest
SHA256 8c05c72cd65c45f3916afdd41e6d8b7ec66e0eb282854ca2a4afdbc854f99406
MD5 58ebc40e8b0153ffb3a7c6a085f74376
BLAKE2b-256 0c3471db2a648a5268b8c100500cb6f97c85a516132efeca6a16f54ca4d0135f

See more details on using hashes here.
