How to encode sentences in a high-dimensional vector space, a.k.a., sentence embedding.

Project description

A Generic Sentence Embedding Library

In natural language processing, we need to encode text data. In the past, we mostly used encoders such as one-hot, term frequency, or TF-IDF (normalized term frequency). These techniques come with many challenges. In recent years, the latest advancements have given us the opportunity to encode sentences or words in more meaningful formats. The word2vec technique and the BERT language model are two important examples.
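As a refresher on the classical encoders mentioned above, here is a minimal term-frequency / TF-IDF computation in plain Python (illustrative only, independent of this library):

```python
import math
from collections import Counter

def tf(doc):
    # term frequency: count of each token divided by document length
    counts = Counter(doc)
    total = len(doc)
    return {t: c / total for t, c in counts.items()}

def idf(docs):
    # inverse document frequency: log(N / number of documents containing the term)
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))
    return {t: math.log(n / c) for t, c in df.items()}

docs = [
    ["alice", "reads", "a", "book"],
    ["bob", "reads", "a", "paper"],
]
idf_weights = idf(docs)
tfidf = [{t: f * idf_weights[t] for t, f in tf(doc).items()} for doc in docs]
# tokens shared by every document ("reads", "a") get weight 0,
# while distinctive tokens ("book", "paper") keep a positive weight
```

Note how such encodings carry no notion of meaning: "book" and "paper" are as unrelated as "book" and "bob", which is exactly the gap word2vec and BERT address.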

Sentence embedding is an important step in many NLP projects, from sentiment analysis to summarization. We believe a flexible sent2vec library is needed to build prototypes fast, which is why we initiated this project. The early releases give you access to the standard encoders; more curated techniques will be added in later releases. We hope you can use this library in your exciting NLP projects.

Library

The package requires the following libraries:

  • gensim
  • numpy
  • spacy
  • transformers
  • torch

This package is developed to make prototyping faster, which is why it depends on several other libraries.

Install

It can be installed using pip:

pip3 install sent2vec

Usage

If you want to use the BERT language model to compute sentence embeddings, use the code below.

from sent2vec.vectorizer import Vectorizer

sentences = [
    "This is an awesome book to learn NLP.",
    "DistilBERT is an amazing NLP model.",
    "We can interchangeably use embedding, encoding, or vectorizing.",
]
vectorizer = Vectorizer()
vectors = vectorizer.bert(sentences)

Having the corresponding vectors, you can compute distances among them, for example with SciPy's cosine distance. Here, as expected, the distance between vectors[0] and vectors[1] is less than the distance between vectors[0] and vectors[2].

from scipy.spatial.distance import cosine as cosine_distance

dist_1 = cosine_distance(vectors[0], vectors[1])
dist_2 = cosine_distance(vectors[0], vectors[2])

print('dist_1: {}'.format(dist_1), 'dist_2: {}'.format(dist_2))
dist_1: 0.043, dist_2: 0.192

If you want to use a word2vec approach instead, you must first split the sentences into lists of words using the sent2words method. At this stage, you can customize the list of stop words by adding words to or removing them from the default list. Once you have extracted the most important words in the sentences, you can compute the sentence embeddings using the w2v method, which averages the vectors of the remaining words, as shown in the code below.

sentences = [
    "Alice is in the Wonderland.",
    "Alice is not in the Wonderland.",
]
vectorizer = Vectorizer()
words = vectorizer.sent2words(sentences, remove_stop_words=['not'], add_stop_words=[])
vectors = vectorizer.w2v(words)
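The averaging described above can be sketched in plain Python with toy vectors (illustrative only; the word vectors and the helper below are made up for this sketch, not the library's actual data or API):

```python
# toy 3-dimensional word vectors; a real setup would load pretrained
# word2vec or GloVe embeddings instead of these hand-picked values
word_vectors = {
    "alice": [0.9, 0.1, 0.0],
    "wonderland": [0.8, 0.2, 0.1],
    "not": [-0.5, 0.7, 0.3],
}

def sentence_vector(words):
    # keep only words that have an embedding, then average component-wise
    vecs = [word_vectors[w] for w in words if w in word_vectors]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

v1 = sentence_vector(["alice", "wonderland"])
v2 = sentence_vector(["alice", "not", "wonderland"])
```

This also illustrates why the stop-word list matters: if "not" is treated as a stop word and dropped, the two sentences above collapse to the same embedding despite having opposite meanings.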

And, that's pretty much it!

Project details


Download files

Download the file for your platform.

Source Distribution

sent2vec-0.1.2.tar.gz (4.2 kB)

Uploaded Source

Built Distribution


sent2vec-0.1.2-py3-none-any.whl (4.6 kB)

Uploaded Python 3

File details

Details for the file sent2vec-0.1.2.tar.gz.

File metadata

  • Download URL: sent2vec-0.1.2.tar.gz
  • Upload date:
  • Size: 4.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.8.3

File hashes

Hashes for sent2vec-0.1.2.tar.gz
Algorithm Hash digest
SHA256 815bec03907a42ca2886631895a7d93131163e8b579997cb6855d1a87339a830
MD5 e32030b1b357194bf1197ec8742d18cc
BLAKE2b-256 fa72e4685dedbd9677cc8e35e7d93b89e23b5b1df2169733f54e3f0ca5f49bbf


File details

Details for the file sent2vec-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: sent2vec-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 4.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.5.0.1 requests/2.24.0 setuptools/49.2.0.post20200714 requests-toolbelt/0.9.1 tqdm/4.41.1 CPython/3.8.3

File hashes

Hashes for sent2vec-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 f3306a53b51743f96b4c0c8992fb01e728460caed4701d4651531ba7c76158e7
MD5 e3f67f7ee0f1dcd3f1469d4d82fbcfa2
BLAKE2b-256 20ede7acc69e4d6764f497c6a7735bc3c640993d261a2e92417413a1c6e3c93f

