
Handy library to work with pre-trained ELMo embeddings in TensorFlow

Project description

Simple_elmo is a Python library to work with pre-trained ELMo contextualized language models in TensorFlow.

This is a significantly updated wrapper around the original ELMo implementation. The main changes are:

  • more convenient and transparent data loading (including from compressed files);
  • code adapted to modern TensorFlow versions (including TensorFlow 2).

Installation

pip install --upgrade simple_elmo

Make sure to update the package regularly.

Usage

from simple_elmo import ElmoModel

model = ElmoModel()

Loading

First, let's load a pretrained model:

model.load(PATH_TO_ELMO)

Required arguments

PATH_TO_ELMO is either a ZIP archive downloaded from the NLPL vector repository, OR a directory containing 2 files:

  • *.hdf5, pre-trained ELMo weights in HDF5 format (simple_elmo assumes the file is named model.hdf5; if it is not found, the first existing file with the .hdf5 extension will be used);
  • options.json, description of the model architecture in JSON;

One can also provide a vocab.txt/vocab.txt.gz file in the same directory: a one-word-per-line vocabulary of words to be cached (as character id representations) before inference. Even if no vocabulary file is present, ELMo will still process all words normally. However, providing one can slightly increase inference speed on very large corpora, by reducing the number of word-to-character-id conversions.
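The vocabulary file format is simply one word per line. As a minimal sketch (the read_vocab helper below is illustrative, not part of the simple_elmo API), such a file could be read and truncated for caching like this:

```python
import gzip

def read_vocab(path, limit=100):
    """Read at most `limit` words from a one-word-per-line vocabulary file.

    Hypothetical helper: handles both plain and gzip-compressed files,
    mirroring the vocab.txt / vocab.txt.gz convention described above.
    """
    opener = gzip.open if path.endswith(".gz") else open
    words = []
    with opener(path, "rt", encoding="utf-8") as f:
        for line in f:
            word = line.strip()
            if word:
                words.append(word)
            if len(words) >= limit:
                break
    return words
```

Only the first `limit` words are cached, which is why frequent words should come first in the file.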

Optional arguments

  • max_batch_size: integer, default 32;

    the maximum number of sentences/documents in a batch during inference; your input will be automatically split into chunks of the respective size; if your computational resources allow, you might want to increase this value.

  • limit: integer, default 100;

    the number of words from the vocabulary file to actually cache (counted from the first line). Increase the default value if you are sure these words occur in your data much more often than 1 or 2 times.

  • full: boolean, default False;

    if True, will try to load the full model from TensorFlow checkpoints, together with the vocabulary. Models loaded this way can be used for language modeling.
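To make the max_batch_size behaviour concrete: the input is split into consecutive chunks, each processed separately. A rough sketch of that splitting logic (the chunk helper is illustrative, not the library's actual internal code):

```python
def chunk(sentences, max_batch_size=32):
    """Yield consecutive batches of at most max_batch_size sentences,
    mimicking how inference input is split into chunks."""
    for i in range(0, len(sentences), max_batch_size):
        yield sentences[i:i + max_batch_size]
```

A larger max_batch_size means fewer, bigger batches, which is faster if your GPU memory allows it.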

Working with models

Currently, we provide three methods for loaded models (the list will be expanded in the future):

  • model.get_elmo_vectors(SENTENCES)

  • model.get_elmo_vector_average(SENTENCES)

  • model.get_elmo_substitutes(RAW_SENTENCES)

SENTENCES is a list of input sentences (lists of words). RAW_SENTENCES is a list of input sentences as strings.

The get_elmo_vectors() method produces a tensor of contextualized word embeddings. Its shape is (number of sentences, the length of the longest sentence, ELMo dimensionality).

The get_elmo_vector_average() method produces a tensor with one vector per each input sentence, constructed by averaging individual contextualized word embeddings. Its shape is (number of sentences, ELMo dimensionality).

Both these methods can be used with the layers argument, which takes one of the three values:

  • average (default): return the average of all ELMo layers for each word;
  • top: return only the top (last) layer for each word;
  • all: return all ELMo layers for each word (an additional dimension appears in the produced tensor, equal to the number of layers in the model, usually 3).

Use these tensors for your downstream tasks.
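To illustrate the shapes involved in the layers argument, here is a NumPy sketch of the three reduction modes applied to a toy per-word tensor (this only demonstrates the shapes, it is not the library's internal code):

```python
import numpy as np

# A toy per-word representation: 3 ELMo layers, dimensionality 1024.
n_layers, dim = 3, 1024
word_layers = np.random.rand(n_layers, dim)

averaged = word_layers.mean(axis=0)  # layers="average": shape (dim,)
top = word_layers[-1]                # layers="top": shape (dim,)
all_layers = word_layers             # layers="all": shape (n_layers, dim)
```

With layers="all", the extra layer dimension is kept in the output tensor; the other two modes collapse it.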

Another argument for these methods is session. It defaults to None which means a new TensorFlow session is created automatically when the method is called. This is convenient, since one does not have to worry about initializing the computational graph. However, in some cases, you might want to re-use an existing session (for example, to call the method multiple times without the initialization overhead).

For this to work, one must do all the initialization manually before the method is called, for example:

import tensorflow as tf
import simple_elmo.elmo
from simple_elmo import ElmoModel

graph = tf.Graph()
with graph.as_default() as elmo_graph:
    elmo_model = ElmoModel()
    elmo_model.load(PATH_TO_ELMO)
...
with elmo_graph.as_default():
    tf_session = tf.compat.v1.Session(graph=elmo_graph)
    with tf_session.as_default() as sess:
        elmo_model.elmo_sentence_input = simple_elmo.elmo.weight_layers(
            "input", elmo_model.sentence_embeddings_op
        )
        sess.run(tf.compat.v1.global_variables_initializer())
...
elmo_model.get_elmo_vectors(SENTENCES, session=tf_session)
elmo_model.get_elmo_vectors(SENTENCES2, session=tf_session)
...

The get_elmo_substitutes() method currently works only with the models loaded with full=True. For each input sentence, it produces a list of lexical substitutes (LM predictions) for each word token in the sentence, produced by the forward and backward ELMo language models. The substitutes are yielded as dictionaries containing the vocabulary identifiers of the most probable LM predictions, their lexical forms and their logit scores. NB: this method is still experimental!

Example scripts

We provide three example scripts to make it easier to start using simple_elmo right away:

Inferring token embeddings

python3 get_elmo_vectors.py -i test.txt -e ~/PATH_TO_ELMO/

This script simply returns contextualized ELMo embeddings for the words in your input sentences.

Text pairs classification

python3 text_classification.py -i paraphrases_lemm.tsv.gz -e ~/PATH_TO_ELMO/

This script can be used to perform document pair classification (like in text entailment or paraphrase detection). Simple average of ELMo embeddings for all words in a document is used; then, the cosine similarity between two documents is calculated and used as a single classifier feature. Evaluated with macro F1 score and 10-fold cross-validation.
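The single classifier feature used by the script can be sketched as follows; the toy vectors here are stand-ins for real averaged ELMo document embeddings:

```python
import numpy as np

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two averaged document embeddings."""
    return float(np.dot(doc_a, doc_b) /
                 (np.linalg.norm(doc_a) * np.linalg.norm(doc_b)))

# Toy document vectors standing in for averaged ELMo embeddings:
a = np.array([1.0, 0.0, 1.0])
b = np.array([1.0, 0.0, 1.0])  # identical direction to a
c = np.array([0.0, 1.0, 0.0])  # orthogonal to a
```

The closer the similarity is to 1, the more likely the two documents are paraphrases; this scalar is then fed to the classifier.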

Example paraphrase dataset for English (adapted from MRPC):

Example paraphrase datasets for Russian (adapted from http://paraphraser.ru/):

Word sense disambiguation

python3 wsd_eval.py -i senseval3.tsv -e ~/PATH_TO_ELMO/

This script takes as an input a word sense disambiguation (WSD) dataset and a pre-trained ELMo model. It extracts token embeddings for ambiguous words and trains a simple Logistic Regression classifier to predict word senses. Averaged macro F1 score across all words in the test set is used as the evaluation measure (with 5-fold cross-validation).
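As an illustration of the overall scheme (with a simple nearest-centroid classifier standing in for the script's logistic regression, and plain vectors standing in for real ELMo token embeddings):

```python
import numpy as np

def train_centroids(embeddings, senses):
    """Compute one centroid per sense label from labelled token embeddings."""
    return {s: embeddings[senses == s].mean(axis=0) for s in np.unique(senses)}

def predict(centroids, vector):
    """Assign the sense whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda s: np.linalg.norm(vector - centroids[s]))
```

Each ambiguous word gets its own classifier, trained on the contextualized embeddings of its occurrences.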

Example WSD datasets for English (adapted from Senseval 3):

Example WSD datasets for Russian (adapted from RUSSE'18):

Frequently Asked Questions

Where can I find pre-trained ELMo models?

Several repositories are available where one can download ELMo models compatible with simple_elmo:

Can I load ELMoForManyLangs models?

Unfortunately not. These models are trained with a slightly different architecture and are therefore compatible with neither AllenNLP nor simple_elmo. Use the original ELMoForManyLangs code to work with them.

I see a lot of warnings about deprecated methods

This is normal. The simple_elmo library is based on the original ELMo implementation, which targeted TensorFlow versions that are very outdated today. We have significantly updated the code and fixed many warnings, but not all of them yet. The work continues (and will eventually lead to a complete switch to TensorFlow 2).

Meanwhile, these warnings can be safely ignored: they do not affect the resulting embeddings in any way.

Can I train my own ELMo with this library?

Currently, we provide the ELMo training code (updated and improved relative to the original implementation in the same way as this package) in a separate repository. It will be integrated into the simple_elmo package at some point.

