Word-level transformer-based embeddings

Project description

Transformers Embedder

A word-level Transformer layer based on PyTorch and 🤗 Transformers.

How to use

Install the library from PyPI:

pip install transformers-embedder

or from Conda:

conda install -c riccorl transformers-embedder

It offers a PyTorch layer and a tokenizer that support almost every pretrained model from the Hugging Face 🤗 Transformers library. Here is a quick example:

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

model = tre.TransformersEmbedder(
    "bert-base-cased", return_words=True, pooling_strategy="mean"
)

example = "This is a sample sentence"
inputs = tokenizer(example, return_tensors=True)
{
   'input_ids': tensor([[ 101, 1188, 1110,  170, 6876, 5650,  102]]),
   'attention_mask': tensor([[True, True, True, True, True, True, True]]),
   'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]])
   'offsets': tensor([[0, 1, 2, 3, 4, 5, 6]]),
   'sentence_length': 7  # with special tokens included
}
outputs = model(**inputs)
# outputs.word_embeddings.shape[1:-1]       # remove [CLS] and [SEP]
torch.Size([1, 5, 768])
# len(example)
5

Info

One of the annoyances of working with transformer-based models is that it is not trivial to compute word embeddings from the sub-token embeddings they output. With this API it is as easy as using 🤗 Transformers to get word-level embeddings from practically every transformer model it supports.
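
For comparison, the sketch below shows roughly what the library automates: pooling sub-token embeddings into word embeddings by hand with plain 🤗 Transformers. A fast tokenizer is assumed so that word_ids() is available, and mean pooling mirrors the sub-token aggregation described in the Model section below.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased")

sentence = "This is a sample sentence"
encoding = tokenizer(sentence, return_tensors="pt")
sub_token_embeddings = model(**encoding).last_hidden_state[0]  # (num_sub_tokens, hidden_size)

# Group sub-tokens by the word they come from and average them.
word_ids = encoding.word_ids(batch_index=0)  # None for special tokens like [CLS]/[SEP]
word_embeddings = []
for word_index in sorted(set(i for i in word_ids if i is not None)):
    mask = torch.tensor([i == word_index for i in word_ids])
    word_embeddings.append(sub_token_embeddings[mask].mean(dim=0))
word_embeddings = torch.stack(word_embeddings)  # (num_words, hidden_size)

TransformersEmbedder with return_words=True performs this kind of aggregation internally, so you only ever deal with word-level tensors.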

Model

The TransformersEmbedder offers two ways to retrieve the embeddings:

  • return_words=True: computes the mean of the embeddings of the sub-tokens of each word
  • return_words=False: returns the raw output of the transformer model without sub-token pooling

There are also multiple types of output you can get by using the pooling_strategy parameter:

  • last: returns the last hidden state of the transformer model
  • concat: returns the concatenation of the selected output_layers of the transformer model
  • sum: returns the sum of the selected output_layers of the transformer model
  • mean: returns the average of the selected output_layers of the transformer model

If you also want all the outputs from the HuggingFace model, you can set return_all=True to get them.

class TransformersEmbedder(torch.nn.Module):
    def __init__(
        self,
        model: Union[str, tr.PreTrainedModel],
        return_words: bool = True,
        pooling_strategy: str = "last",
        output_layers: Tuple[int] = (-4, -3, -2, -1),
        fine_tune: bool = True,
        return_all: bool = True,
    )
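
As a sketch of how these options combine (a hidden size of 768 for bert-base-cased is assumed, and the attribute names follow the quick example above):

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

# Concatenate the last four hidden layers instead of taking only the last one,
# keep the transformer weights frozen, and also return the raw HuggingFace outputs.
model = tre.TransformersEmbedder(
    "bert-base-cased",
    return_words=True,
    pooling_strategy="concat",
    output_layers=(-4, -3, -2, -1),
    fine_tune=False,
    return_all=True,
)

inputs = tokenizer("This is a sample sentence", return_tensors=True)
outputs = model(**inputs)
# With "concat" over four layers, the embedding size is expected to be 4 * 768 = 3072.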

Tokenizer

The Tokenizer class provides the tokenize method to preprocess the input for the TransformersEmbedder layer. You can pass raw sentences, pre-tokenized sentences, and batches of either. It preprocesses them and returns a dictionary with the inputs for the model. By passing return_tensors=True, the inputs are returned as torch.Tensor.

By default, if you pass the text (or batch of texts) as strings, it uses the HuggingFace tokenizer to tokenize them:

text = "This is a sample sentence"
tokenizer(text)

text = ["This is a sample sentence", "This is another sample sentence"]
tokenizer(text)

You can pass a pre-tokenized sentence (or a batch of sentences) by setting is_split_into_words=True:

text = ["This", "is", "a", "sample", "sentence"]
tokenizer(text, is_split_into_words=True)

text = [
    ["This", "is", "a", "sample", "sentence", "1"],
    ["This", "is", "sample", "sentence", "2"],
]
tokenizer(text, is_split_into_words=True)

Examples

First, initialize the tokenizer

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

  • You can pass a single sentence as a string:
text = "This is a sample sentence"
tokenizer(text)
{
  'input_ids': [101, 1188, 1110, 170, 6876, 5650, 102],
  'offsets': [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)],
  'attention_mask': [True, True, True, True, True, True, True],
  'token_type_ids': [0, 0, 0, 0, 0, 0, 0],
  'sentence_length': 7
}

  • A sentence pair:
text = "This is a sample sentence A"
text_pair = "This is a sample sentence B"
tokenizer(text, text_pair)
{
  'input_ids': [101, 1188, 1110, 170, 6876, 5650, 138, 102, 1188, 1110, 170, 6876, 5650, 139, 102],
  'attention_mask': [True, True, True, True, True, True, True, True, True, True, True, True, True, True, True],
  'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
  'offsets': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14],
  'sentence_length': 15
}

  • A batch of sentences or sentence pairs. With padding=True and return_tensors=True, the tokenizer returns the input ready for the model:
batch = [
    ["This", "is", "a", "sample", "sentence", "1"],
    ["This", "is", "sample", "sentence", "2"],
    ["This", "is", "a", "sample", "sentence", "3"],
    # ...
    ["This", "is", "a", "sample", "sentence", "n", "for", "batch"],
]
tokenizer(batch, padding=True, return_tensors=True)

batch_pair = [
    ["This", "is", "a", "sample", "sentence", "pair", "1"],
    ["This", "is", "sample", "sentence", "pair", "2"],
    ["This", "is", "a", "sample", "sentence", "pair", "3"],
    # ...
    ["This", "is", "a", "sample", "sentence", "pair", "n", "for", "batch"],
]
tokenizer(batch, batch_pair, padding=True, return_tensors=True)
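
The padded batch is then ready to be passed to the embedder exactly as in the quick example above; a minimal sketch:

model = tre.TransformersEmbedder("bert-base-cased", return_words=True)

inputs = tokenizer(batch, padding=True, return_tensors=True)
outputs = model(**inputs)

# One embedding per word, padded to the longest sentence in the batch:
# outputs.word_embeddings.shape == (batch_size, max_sentence_length, hidden_size)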

Custom fields

It is possible to add custom fields to the model input and tell the tokenizer how to pad them using add_padding_ops. Start by initializing the tokenizer with the model name:

import transformers_embedder as tre

tokenizer = tre.Tokenizer("bert-base-cased")

Then add the custom fields to it:

custom_fields = {
  "custom_field_1": [
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
  ]
}

Now we can add the padding logic for our custom field custom_field_1. The add_padding_ops method takes as input:

  • key: the name of the field in the tokenizer input
  • value: the value to use for padding
  • length: the length to pad to. It can be an int, or one of two string values: subword, where the element is padded to match the number of sub-tokens, and word, where the element is padded to the number of words in the batch after the sub-tokens are merged.

tokenizer.add_padding_ops("custom_field_1", 0, "word")

Finally, we can tokenize the input with the custom field:

text = [
    "This is a sample sentence",
    "This is another example sentence just make it longer, with a comma too!"
]

inputs = tokenizer(text, padding=True, return_tensors=True, additional_inputs=custom_fields)

The inputs are ready for the model, including the custom field.

>>> inputs

{
   "input_ids": tensor(
       [
           [101, 1188, 1110, 170, 6876, 5650, 102, 0, 0, 0, 0],
           [101, 1188, 1110, 1330, 1859, 5650, 1198, 1294, 1122, 2039, 102],
       ]
   ),
   "attention_mask": tensor(
       [
           [True, True, True, True, True, True, True, False, False, False, False],
           [True, True, True, True, True, True, True, True, True, True, True],
       ]
   ),
   "word_mask": tensor(
       [
           [True, True, True, True, True, True, True, False, False, False, False],
           [True, True, True, True, True, True, True, True, True, True, True],
       ]
   ),
   "token_type_ids": tensor(
       [[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]
   ),
   "offsets": tensor(
       [
           [0, 1, 2, 3, 4, 5, 6, 7, 10, 10, 10],
           [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
       ]
   ),
   "sentence_length": tensor([7, 11]),
   "custom_filed_1": tensor(
       [[0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0], [0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0]]
   ),
}
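
A sketch of how such a custom field might be used downstream, e.g. as word-level labels for a small tagging head. The head, the loss handling, and popping the field out of the inputs are illustrative assumptions, not part of the library API:

import torch

# Keep the custom field aside as labels; it is padded to the word-level length.
labels = inputs.pop("custom_field_1")

model = tre.TransformersEmbedder("bert-base-cased", return_words=True)
outputs = model(**inputs)

# A hypothetical binary tagging head over the word-level embeddings.
hidden_size = outputs.word_embeddings.size(-1)
classifier = torch.nn.Linear(hidden_size, 2)
logits = classifier(outputs.word_embeddings)  # (batch, padded_words, 2)

# In practice, padded positions (see word_mask above) should be excluded from the loss.
loss = torch.nn.functional.cross_entropy(logits.view(-1, 2), labels.view(-1).long())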

Acknowledgements

Some code in the TransformersEmbedder class is taken from the PyTorch Scatter library. The pretrained models and the core of the tokenizer are from 🤗 Transformers.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

transformers_embedder-2.0.0b2.tar.gz (15.9 kB)

Uploaded Source

Built Distribution

transformers_embedder-2.0.0b2-py3-none-any.whl (14.2 kB)

Uploaded Python 3

File details

Details for the file transformers_embedder-2.0.0b2.tar.gz.

File metadata

  • Download URL: transformers_embedder-2.0.0b2.tar.gz
  • Upload date:
  • Size: 15.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/33.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.63.0 importlib-metadata/4.11.2 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for transformers_embedder-2.0.0b2.tar.gz
  • SHA256: b6334399599e1e302f3f17be5b3424470abdb3eb2f21a9ed49414879ba6ab58e
  • MD5: 4dc2ccd8df2f173031ebe5744caa9736
  • BLAKE2b-256: 1258eddf1bd7d3ef8eb21d9eead2614aaa66a9e70c2801f36930107da2451d21

See more details on using hashes here.
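
As an illustration, a minimal Python sketch for checking the SHA256 digest of the downloaded archive against the value above (the local file path is an assumption):

import hashlib

expected = "b6334399599e1e302f3f17be5b3424470abdb3eb2f21a9ed49414879ba6ab58e"

with open("transformers_embedder-2.0.0b2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "downloaded file does not match the published hash"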

File details

Details for the file transformers_embedder-2.0.0b2-py3-none-any.whl.

File metadata

  • Download URL: transformers_embedder-2.0.0b2-py3-none-any.whl
  • Upload date:
  • Size: 14.2 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.8.2 readme-renderer/33.0 requests/2.27.1 requests-toolbelt/0.9.1 urllib3/1.26.8 tqdm/4.63.0 importlib-metadata/4.11.2 keyring/23.5.0 rfc3986/2.0.0 colorama/0.4.4 CPython/3.9.10

File hashes

Hashes for transformers_embedder-2.0.0b2-py3-none-any.whl
  • SHA256: 4ebbef0601687c4d34629b81d0f769f792ce84eed90418fe0655889d3dbb5e3e
  • MD5: 0485c4f90210a817966fb04bf7dacaff
  • BLAKE2b-256: dfac1f8c8d6143e646f3f55d24eb1b18d792f6eb5ab92ef72cad7d51c4e528b3

See more details on using hashes here.
