An easy-to-use library for extracting text representations, based on the Transformers library.

This library is built on top of the Transformers library by HuggingFace. With it, you can quickly extract text representations from Transformer models; only two lines of code are needed to initialize the required model and extract representations from it.
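In practice, those two lines look roughly like this (a minimal sketch that relies on the default settings described below; the full, runnable example appears later in this README):

```python
from simplerepresentations import RepresentationModel

# Initialize the model, then extract representations: two lines.
representation_model = RepresentationModel(model_type='bert', model_name='bert-base-uncased')
sentence_representations, token_representations = representation_model(text_a=['Hello Transformers!'])
```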
This repository is tested on Python 3.6.8 and PyTorch 1.2.0.
First you need to install PyTorch. Please refer to the PyTorch installation page for the specific install command for your platform.
When PyTorch has been installed, Simple Representations can be installed using pip as follows:
```
pip install simplerepresentations
```
Here too, you first need to install PyTorch; please refer to the PyTorch installation page for the specific install command for your platform.
When PyTorch has been installed, you can install Simple Representations from source by cloning the repository and running:
```
pip install .
```
The following example extracts the text representations from the BERT Base Uncased model for the sentences `Hello Transformers!` and `It's very simple.`:
```python
from simplerepresentations import RepresentationModel


def load_data():
    return ['Hello Transformers!', 'It\'s very simple.']


if __name__ == '__main__':
    model_type = 'bert'
    model_name = 'bert-base-uncased'

    representation_model = RepresentationModel(
        model_type=model_type,
        model_name=model_name,
        batch_size=32,
        max_seq_length=10,         # truncate sentences to be less than or equal to 10 tokens
        combination_method='cat',  # concatenate the last `last_hidden_to_use` hidden states
        last_hidden_to_use=4       # use the last 4 hidden states to build token representations
    )

    text_a = load_data()

    all_sentences_representations, all_tokens_representations = representation_model(text_a=text_a)

    print(all_sentences_representations.shape)  # (2, 768) => (number of sentences, hidden size)
    print(all_tokens_representations.shape)     # (2, 10, 3072) => (number of sentences, number of tokens, hidden size)
```
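The extracted representations can then be fed into any downstream computation. For example, here is how you might compute the cosine similarity between the two sentences (a sketch that assumes the returned representations behave like NumPy arrays, as the `.shape` prints above suggest):

```python
import numpy as np

# Cosine similarity between the two sentence representations extracted above.
# Assumes `all_sentences_representations` is array-like with shape
# (number of sentences, hidden size).
a = all_sentences_representations[0]
b = all_sentences_representations[1]
cosine_similarity = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(cosine_similarity)  # a scalar in [-1, 1]
```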
You can change the code in the `load_data` function to load your own data from any source you want (e.g. a CSV file), as in the sketch below.
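For example, a `load_data` that reads sentences from a CSV file might look like this (a sketch; the file name `sentences.csv` and the column name `text` are hypothetical):

```python
import csv

def load_data():
    # Hypothetical CSV file with a header row and a 'text' column,
    # one sentence per row.
    with open('sentences.csv', newline='', encoding='utf-8') as f:
        return [row['text'] for row in csv.DictReader(f)]
```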
The default settings for the `RepresentationModel` class are given below:

- `batch_size` (32): integer. The batch size that will be used while extracting representations.
- `max_seq_length` (128): integer. The maximum sequence length the model will support.
- `last_hidden_to_use` (1): integer. The number of last hidden states that will be used to build the representations.
- `combination_method` ('sum'): string, either 'sum' or 'cat'. The method that will be used to combine the last `last_hidden_to_use` hidden states: 'sum' adds them element-wise, 'cat' concatenates them.
- `use_cuda` (True): boolean. Whether to use CUDA or not.
- `process_count` (cpu_count() - 2 if cpu_count() > 2 else 1): integer. The number of CPU cores (processes) to use when converting examples to features. Defaults to (number of cores - 2), or 1 if the machine has 2 cores or fewer.
- `chunksize` (500): integer. The size of the chunks that the examples will be divided into when converting them to features.
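Putting several of these settings together, a CPU-only configuration that sums the last two hidden states might look like this (a sketch; the argument values are illustrative, not recommendations):

```python
from simplerepresentations import RepresentationModel

representation_model = RepresentationModel(
    model_type='bert',
    model_name='bert-base-uncased',
    batch_size=16,             # smaller batches are often friendlier on CPU
    max_seq_length=64,
    last_hidden_to_use=2,      # build token representations from the last 2 hidden states
    combination_method='sum',  # element-wise sum keeps the hidden size at 768
    use_cuda=False,            # stay on CPU even if a GPU is available
)
```

Note that with `combination_method='sum'` the token representations keep the model's hidden size (768 for BERT Base), whereas `'cat'` multiplies it by `last_hidden_to_use`, as the `(2, 10, 3072)` shape in the example above shows.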
Current Pretrained Models
You can find the complete list of the currently available pretrained models in the Transformers library documentation.
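For example, switching to another pretrained model should only require changing `model_type` and `model_name` (a sketch; it assumes `roberta` is among the model types supported by this library):

```python
from simplerepresentations import RepresentationModel

# Hypothetical switch to RoBERTa; assumes the 'roberta' model type
# is supported by this library.
representation_model = RepresentationModel(
    model_type='roberta',
    model_name='roberta-base',
)
```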
None of this would have been possible without the hard work of the HuggingFace team in developing the Transformers library.
Also, many of the ideas used in this repository were inspired by the Simple Transformers library.