YouTokenToMe
YouTokenToMe is an unsupervised text tokenizer focused on computational efficiency. It currently implements fast Byte Pair Encoding (BPE) [Sennrich et al.]. Our implementation is much faster in training and tokenization than both fastBPE and SentencePiece. In some test cases, it is 90 times faster. Check out our benchmark results.
Key advantages:
- Multithreading for training and tokenization
- The algorithm has O(N) complexity, where N is the length of the training data
- Highly efficient implementation in C++
- Python wrapper and command-line interface
As in the algorithm from the original paper, our implementation does not consider tokens that cross word boundaries. Just as in SentencePiece, all space symbols are replaced by the meta symbol "▁" (U+2581). This allows sequences of tokens to be converted back to text and word boundaries to be restored.
For example, the phrase Blazingly fast tokenization! can be tokenized into
['▁Bl', 'az', 'ingly', '▁fast', '▁token', 'ization', '!']
Installation
pip install youtokentome
Python interface
Example
Let's start with a self-contained example.
import random
import youtokentome as yttm
train_data_path = "train_data.txt"
model_path = "example.model"
# Generating random file with training data
# 10000 lines with 100 characters in each line
n_lines = 10000
n_characters = 100
with open(train_data_path, "w") as fout:
    for _ in range(n_lines):
        print("".join([random.choice("abcd ") for _ in range(n_characters)]), file=fout)
# Generating random text
test_text = "".join([random.choice("abcde ") for _ in range(100)])
# Training model
yttm.BPE.train(data=train_data_path, vocab_size=5000, model=model_path)
# Loading model
bpe = yttm.BPE(model=model_path)
# Two types of tokenization
print(bpe.encode([test_text], output_type=yttm.OutputType.ID))
print(bpe.encode([test_text], output_type=yttm.OutputType.SUBWORD))
Training model
youtokentome.BPE.train(data, model, vocab_size, coverage, n_threads=-1, pad_id=0, unk_id=1, bos_id=2, eos_id=3)
Trains a BPE model and saves it to a file.
Args:
- data: string, path to file with training data
- model: string, path to where the trained model will be saved
- vocab_size: int, number of tokens in the final vocabulary
- coverage: float, fraction of characters covered by the model. Must be in the range [0, 1]. A good value to use is about 0.9999.
- n_threads: int, number of parallel threads used to run. If equal to -1, then the minimum of the number of available threads and 8 will be used (see benchmark).
- pad_id: int, reserved id for padding
- unk_id: int, reserved id for unknown symbols
- bos_id: int, reserved id for "begin of sentence" token
- eos_id: int, reserved id for "end of sentence" token
Returns: Class youtokentome.BPE with the loaded model.
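For illustration, here is a hedged sketch of a training call with the documented arguments spelled out (the file paths are placeholders):

import youtokentome as yttm

# Train a BPE model on a plain-text file with training data
yttm.BPE.train(
    data="train_data.txt",    # placeholder path to the training data
    model="example.model",    # placeholder path where the model will be saved
    vocab_size=5000,
    coverage=0.9999,          # cover 99.99% of the characters in the training data
    n_threads=-1,             # use min(number of available threads, 8)
    pad_id=0, unk_id=1, bos_id=2, eos_id=3,
)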
Model loading
youtokentome.BPE(model, n_threads=-1)
Class constructor. Loads the trained model.
Args:
- model: string, path to the trained model
- n_threads: int, number of parallel threads used to run. If equal to -1, then the maximum number of available threads will be used.
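A minimal loading sketch, reusing the model path from the example above:

import youtokentome as yttm

bpe = yttm.BPE(model="example.model", n_threads=4)  # n_threads=-1 uses all available threads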
Methods
Class youtokentome.BPE has the following methods:
encode
encode(self, sentences, output_type=yttm.OutputType.ID, bos=False, eos=False, reverse=False)
Args:
- sentences: list of strings, sentences for tokenization
- output_type: enum, sentences can be tokenized to ids or subwords. Use OutputType.ID for ids and OutputType.SUBWORD for subwords.
- bos: bool, if True then the "begin of sentence" token will be added
- eos: bool, if True then the "end of sentence" token will be added
- reverse: bool, if True the output sequence of tokens will be reversed
Returns: If output_type is equal to youtokentome.OutputType.ID or youtokentome.OutputType.SUBWORD, then a list of lists of integers or a list of lists of strings will be returned, respectively.
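For example, assuming bpe and test_text are defined as in the example above, both output types and the extra flags can be combined as follows (the exact tokens depend on the trained model):

subwords = bpe.encode([test_text], output_type=yttm.OutputType.SUBWORD)
ids = bpe.encode([test_text], output_type=yttm.OutputType.ID, bos=True, eos=True)
print(subwords)  # one list of subword strings per input sentence
print(ids)       # one list of ids per sentence, with bos_id first and eos_id last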
vocab
vocab(self)
Returns: A list of vocab_size strings. The i-th string in the list corresponds to the i-th subword.
vocab_size
vocab_size(self)
Returns: int. Size of vocabulary.
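A quick sketch for inspecting the learned vocabulary (again assuming bpe is a loaded model):

print(bpe.vocab_size())   # e.g. 5000
print(bpe.vocab()[:10])   # first ten subwords; the i-th entry corresponds to id i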
subword_to_id
subword_to_id(self, subword)
Args:
- subword: string
Returns: Integer from the range [0, vocab_size-1]. Id of the subword or, if there is no such subword in the vocabulary, unk_id will be returned.
id_to_subword
id_to_subword(self, id)
Args:
- id: int, must be in the range [0, vocab_size-1]
Returns: string. Subword from vocabulary by id.
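For example, assuming bpe is a loaded model, the two methods invert each other:

some_subword = bpe.vocab()[10]                       # any subword from the vocabulary
token_id = bpe.subword_to_id(some_subword)           # unknown subwords would map to unk_id
print(bpe.id_to_subword(token_id) == some_subword)   # True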
decode
decode(self, ids)
Converts each id to its subword and concatenates them with the space symbol.
Args:
- ids: list of lists of integers. All integers must be in the range [0, vocab_size-1]
Returns: List of strings.
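A round-trip sketch (assuming bpe and test_text from the example above; ids produced by encode can be fed straight back to decode):

ids = bpe.encode([test_text], output_type=yttm.OutputType.ID)
print(bpe.decode(ids))   # a list with one string reconstructed from the ids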
Command line interface
Example
$ yttm bpe --data TRAINING_DATA_FILE --model OUTPUT_MODEL_FILE --vocab_size 2000
$ yttm encode --model OUTPUT_MODEL_FILE --output_type subword < TEST_DATA_FILE > ENCODED_DATA
Supported commands
YouTokenToMe supports the following commands:
$ yttm --help
Usage: yttm [OPTIONS] COMMAND [ARGS]...
Options:
--help Show this message and exit.
Commands:
bpe Train BPE model.
decode Decode ids to text.
encode Encode text to ids or subwords.
vocab Print list of learned subwords.
Command bpe allows you to train a Byte Pair Encoding model based on a text file.
$ yttm bpe --help
Usage: yttm bpe [OPTIONS]
Train BPE model.
Options:
--data PATH Training data file path. [required]
--model PATH Output model file path. [required]
--vocab_size INTEGER Number of tokens in the final vocabulary. [required]
--coverage FLOAT Fraction of characters covered by the model. [default: 1.0]
--n_threads INTEGER Number of threads. [default: -1]
--pad_id INTEGER Padding token id. [default: 0]
--unk_id INTEGER Unknown token id. [default: 1]
--bos_id INTEGER 'Begin of sentence' token id. [default: 2]
--eos_id INTEGER 'End of sentence' token id. [default: 3]
--help Show this message and exit.
Command encode applies BPE encoding to a corpus of sentences. Use stdin for input and stdout for output.
By default, encoding works in parallel using n_threads threads. The number of threads is limited to 8 (see benchmark).
With the --stream option, --n_threads will be ignored and all sentences will be processed one by one. Each sentence will be tokenized and written to stdout before the next sentence is read.
$ yttm encode --help
Usage: yttm encode [OPTIONS]
Encode text to ids or subwords.
Options:
--model PATH Path to file with learned model. [required]
--output_type TEXT 'id' or 'subword'. [required]
--n_threads INTEGER Number of threads. [default: -1]
--bos Add 'begin of sentence' token.
--eos Add 'end of sentence' token.
--reverse Reverse output sequence of tokens.
--stream Process each line before reading the next one.
--help Show this message and exit.
Command vocab prints the vocabulary. This can be useful for understanding the model.
$ yttm vocab --help
Usage: yttm vocab [OPTIONS]
Print list of learned subwords.
Options:
--model PATH Path to file with learned model. [required]
--verbose Add merging rules.
--help Show this message and exit.
Command decode converts ids back to text. Use stdin for input and stdout for output.
$ yttm decode --help
Usage: yttm decode [OPTIONS]
Decode ids to text.
Options:
--model PATH Path to file with learned model. [required]
--help Show this message and exit.