pyonmttok
pyonmttok is the Python wrapper for OpenNMT/Tokenizer, a fast and customizable text tokenization library with BPE and SentencePiece support.
Installation:
pip install pyonmttok
Requirements:
- OS: Linux, macOS, Windows
- Python version: >= 3.6
- pip version: >= 19.0
Table of contents
- Tokenization
- Subword learning
- Token API
- Utilities

Tokenization
Example
>>> import pyonmttok
>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens, _ = tokenizer.tokenize("Hello World!")
>>> tokens
['Hello', 'World', '■!']
>>> tokenizer.detokenize(tokens)
'Hello World!'
Interface
Constructor
tokenizer = pyonmttok.Tokenizer(
mode: str,
*,
lang: Optional[str] = None,
bpe_model_path: Optional[str] = None,
bpe_dropout: float = 0,
vocabulary_path: Optional[str] = None,
vocabulary_threshold: int = 0,
sp_model_path: Optional[str] = None,
sp_nbest_size: int = 0,
sp_alpha: float = 0.1,
joiner: str = "■",
joiner_annotate: bool = False,
joiner_new: bool = False,
support_prior_joiners: bool = False,
spacer_annotate: bool = False,
spacer_new: bool = False,
case_feature: bool = False,
case_markup: bool = False,
soft_case_regions: bool = False,
no_substitution: bool = False,
with_separators: bool = False,
preserve_placeholders: bool = False,
preserve_segmented_tokens: bool = False,
segment_case: bool = False,
segment_numbers: bool = False,
segment_alphabet_change: bool = False,
segment_alphabet: Optional[List[str]] = None,
)
# SentencePiece-compatible tokenizer.
tokenizer = pyonmttok.SentencePieceTokenizer(
model_path: str,
vocabulary_path: Optional[str] = None,
vocabulary_threshold: int = 0,
nbest_size: int = 0,
alpha: float = 0.1,
)
# Copy constructor.
tokenizer = pyonmttok.Tokenizer(tokenizer: pyonmttok.Tokenizer)
# Return the tokenization options (excluding subword-related options).
tokenizer.options
See the documentation for a description of each tokenization option.
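As an illustration, here is a minimal sketch of the copy constructor and the options property. It assumes the options object exposes attributes named after the constructor arguments:

import pyonmttok

tokenizer = pyonmttok.Tokenizer("conservative", joiner_annotate=True)

# Clone an existing tokenizer with the copy constructor.
copied = pyonmttok.Tokenizer(tokenizer)

# Inspect a tokenization option; attribute names are assumed to
# mirror the constructor arguments.
print(copied.options.joiner_annotate)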
Tokenization
# By default, tokenize returns the tokens and features.
# When as_token_objects=True, the method returns Token objects (see below).
# When training=False, subword regularization such as BPE dropout is disabled.
tokenizer.tokenize(
text: str,
as_token_objects: bool = False,
training: bool = True,
) -> Union[Tuple[List[str], Optional[List[List[str]]]], List[pyonmttok.Token]]
# Batch version of the tokenize method.
tokenizer.tokenize_batch(
batch_text: List[str],
as_token_objects: bool = False,
training: bool = True,
) -> Union[Tuple[List[List[str]], List[Optional[List[List[str]]]]], List[List[pyonmttok.Token]]]
# Tokenize a file.
tokenizer.tokenize_file(
input_path: str,
output_path: str,
num_threads: int = 1,
verbose: bool = False,
training: bool = True,
tokens_delimiter: str = " ",
)
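For example, a hedged sketch of batch and file tokenization (the file paths are placeholders):

import pyonmttok

tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)

# Tokenize several sentences in one call.
batch_tokens, batch_features = tokenizer.tokenize_batch(
    ["Hello World!", "How are you?"]
)

# Tokenize a file line by line with multiple threads
# (input.txt and output.txt are placeholder paths).
tokenizer.tokenize_file("input.txt", "output.txt", num_threads=4)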
Detokenization
# The detokenize method converts a list of tokens back to a string.
tokenizer.detokenize(
tokens: List[str],
features: Optional[List[List[str]]] = None,
) -> str
tokenizer.detokenize(tokens: List[pyonmttok.Token]) -> str
# The detokenize_with_ranges method also returns a dictionary mapping a token
# index to a range in the detokenized text.
# Set merge_ranges=True to merge consecutive ranges, e.g. subwords of the same
# token in case of subword tokenization.
# Set unicode_ranges=True to return ranges over Unicode characters instead of bytes.
tokenizer.detokenize_with_ranges(
tokens: Union[List[str], List[pyonmttok.Token]],
merge_ranges: bool = False,
unicode_ranges: bool = False,
) -> Tuple[str, Dict[int, Tuple[int, int]]]
# Detokenize a file.
tokenizer.detokenize_file(
input_path: str,
output_path: str,
tokens_delimiter: str = " ",
)
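As an illustration, a minimal sketch of recovering token positions with detokenize_with_ranges (the exact range values depend on the tokenization and on the merge and Unicode options):

import pyonmttok

tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
tokens, _ = tokenizer.tokenize("Hello World!")

# Map each token index to a range in the detokenized text.
text, ranges = tokenizer.detokenize_with_ranges(tokens, merge_ranges=True)
for index, span in sorted(ranges.items()):
    print(tokens[index], span)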
Subword learning
Example
The Python wrapper supports BPE and SentencePiece subword learning through a common interface:
1. Create the subword learner with the tokenization you want to apply, e.g.:
# BPE is trained and applied on the tokenization output before joiner (or spacer) annotations.
tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True, segment_numbers=True)
learner = pyonmttok.BPELearner(tokenizer=tokenizer, symbols=32000)
# SentencePiece can learn from raw sentences, so a tokenizer is not required.
learner = pyonmttok.SentencePieceLearner(vocab_size=32000, character_coverage=0.98)
2. Feed some raw data:
# Feed detokenized sentences:
learner.ingest("Hello world!")
learner.ingest("How are you?")
# or detokenized text files:
learner.ingest_file("/data/train1.en")
learner.ingest_file("/data/train2.en")
3. Start the learning process:
tokenizer = learner.learn("/data/model-32k")
The returned tokenizer instance can be used to apply subword tokenization on new data.
Interface
# See https://github.com/rsennrich/subword-nmt/blob/master/subword_nmt/learn_bpe.py
# for argument documentation.
learner = pyonmttok.BPELearner(
tokenizer: Optional[pyonmttok.Tokenizer] = None, # Defaults to tokenization mode "space".
symbols: int = 10000,
min_frequency: int = 2,
total_symbols: bool = False,
)
# See https://github.com/google/sentencepiece/blob/master/src/spm_train_main.cc
# for available training options.
learner = pyonmttok.SentencePieceLearner(
tokenizer: Optional[pyonmttok.Tokenizer] = None, # Defaults to tokenization mode "none".
keep_vocab: bool = False, # Keep the generated vocabulary (model_path will act like model_prefix in spm_train)
**training_options,
)
learner.ingest(text: str)
learner.ingest_file(path: str)
learner.ingest_token(token: Union[str, pyonmttok.Token])
learner.learn(model_path: str, verbose: bool = False) -> pyonmttok.Tokenizer
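For instance, a minimal SentencePieceLearner sketch (the file path is a placeholder, and the extra keyword arguments are forwarded to spm_train):

import pyonmttok

learner = pyonmttok.SentencePieceLearner(
    vocab_size=8000,
    character_coverage=1.0,
    keep_vocab=True,  # also keep the generated vocabulary
)
learner.ingest_file("train.en")  # placeholder path
tokenizer = learner.learn("sp-8k", verbose=True)
tokens, _ = tokenizer.tokenize("Hello World!")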
Token API
The Token API tokenizes text into pyonmttok.Token objects. It is useful for applying custom logic at the token level while retaining enough information to write the tokenization to disk or detokenize it later.
Example
>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens = tokenizer.tokenize("Hello World!", as_token_objects=True)
>>> tokens
[Token('Hello'), Token('World'), Token('!', join_left=True)]
>>> tokens[-1].surface
'!'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■!']
>>> tokens[-1].surface = '.'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■.']
>>> tokenizer.detokenize(tokens)
'Hello World.'
Interface
The pyonmttok.Token class has the following attributes:
- surface: a string, the token value
- type: a pyonmttok.TokenType value, the type of the token
- join_left: a boolean, whether the token should be joined to the token on the left
- join_right: a boolean, whether the token should be joined to the token on the right
- preserve: a boolean, whether joiners and spacers can be attached to this token
- features: a list of strings, the features attached to the token
- spacer: a boolean, whether the token is prefixed by a SentencePiece spacer (only set when using SentencePiece)
- casing: a pyonmttok.Casing value, the casing of the token (only set when tokenizing with case_feature or case_markup)
The pyonmttok.TokenType enumeration identifies tokens that were split by a subword tokenization. It has the following values:
TokenType.WORD
TokenType.LEADING_SUBWORD
TokenType.TRAILING_SUBWORD
The pyonmttok.Casing enumeration identifies the original casing of a token that was lowercased by the case_feature or case_markup tokenization options. It has the following values:
Casing.LOWERCASE
Casing.UPPERCASE
Casing.MIXED
Casing.CAPITALIZED
Casing.NONE
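For instance, a sketch of inspecting the recorded casing with case_markup enabled (the values noted in the comment are illustrative expectations):

import pyonmttok

tokenizer = pyonmttok.Tokenizer("aggressive", case_markup=True)
tokens = tokenizer.tokenize("Hello WORLD", as_token_objects=True)
for token in tokens:
    # "Hello" is expected to report Casing.CAPITALIZED and
    # "WORLD" Casing.UPPERCASE, with lowercased surfaces.
    print(token.surface, token.casing)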
Tokenizer instances provide methods to serialize or deserialize Token objects:
# Serialize Token objects to strings that can be saved on disk.
tokenizer.serialize_tokens(
tokens: List[pyonmttok.Token],
) -> Tuple[List[str], Optional[List[List[str]]]]
# Deserialize strings into Token objects.
tokenizer.deserialize_tokens(
tokens: List[str],
features: Optional[List[List[str]]] = None,
) -> List[pyonmttok.Token]
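A minimal round-trip sketch:

import pyonmttok

tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
tokens = tokenizer.tokenize("Hello World!", as_token_objects=True)

# Serialize the Token objects to plain strings...
serialized, features = tokenizer.serialize_tokens(tokens)

# ...and rebuild equivalent Token objects later.
restored = tokenizer.deserialize_tokens(serialized, features)
assert tokenizer.detokenize(restored) == "Hello World!"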
Utilities
Interface
# Returns True if the string has the placeholder format.
pyonmttok.is_placeholder(token: str)
# Sets the random seed for reproducible tokenization.
pyonmttok.set_random_seed(seed: int)
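For example (the placeholder token shown uses the ｟...｠ delimiters expected by the library):

>>> import pyonmttok
>>> pyonmttok.is_placeholder("｟URL｠")
True
>>> pyonmttok.is_placeholder("Hello")
False
>>> pyonmttok.set_random_seed(42)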