pyonmttok

pyonmttok is the Python wrapper for OpenNMT/Tokenizer, a fast and customizable text tokenization library with BPE and SentencePiece support.

Installation:

pip install pyonmttok

Requirements:

  • OS: Linux, macOS, Windows
  • Python version: >= 3.6
  • pip version: >= 19.0

Table of contents

  1. Tokenization
  2. Subword learning
  3. Vocabulary
  4. Token API
  5. Utilities

Tokenization

Example

>>> import pyonmttok
>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens = tokenizer("Hello World!")
>>> tokens
['Hello', 'World', '■!']
>>> tokenizer.detokenize(tokens)
'Hello World!'

Interface

Constructor

tokenizer = pyonmttok.Tokenizer(
    mode: str,
    *,
    lang: Optional[str] = None,
    bpe_model_path: Optional[str] = None,
    bpe_dropout: float = 0,
    vocabulary_path: Optional[str] = None,
    vocabulary_threshold: int = 0,
    sp_model_path: Optional[str] = None,
    sp_nbest_size: int = 0,
    sp_alpha: float = 0.1,
    joiner: str = "■",
    joiner_annotate: bool = False,
    joiner_new: bool = False,
    support_prior_joiners: bool = False,
    spacer_annotate: bool = False,
    spacer_new: bool = False,
    case_feature: bool = False,
    case_markup: bool = False,
    soft_case_regions: bool = False,
    no_substitution: bool = False,
    with_separators: bool = False,
    preserve_placeholders: bool = False,
    preserve_segmented_tokens: bool = False,
    segment_case: bool = False,
    segment_numbers: bool = False,
    segment_alphabet_change: bool = False,
    segment_alphabet: Optional[List[str]] = None,
)

# SentencePiece-compatible tokenizer.
tokenizer = pyonmttok.SentencePieceTokenizer(
    model_path: str,
    vocabulary_path: Optional[str] = None,
    vocabulary_threshold: int = 0,
    nbest_size: int = 0,
    alpha: float = 0.1,
)

# Copy constructor.
tokenizer = pyonmttok.Tokenizer(tokenizer: pyonmttok.Tokenizer)

# Return the tokenization options (excluding subword-related options).
tokenizer.options

See the documentation for a description of each tokenization option.

Tokenization

# Tokenize a text.
# When training=False, subword regularization such as BPE dropout is disabled.
tokenizer.__call__(text: str, training: bool = True) -> List[str]

# Tokenize a text and return optional features.
# When as_token_objects=True, the method returns Token objects (see below).
tokenizer.tokenize(
    text: str,
    as_token_objects: bool = False,
    training: bool = True,
) -> Union[Tuple[List[str], Optional[List[List[str]]]], List[pyonmttok.Token]]

# Tokenize a batch of text.
tokenizer.tokenize_batch(
    batch_text: List[str],
    as_token_objects: bool = False,
    training: bool = True,
) -> Union[Tuple[List[List[str]], List[Optional[List[List[str]]]]], List[List[pyonmttok.Token]]]

# Tokenize a file.
tokenizer.tokenize_file(
    input_path: str,
    output_path: str,
    num_threads: int = 1,
    verbose: bool = False,
    training: bool = True,
    tokens_delimiter: str = " ",
)

Detokenization

# The detokenize method converts a list of tokens back to a string.
tokenizer.detokenize(
    tokens: List[str],
    features: Optional[List[List[str]]] = None,
) -> str
tokenizer.detokenize(tokens: List[pyonmttok.Token]) -> str

# The detokenize_with_ranges method also returns a dictionary mapping a token
# index to a range in the detokenized text.
# Set merge_ranges=True to merge consecutive ranges, e.g. subwords of the same
# token in case of subword tokenization.
# Set unicode_ranges=True to return ranges over Unicode characters instead of bytes.
tokenizer.detokenize_with_ranges(
    tokens: Union[List[str], List[pyonmttok.Token]],
    merge_ranges: bool = False,
    unicode_ranges: bool = False,
) -> Tuple[str, Dict[int, Tuple[int, int]]]

# Detokenize a file.
tokenizer.detokenize_file(
    input_path: str,
    output_path: str,
    tokens_delimiter: str = " ",
)

Subword learning

Example

The Python wrapper supports BPE and SentencePiece subword learning through a common interface:

1. Create the subword learner with the tokenization you want to apply, e.g.:

# BPE is trained and applied on the tokenization output before joiner (or spacer) annotations.
tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True, segment_numbers=True)
learner = pyonmttok.BPELearner(tokenizer=tokenizer, symbols=32000)

# SentencePiece can learn from raw sentences, so a tokenizer is not required.
learner = pyonmttok.SentencePieceLearner(vocab_size=32000, character_coverage=0.98)

2. Feed some raw data:

# Feed detokenized sentences:
learner.ingest("Hello world!")
learner.ingest("How are you?")

# or detokenized text files:
learner.ingest_file("/data/train1.en")
learner.ingest_file("/data/train2.en")

3. Start the learning process:

tokenizer = learner.learn("/data/model-32k")

The returned tokenizer instance can be used to apply subword tokenization on new data.

Interface

# See https://github.com/rsennrich/subword-nmt/blob/master/subword_nmt/learn_bpe.py
# for argument documentation.
learner = pyonmttok.BPELearner(
    tokenizer: Optional[pyonmttok.Tokenizer] = None,  # Defaults to tokenization mode "space".
    symbols: int = 10000,
    min_frequency: int = 2,
    total_symbols: bool = False,
)

# See https://github.com/google/sentencepiece/blob/master/src/spm_train_main.cc
# for available training options.
learner = pyonmttok.SentencePieceLearner(
    tokenizer: Optional[pyonmttok.Tokenizer] = None,  # Defaults to tokenization mode "none".
    keep_vocab: bool = False,  # Keep the generated vocabulary (model_path will act like model_prefix in spm_train)
    **training_options,
)

learner.ingest(text: str)
learner.ingest_file(path: str)
learner.ingest_token(token: Union[str, pyonmttok.Token])

learner.learn(model_path: str, verbose: bool = False) -> pyonmttok.Tokenizer

Vocabulary

Example

tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)

with open("train.txt") as train_file:
    vocab = pyonmttok.build_vocab_from_lines(
        train_file,
        tokenizer=tokenizer,
        maximum_size=32000,
        special_tokens=["<blank>", "<unk>", "<s>", "</s>"],
    )

with open("vocab.txt", "w") as vocab_file:
    for token in vocab.ids_to_tokens:
        vocab_file.write("%s\n" % token)

Interface

# Special tokens are added with ids 0, 1, etc., and are never removed by a resize.
vocab = pyonmttok.Vocab(special_tokens: Optional[List[str]] = None)

# Read-only properties.
vocab.tokens_to_ids -> Dict[str, int]
vocab.ids_to_tokens -> List[str]
vocab.counters -> List[int]

# Get or set the ID returned for out-of-vocabulary tokens.
# By default, it is the ID of the token <unk> if present in the vocabulary, len(vocab) otherwise.
vocab.default_id -> int

vocab.lookup_token(token: str) -> int
vocab.lookup_index(index: int) -> str

# Calls lookup_token on a batch of tokens.
vocab.__call__(tokens: List[str]) -> List[int]

vocab.__len__() -> int                  # Implements: len(vocab)
vocab.__contains__(token: str) -> bool  # Implements: "hello" in vocab
vocab.__getitem__(token: str) -> int    # Implements: vocab["hello"]

# Add tokens to the vocabulary after tokenization.
# If a tokenizer is not set, the text is split on spaces.
vocab.add_from_text(text: str, tokenizer: Optional[pyonmttok.Tokenizer] = None) -> None
vocab.add_from_file(path: str, tokenizer: Optional[pyonmttok.Tokenizer] = None) -> None
vocab.add_token(token: str) -> None

vocab.resize(maximum_size: int = 0, minimum_frequency: int = 1) -> None


# Build a vocabulary from an iterator of lines.
# If a tokenizer is not set, the lines are split on spaces.
pyonmttok.build_vocab_from_lines(
    lines: Iterable[str],
    tokenizer: Optional[pyonmttok.Tokenizer] = None,
    maximum_size: int = 0,
    minimum_frequency: int = 1,
    special_tokens: Optional[List[str]] = None,
) -> pyonmttok.Vocab

# Build a vocabulary from an iterator of tokens.
pyonmttok.build_vocab_from_tokens(
    tokens: Iterable[str],
    maximum_size: int = 0,
    minimum_frequency: int = 1,
    special_tokens: Optional[List[str]] = None,
) -> pyonmttok.Vocab

Token API

The Token API tokenizes text into pyonmttok.Token objects. It is useful for applying logic at the token level while retaining enough information to write the tokenization to disk or detokenize.

Example

>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens = tokenizer.tokenize("Hello World!", as_token_objects=True)
>>> tokens
[Token('Hello'), Token('World'), Token('!', join_left=True)]
>>> tokens[-1].surface
'!'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■!']
>>> tokens[-1].surface = '.'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■.']
>>> tokenizer.detokenize(tokens)
'Hello World.'

Interface

The pyonmttok.Token class has the following attributes:

  • surface: a string, the token value
  • type: a pyonmttok.TokenType value, the type of the token
  • join_left: a boolean, whether the token should be joined to the token on the left or not
  • join_right: a boolean, whether the token should be joined to the token on the right or not
  • preserve: a boolean, whether joiners and spacers can be attached to this token or not
  • features: a list of strings, the features attached to the token
  • spacer: a boolean, whether the token is prefixed by a SentencePiece spacer or not (only set when using SentencePiece)
  • casing: a pyonmttok.Casing value, the casing of the token (only set when tokenizing with case_feature or case_markup)

The pyonmttok.TokenType enumeration is used to identify tokens that were split by a subword tokenization. The enumeration has the following values:

  • TokenType.WORD
  • TokenType.LEADING_SUBWORD
  • TokenType.TRAILING_SUBWORD

The pyonmttok.Casing enumeration is used to identify the original casing of a token that was lowercased by the case_feature or case_markup tokenization options. The enumeration has the following values:

  • Casing.LOWERCASE
  • Casing.UPPERCASE
  • Casing.MIXED
  • Casing.CAPITALIZED
  • Casing.NONE

The Tokenizer instances provide methods to serialize or deserialize Token objects:

# Serialize Token objects to strings that can be saved on disk.
tokenizer.serialize_tokens(
    tokens: List[pyonmttok.Token],
) -> Tuple[List[str], Optional[List[List[str]]]]

# Deserialize strings into Token objects.
tokenizer.deserialize_tokens(
    tokens: List[str],
    features: Optional[List[List[str]]] = None,
) -> List[pyonmttok.Token]

Utilities

Interface

# Returns True if the string has the placeholder format.
pyonmttok.is_placeholder(token: str)

# Sets the random seed for reproducible tokenization.
pyonmttok.set_random_seed(seed: int)
