pyonmttok

pyonmttok is the Python wrapper for OpenNMT/Tokenizer, a fast and customizable text tokenization library with BPE and SentencePiece support.

Installation:

pip install pyonmttok

Requirements:

  • OS: Linux, macOS, Windows
  • Python version: >= 3.6
  • pip version: >= 19.0

Table of contents

  1. Tokenization
  2. Subword learning
  3. Vocabulary
  4. Token API
  5. Utilities

Tokenization

Example

>>> import pyonmttok
>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens = tokenizer("Hello World!")
>>> tokens
['Hello', 'World', '■!']
>>> tokenizer.detokenize(tokens)
'Hello World!'

Interface

Constructor

tokenizer = pyonmttok.Tokenizer(
    mode: str,
    *,
    lang: Optional[str] = None,
    bpe_model_path: Optional[str] = None,
    bpe_dropout: float = 0,
    vocabulary_path: Optional[str] = None,
    vocabulary_threshold: int = 0,
    sp_model_path: Optional[str] = None,
    sp_nbest_size: int = 0,
    sp_alpha: float = 0.1,
    joiner: str = "■",
    joiner_annotate: bool = False,
    joiner_new: bool = False,
    support_prior_joiners: bool = False,
    spacer_annotate: bool = False,
    spacer_new: bool = False,
    case_feature: bool = False,
    case_markup: bool = False,
    soft_case_regions: bool = False,
    no_substitution: bool = False,
    with_separators: bool = False,
    preserve_placeholders: bool = False,
    preserve_segmented_tokens: bool = False,
    segment_case: bool = False,
    segment_numbers: bool = False,
    segment_alphabet_change: bool = False,
    segment_alphabet: Optional[List[str]] = None,
)

# SentencePiece-compatible tokenizer.
tokenizer = pyonmttok.SentencePieceTokenizer(
    model_path: str,
    vocabulary_path: Optional[str] = None,
    vocabulary_threshold: int = 0,
    nbest_size: int = 0,
    alpha: float = 0.1,
)

# Copy constructor.
tokenizer = pyonmttok.Tokenizer(tokenizer: pyonmttok.Tokenizer)

# Return the tokenization options (excluding subword-related options).
tokenizer.options

See the documentation for a description of each tokenization option.

Tokenization

# Tokenize a text.
# When training=False, subword regularization such as BPE dropout is disabled.
tokenizer.__call__(text: str, training: bool = True) -> List[str]

# Tokenize a text and return optional features.
# When as_token_objects=True, the method returns Token objects (see below).
tokenizer.tokenize(
    text: str,
    as_token_objects: bool = False,
    training: bool = True,
) -> Union[Tuple[List[str], Optional[List[List[str]]]], List[pyonmttok.Token]]

# Tokenize a batch of text.
tokenizer.tokenize_batch(
    batch_text: List[str],
    as_token_objects: bool = False,
    training: bool = True,
) -> Union[Tuple[List[List[str]], List[Optional[List[List[str]]]]], List[List[pyonmttok.Token]]]

# Tokenize a file.
tokenizer.tokenize_file(
    input_path: str,
    output_path: str,
    num_threads: int = 1,
    verbose: bool = False,
    training: bool = True,
    tokens_delimiter: str = " ",
)

Detokenization

# The detokenize method converts a list of tokens back to a string.
tokenizer.detokenize(
    tokens: List[str],
    features: Optional[List[List[str]]] = None,
) -> str
tokenizer.detokenize(tokens: List[pyonmttok.Token]) -> str

# The detokenize_with_ranges method also returns a dictionary mapping a token
# index to a range in the detokenized text.
# Set merge_ranges=True to merge consecutive ranges, e.g. subwords of the same
# token in case of subword tokenization.
# Set unicode_ranges=True to return ranges over Unicode characters instead of bytes.
tokenizer.detokenize_with_ranges(
    tokens: Union[List[str], List[pyonmttok.Token]],
    merge_ranges: bool = False,
    unicode_ranges: bool = False,
) -> Tuple[str, Dict[int, Tuple[int, int]]]

# Detokenize a file.
tokenizer.detokenize_file(
    input_path: str,
    output_path: str,
    tokens_delimiter: str = " ",
)

Subword learning

Example

The Python wrapper supports BPE and SentencePiece subword learning through a common interface:

1. Create the subword learner with the tokenization you want to apply, e.g.:

# BPE is trained and applied on the tokenization output before joiner (or spacer) annotations.
tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True, segment_numbers=True)
learner = pyonmttok.BPELearner(tokenizer=tokenizer, symbols=32000)

# SentencePiece can learn from raw sentences, so a tokenizer is not required.
learner = pyonmttok.SentencePieceLearner(vocab_size=32000, character_coverage=0.98)

2. Feed some raw data:

# Feed detokenized sentences:
learner.ingest("Hello world!")
learner.ingest("How are you?")

# or detokenized text files:
learner.ingest_file("/data/train1.en")
learner.ingest_file("/data/train2.en")

3. Start the learning process:

tokenizer = learner.learn("/data/model-32k")

The returned tokenizer instance can be used to apply subword tokenization on new data.

Interface

# See https://github.com/rsennrich/subword-nmt/blob/master/subword_nmt/learn_bpe.py
# for argument documentation.
learner = pyonmttok.BPELearner(
    tokenizer: Optional[pyonmttok.Tokenizer] = None,  # Defaults to tokenization mode "space".
    symbols: int = 10000,
    min_frequency: int = 2,
    total_symbols: bool = False,
)

# See https://github.com/google/sentencepiece/blob/master/src/spm_train_main.cc
# for available training options.
learner = pyonmttok.SentencePieceLearner(
    tokenizer: Optional[pyonmttok.Tokenizer] = None,  # Defaults to tokenization mode "none".
    keep_vocab: bool = False,  # Keep the generated vocabulary (model_path will act like model_prefix in spm_train)
    **training_options,
)

learner.ingest(text: str)
learner.ingest_file(path: str)
learner.ingest_token(token: Union[str, pyonmttok.Token])

learner.learn(model_path: str, verbose: bool = False) -> pyonmttok.Tokenizer

Vocabulary

Example

tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)

with open("train.txt") as train_file:
    vocab = pyonmttok.build_vocab_from_lines(
        train_file,
        tokenizer=tokenizer,
        maximum_size=32000,
        special_tokens=["<blank>", "<unk>", "<s>", "</s>"],
    )

with open("vocab.txt", "w") as vocab_file:
    for token in vocab.ids_to_tokens:
        vocab_file.write("%s\n" % token)

Interface

# Special tokens are added with ids 0, 1, etc., and are never removed by a resize.
vocab = pyonmttok.Vocab(special_tokens: Optional[List[str]] = None)

# Read-only properties.
vocab.tokens_to_ids -> Dict[str, int]
vocab.ids_to_tokens -> List[str]
vocab.counters -> List[int]

# Get or set the ID returned for out-of-vocabulary tokens.
# By default, it is the ID of the token <unk> if present in the vocabulary, len(vocab) otherwise.
vocab.default_id -> int

vocab.lookup_token(token: str) -> int
vocab.lookup_index(index: int) -> str

# Calls lookup_token on a batch of tokens.
vocab.__call__(tokens: List[str]) -> List[int]

vocab.__len__() -> int                  # Implements: len(vocab)
vocab.__contains__(token: str) -> bool  # Implements: "hello" in vocab
vocab.__getitem__(token: str) -> int    # Implements: vocab["hello"]

# Add tokens to the vocabulary after tokenization.
# If a tokenizer is not set, the text is split on spaces.
vocab.add_from_text(text: str, tokenizer: Optional[pyonmttok.Tokenizer] = None) -> None
vocab.add_from_file(path: str, tokenizer: Optional[pyonmttok.Tokenizer] = None) -> None
vocab.add_token(token: str) -> None

vocab.resize(maximum_size: int = 0, minimum_frequency: int = 1) -> None


# Build a vocabulary from an iterator of lines.
# If a tokenizer is not set, the lines are split on spaces.
pyonmttok.build_vocab_from_lines(
    lines: Iterable[str],
    tokenizer: Optional[pyonmttok.Tokenizer] = None,
    maximum_size: int = 0,
    minimum_frequency: int = 1,
    special_tokens: Optional[List[str]] = None,
) -> pyonmttok.Vocab

# Build a vocabulary from an iterator of tokens.
pyonmttok.build_vocab_from_tokens(
    tokens: Iterable[str],
    maximum_size: int = 0,
    minimum_frequency: int = 1,
    special_tokens: Optional[List[str]] = None,
) -> pyonmttok.Vocab

Token API

The Token API tokenizes text into pyonmttok.Token objects. It is useful for applying logic at the token level while retaining enough information to write the tokenization to disk or detokenize it.

Example

>>> tokenizer = pyonmttok.Tokenizer("aggressive", joiner_annotate=True)
>>> tokens = tokenizer.tokenize("Hello World!", as_token_objects=True)
>>> tokens
[Token('Hello'), Token('World'), Token('!', join_left=True)]
>>> tokens[-1].surface
'!'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■!']
>>> tokens[-1].surface = '.'
>>> tokenizer.serialize_tokens(tokens)[0]
['Hello', 'World', '■.']
>>> tokenizer.detokenize(tokens)
'Hello World.'

Interface

The pyonmttok.Token class has the following attributes:

  • surface: a string, the token value
  • type: a pyonmttok.TokenType value, the type of the token
  • join_left: a boolean, whether the token should be joined to the token on the left or not
  • join_right: a boolean, whether the token should be joined to the token on the right or not
  • preserve: a boolean, whether joiners and spacers can be attached to this token or not
  • features: a list of strings, the features attached to the token
  • spacer: a boolean, whether the token is prefixed by a SentencePiece spacer or not (only set when using SentencePiece)
  • casing: a pyonmttok.Casing value, the casing of the token (only set when tokenizing with case_feature or case_markup)

The pyonmttok.TokenType enumeration is used to identify tokens that were split by subword tokenization. The enumeration has the following values:

  • TokenType.WORD
  • TokenType.LEADING_SUBWORD
  • TokenType.TRAILING_SUBWORD

The pyonmttok.Casing enumeration is used to identify the original casing of a token that was lowercased by the case_feature or case_markup tokenization options. The enumeration has the following values:

  • Casing.LOWERCASE
  • Casing.UPPERCASE
  • Casing.MIXED
  • Casing.CAPITALIZED
  • Casing.NONE

The Tokenizer instances provide methods to serialize or deserialize Token objects:

# Serialize Token objects to strings that can be saved on disk.
tokenizer.serialize_tokens(
    tokens: List[pyonmttok.Token],
) -> Tuple[List[str], Optional[List[List[str]]]]

# Deserialize strings into Token objects.
tokenizer.deserialize_tokens(
    tokens: List[str],
    features: Optional[List[List[str]]] = None,
) -> List[pyonmttok.Token]

Utilities

Interface

# Returns True if the string has the placeholder format.
pyonmttok.is_placeholder(token: str)

# Sets the random seed for reproducible tokenization.
pyonmttok.set_random_seed(seed: int)
