
Library for manipulating an existing tokenizer.

Project description

Tokenizer-Changer

Python library for manipulating an existing tokenizer.

The solution was tested on the Llama3-8B tokenizer.


Installation

Installation from PyPI:

pip install tokenizerchanger

Usage

changer = TokenizerChanger(tokenizer, space_sign)

Creates an object of the TokenizerChanger class, which optionally takes an existing tokenizer and a space sign; the space sign differs from one tokenizer to another. The tokenizer can be a PreTrainedTokenizerFast instance from the 🤗 transformers library.
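A minimal end-to-end setup might look like the following sketch; the model id and the import path are assumptions for illustration, so adjust them to your environment:

from transformers import AutoTokenizer
from tokenizerchanger import TokenizerChanger  # import path is an assumption

# Any PreTrainedTokenizerFast works; the Llama3-8B id below is illustrative
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")
changer = TokenizerChanger(tokenizer, "Ġ")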

changer.load_tokenizer(tokenizer)

If you did not pass a tokenizer when constructing the TokenizerChanger, you can load it with this function.

changer.set_space_sign(space_sign)

If you did not set the space sign when constructing the TokenizerChanger, you can set it with this function. The default space sign is Ġ.

Deletion

changer.delete_tokens(list_of_unwanted_tokens, include_substrings)

Deletes the unwanted tokens from the tokenizer. If include_substrings is True (the default), every occurrence of each token is deleted, including occurrences inside other tokens.
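For example, a short sketch of the difference the flag makes (the token strings are illustrative):

# include_substrings=True (default): deleting "test" also removes
# tokens that contain it, such as "testing" or "Ġtest"
changer.delete_tokens(["test"])

# include_substrings=False: only the exact token "test" is removed
changer.delete_tokens(["test"], include_substrings=False)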

changer.delete_k_least_frequent_tokens(k=1000)
changer.delete_k_least_frequent_tokens(k=1000, exclude=list_of_tokens)

Deletes the k least frequent tokens. The exclude argument lists tokens to be ignored during the deletion of the least frequent tokens.

changer.delete_overlaps(vocab)

Finds and deletes from the tokenizer all tokens that also appear in the vocab variable. Note that vocab must be a dict.
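For example, get_vocab() on any 🤗 tokenizer returns exactly the dict this function expects, so you can delete everything your tokenizer shares with another one (the gpt2 id is illustrative):

from transformers import AutoTokenizer

other_tokenizer = AutoTokenizer.from_pretrained("gpt2")
changer.delete_overlaps(other_tokenizer.get_vocab())  # token -> id dict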

changer.delete_inappropriate_merges(vocab)

Deletes all merges from the tokenizer that contradict the vocab variable. Note that vocab must be a list[str].
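A hedged sketch, assuming a merge "contradicts" the list when it references tokens outside of it (the token strings are illustrative):

allowed_tokens = ["Ġhe", "llo", "Ġhello"]
# Removes merges that refer to tokens not present in allowed_tokens
changer.delete_inappropriate_merges(allowed_tokens)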

Addition

These functions were added because the built-in functions do not add tokens/merges properly once some tokens have been deleted: encoding the same text can then produce more tokens, even after the necessary tokens have been added back.

changer.add_tokens(list_of_tokens)

Adds the tokens from the list. Indices are assigned automatically.

changer.add_merges(list_of_merges)

Adds the merges from the list. If the tokens required for a merge are missing, their addition will be suggested.
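A combined sketch; the merge format below ("left right" pair strings, as stored in a BPE tokenizer.json) is an assumption:

changer.add_tokens(["Ġtoken", "izer", "Ġtokenizer"])  # ids assigned automatically
changer.add_merges(["Ġtoken izer"])  # missing tokens would be suggested for addition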

"Get" functions

changer.get_overlapping_tokens(vocab)

Returns the intersection of the tokenizer's vocabulary and the vocab variable. Note that vocab must be a dict.

changer.get_overlapping_merges(merges)

Returns the intersection of the tokenizer's merges and the merges variable. Note that merges must be a list.
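For example, to inspect overlaps without deleting anything (other_tokenizer and some_merges are placeholders):

shared_tokens = changer.get_overlapping_tokens(other_tokenizer.get_vocab())  # dict
shared_merges = changer.get_overlapping_merges(some_merges)  # list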

Saving

changer.save_tokenizer(path)

Saves the current state of the changed tokenizer, including its configuration files, into the path folder (./updated_tokenizer by default).

tokenizer = changer.updated_tokenizer()

Returns the changed tokenizer.
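Putting both together:

changer.save_tokenizer("./updated_tokenizer")  # writes the tokenizer files to this folder
tokenizer = changer.updated_tokenizer()  # or keep working with the object in memory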

Download files

Download the file for your platform.

Source Distribution

TokenizerChanger-0.3.3.tar.gz (10.0 kB)

Uploaded Source

Built Distribution


TokenizerChanger-0.3.3-py3-none-any.whl (10.4 kB)

Uploaded Python 3

File details

Details for the file TokenizerChanger-0.3.3.tar.gz.

File metadata

  • Download URL: TokenizerChanger-0.3.3.tar.gz
  • Size: 10.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.11

File hashes

Hashes for TokenizerChanger-0.3.3.tar.gz
SHA256: 34b9364c2653d5bac2fa1624f712fc8f7cdee521ec19a25d19a846684f911acd
MD5: c0d5bb2970d54a1c965ccc8541816d3e
BLAKE2b-256: c3a5180e1e725bed479eb9b50eef759ee15e15a0ea33b146e5f05c3ef7a62b5e


File details

Details for the file TokenizerChanger-0.3.3-py3-none-any.whl.

File metadata

File hashes

Hashes for TokenizerChanger-0.3.3-py3-none-any.whl
SHA256: 1b473c2d0b31cad60b304d5a1393a3f49f2ec29bf482218fd238eea511cc4c17
MD5: 2f3019ed7ebb11672689d6b9ec19ba61
BLAKE2b-256: 6fe58a30e50545508371e2ff68edaab797d797b4b245b9868b50666fc47f3dbd

