Datamodels for HF tokenizers

[Logo: a skeleton smoking a cigarette.]

Skeletoken

This package contains Pydantic datamodels that fully describe the tokenizer.json format used by transformers via the tokenizers library. This is useful because working with this format directly is complicated and error-prone.

Rationale

In one sentence: Validate, edit, and transform Hugging Face tokenizers safely.

The Hugging Face tokenizers representation does not reliably allow you to edit tokenizers as structured objects. This means that complex changes to a tokenizer require you to edit the tokenizer.json file by hand, which is annoying because the format of this file is complicated.
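For context, a tokenizer.json file looks roughly like this (heavily abbreviated; most nested options and fields are omitted, and the exact contents vary per tokenizer):

```json
{
  "version": "1.0",
  "truncation": null,
  "padding": null,
  "added_tokens": [{"id": 0, "content": "<unk>", "special": true}],
  "normalizer": {"type": "Lowercase"},
  "pre_tokenizer": {"type": "ByteLevel"},
  "post_processor": {"type": "TemplateProcessing"},
  "decoder": {"type": "ByteLevel"},
  "model": {"type": "BPE", "vocab": {"<unk>": 0}, "merges": []}
}
```

Every one of these sections has its own schema and its own constraints, and they interact: for example, the merge table in `model` must be consistent with `vocab`.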

Furthermore, tokenizers does not give reasonable errors when parsing a tokenizer fails. It does give line/character numbers, but those point to the last character of the section where the parsing fails. For example, inserting an illegal vocabulary item just tells you that there is an issue in the vocabulary somewhere by pointing out the last character of the vocabulary as the place where the error occurs.

This package contains Pydantic datamodels that enforce the same constraints as the tokenizers package. In other words, if you can create a model in this package, the tokenizers package can parse it. This allows you to progressively edit tokenizer JSON files while getting productive error messages.
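To illustrate why validated datamodels give better errors, here is a minimal sketch of the underlying idea using plain Pydantic. The model and field names below are made up for illustration; they are not skeletoken's actual models.

```python
# Sketch: a Pydantic model pinpoints exactly which field failed validation,
# in contrast to an error that only points at the end of a section.
# These class names and fields are hypothetical, not skeletoken's API.
from pydantic import BaseModel, ValidationError


class BpeModel(BaseModel):
    vocab: dict[str, int]


class TokenizerFile(BaseModel):
    version: str
    model: BpeModel


try:
    # "hello" is mapped to a non-integer id, which is illegal.
    TokenizerFile.model_validate(
        {"version": "1.0", "model": {"vocab": {"hello": "oops"}}}
    )
except ValidationError as e:
    # The error location names the offending key directly,
    # e.g. ('model', 'vocab', 'hello').
    print(e.errors()[0]["loc"])
```

skeletoken's models work the same way, but cover the full tokenizer.json schema.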

Installation

Install it via pip:

pip install skeletoken

What can it do?

skeletoken allows you to:

  • validate tokenizer.json with human-readable errors
  • edit tokenizers as typed objects (Pydantic)
  • apply common transformations (decasing, greedy merges, etc.)
  • auto-fix common inconsistencies
  • round-trip to tokenizers and transformers
  • apply tokenization changes to transformers, sentence-transformers and pylate models.

Example

Here are some examples of what skeletoken can do:

Autofixing a tokenizer

skeletoken autofixes any tokenizer you load; see automatic checks for what gets fixed. For example, the Qwen/Qwen3-0.6B tokenizer has many special tokens that are not part of the regular tokenizer vocabulary. This leads to a mismatch between the reported vocabulary size and the number of tokens the tokenizer can actually produce. skeletoken adds these special tokens to the vocabulary automatically.

from transformers import AutoTokenizer
from skeletoken import TokenizerModel

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen3-0.6B")
# Mismatch due to missing special tokens
print(tokenizer.vocab_size)  # 151643
print(len(tokenizer))  # 151669

# Load a model from the hub.
tokenizer_model = TokenizerModel.from_pretrained("Qwen/Qwen3-0.6B")
# Convert the tokenizer to transformers
tokenizer = tokenizer_model.to_transformers()
# All missing special tokens have been added to the vocabulary
print(tokenizer.vocab_size)  # 151669
print(len(tokenizer))  # 151669

Adding components to a tokenizer

skeletoken can add components to a tokenizer. First we load one, and inspect it:

from skeletoken import TokenizerModel

# Directly pull a tokenizer from the hub
tokenizer_model = TokenizerModel.from_pretrained("gpt2")

print(tokenizer_model.model.type)
# ModelType.BPE
print(tokenizer_model.pre_tokenizer.type)
# PreTokenizerType.BYTELEVEL

We can then add a digit splitter to the tokenizer.

from skeletoken import TokenizerModel
from skeletoken.pretokenizers import DigitsPreTokenizer

model = TokenizerModel.from_pretrained("gpt2")
tok = model.to_tokenizer()

# Create the digits pretokenizer
digits = DigitsPreTokenizer(individual_digits=True)
model = model.add_pre_tokenizer(digits)

new_tok = model.to_tokenizer()
print(tok.encode("hello 123").tokens)
# ['hello', 'Ġ123']
print(new_tok.encode("hello 123").tokens)
# ['hello', 'Ġ', '1', '2', '3']

Decasing a tokenizer

For background, see this blog post. Decasing is super easy using skeletoken.

from tokenizers import Tokenizer
from skeletoken import TokenizerModel

model_name = "intfloat/multilingual-e5-small"

tokenizer = Tokenizer.from_pretrained(model_name)

print([tokenizer.encode(x).tokens for x in ["Amsterdam", "amsterdam"]])
# [['<s>', '▁Amsterdam', '</s>'], ['<s>', '▁am', 'ster', 'dam', '</s>']]

model = TokenizerModel.from_pretrained(model_name)
model = model.decase_vocabulary()

lower_tokenizer = model.to_tokenizer()
print([lower_tokenizer.encode(x).tokens for x in ["Amsterdam", "amsterdam"]])
# [['<s>', '▁amsterdam', '</s>'], ['<s>', '▁amsterdam', '</s>']]

Making a tokenizer greedy

For background, see this blog post. Like decasing, turning any tokenizer into a greedy one is super easy using skeletoken.

from tokenizers import Tokenizer
from skeletoken import TokenizerModel

model_name = "gpt2"

tokenizer = Tokenizer.from_pretrained(model_name)

print([tokenizer.encode(x).tokens for x in [" hellooo", " bluetooth"]])
# [['Ġhell', 'ooo'], ['Ġblu', 'etooth']]

model = TokenizerModel.from_pretrained(model_name)
model = model.make_model_greedy()
greedy_tokenizer = model.to_tokenizer()
print([greedy_tokenizer.encode(x).tokens for x in [" hellooo", " bluetooth"]])
# [['Ġhello', 'oo'], ['Ġblue', 'too', 'th']]

Roadmap

Here's a rough roadmap:

  • ✅ Add automated lowercasing (see blog)
  • ✅ Add vocabulary changes + checks (e.g., check the merge table if a token is added)
  • ✅ Add helper functions for adding modules
  • ✅ Add secondary constraints (e.g., if an AddedToken refers to a vocabulary item that does not exist, we should crash.)
  • ✅ Add a front end for the Hugging Face trainer
  • ✅ Add automatic model editing
  • Consistent tokenizer hashing: instantly know if two tokenizers implement the same thing.
  • Add a front end for sentencepiece training.
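The consistent-hashing item on the roadmap could plausibly be approached by hashing a canonical serialization of the tokenizer, so that incidental differences such as key order do not change the digest. This is a hypothetical sketch, not skeletoken's planned implementation:

```python
# One plausible approach to consistent tokenizer hashing (hypothetical,
# not skeletoken's implementation): hash a canonical JSON serialization
# so that key order does not affect the digest.
import hashlib
import json


def tokenizer_hash(config: dict) -> str:
    canonical = json.dumps(
        config, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()


a = {"model": {"type": "BPE"}, "version": "1.0"}
b = {"version": "1.0", "model": {"type": "BPE"}}
print(tokenizer_hash(a) == tokenizer_hash(b))  # True
```

A real implementation would additionally have to decide which fields are semantically irrelevant (e.g., token order in the added-tokens list) and normalize those away before hashing.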

License

MIT

Author

Stéphan Tulkens
