True UTF-8 tokenizer for byte-level models

Project description

Back to Bytes: Revisiting Tokenization Through UTF-8

The full writeup can be found in our paper.

This module includes a real byte-level tokenizer for text, which encodes text into a sequence of bytes (0-255). Unlike ByT5Tokenizer, for example, UTF8Tokenizer is implemented from scratch and is much more efficient.

Other "byte-level" tokenizers usually add various extra "special tokens" (e.g., <pad>, <unk>), which complicates the encoding and decoding logic and pushes token ids above 255.

Instead, we rely on the C0 control characters (0-31) as special tokens, since they do not appear in normal text.
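For intuition, here is a minimal sketch in plain Python (not this library's API; the STX/ETX markers are hypothetical) of why byte ids always fit in 0-255 and why the C0 range is free for special tokens:

```python
# Minimal sketch (plain Python, not this library's API): token ids are
# simply the UTF-8 bytes, so they always fit in 0-255, and the C0 range
# (0-31) never occurs in normal text, leaving it free for special tokens.
text = "héllo"
ids = list(text.encode("utf-8"))
assert all(0 <= i <= 255 for i in ids)

STX, ETX = 0x02, 0x03  # hypothetical start/end-of-text markers from C0
framed = [STX] + ids + [ETX]

# Decoding drops the control bytes and recovers the original text exactly.
decoded = bytes(b for b in framed if b > 31).decode("utf-8")
assert decoded == "héllo"
```

Because the vocabulary never grows past 256 entries, no id remapping or special-token bookkeeping is needed on either the encode or decode path.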

Usage

pip install utf8-tokenizer

Tokenization:

from utf8_tokenizer.tokenizer import UTF8Tokenizer

tokenizer = UTF8Tokenizer()

texts = ["word", "or multiple"]
print(tokenizer(texts))

Chat Template:

from utf8_tokenizer.tokenizer import UTF8Tokenizer
from utf8_tokenizer.control import visualize_control_tokens

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hey, what's 1+1?"},
    {"role": "assistant", "content": "1+1 is 2."},
]

tokenizer = UTF8Tokenizer()
text = tokenizer.apply_chat_template(messages, tokenize=False)

# Visualize the text with special tokens
print(visualize_control_tokens(text))

Bit-biased byte embeddings:

from transformers import AutoModelForCausalLM

# Load example model
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny-lm")
model.resize_token_embeddings(256)

from utf8_tokenizer.embeddings import patch_embedding_layers, join_embedding_layers

patch_embedding_layers(model) # Apply bit-bias for training

#
# Train your model...
#

join_embedding_layers(model) # Fold to a single embedding layer for inference
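The bit-bias idea can be sketched independently of the library: alongside a per-byte embedding, each of the byte's 8 bits contributes a learned bias vector, so bytes that share bit patterns share parameters. The class below is an illustrative reimplementation in plain PyTorch, not the package's actual code. Since the bias is linear in fixed bit patterns, it can be folded into a single 256-row embedding table for inference, which is the kind of transformation a join step exploits.

```python
import torch
import torch.nn as nn

# Illustrative sketch of bit-biased byte embeddings (not the package's
# actual implementation): embedding(byte) = byte_emb[byte] + sum over set
# bits of a learned per-bit-position bias vector.
class BitBiasedEmbedding(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.byte_emb = nn.Embedding(256, dim)
        self.bit_emb = nn.Embedding(8, dim)  # one bias vector per bit position

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # bits[..., k] is 1 when bit k of the byte is set
        bits = (ids.unsqueeze(-1) >> torch.arange(8)) & 1
        return self.byte_emb(ids) + bits.float() @ self.bit_emb.weight

emb = BitBiasedEmbedding(16)
out = emb(torch.tensor([[72, 105]]))  # bytes of "Hi"
assert out.shape == (1, 2, 16)
```

Folding for inference amounts to precomputing `byte_emb.weight + bit_pattern_matrix @ bit_emb.weight` for all 256 bytes, leaving a single ordinary embedding layer.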

UTF-8 Validation during Generation:

from transformers import AutoModelForCausalLM
from utf8_tokenizer import UTF8Tokenizer, UTF8ValidationLogitsProcessor

# Load your byte-level model
model = AutoModelForCausalLM.from_pretrained("your-model")
tokenizer = UTF8Tokenizer()

# Create the UTF-8 validation processor
utf8_processor = UTF8ValidationLogitsProcessor()

# Generate text with UTF-8 validation
input_text = "Hello"
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(
    input_ids,
    logits_processor=[utf8_processor],  # Ensures valid UTF-8 sequences
    max_new_tokens=100
)

# Decode the output
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)

The UTF8ValidationLogitsProcessor prevents byte-level models from generating malformed UTF-8 by masking invalid byte continuations during generation. This addresses the issue discussed in Firestone et al. (2024), where byte-level tokenizers can generate ill-formed UTF-8.
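The masking logic can be illustrated with a simplified, standalone check (it ignores overlong and surrogate restrictions, so the real processor is stricter): given the bytes emitted so far, which next-byte values keep the sequence well-formed?

```python
# Simplified sketch of UTF-8 next-byte validation (ignores overlong and
# surrogate restrictions). A logits processor built on this would set the
# logits of every byte outside the returned set to -inf.
def valid_next_bytes(prefix: bytes) -> set[int]:
    needed = 0  # continuation bytes (0b10xxxxxx) still owed
    for b in prefix:
        if needed:
            needed -= 1  # assume earlier bytes were valid continuations
        elif b >= 0xF0:
            needed = 3   # 4-byte sequence lead
        elif b >= 0xE0:
            needed = 2   # 3-byte sequence lead
        elif b >= 0xC0:
            needed = 1   # 2-byte sequence lead
    if needed:
        return set(range(0x80, 0xC0))  # must emit a continuation byte
    # Otherwise: ASCII, or a valid lead byte (0xC0-0xC1 and 0xF5+ are invalid).
    return set(range(0x80)) | set(range(0xC2, 0xF5))

# After the lead byte of "é" (0xC3), only continuation bytes are allowed.
assert valid_next_bytes("é".encode("utf-8")[:1]) == set(range(0x80, 0xC0))
```

This is exactly the failure mode the processor guards against: an unconstrained model may emit a multi-byte lead and then wander off to an ASCII byte, producing a sequence that no decoder accepts.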

Benchmark

Tokenization Speed

python experiments/benchmark.py

On a MacBook Pro with an Apple M4 Pro chip, just converting texts of 6 words in different languages to bytes (without wrapping them in tensors, creating attention masks, or padding) runs at 127.4k/sec.

Calling the ByT5 tokenizer runs at 6.2k/sec. Calling our new tokenizer through the __call__ path reaches 10.5k/sec, which is already somewhat faster.

Our optimized zero-copy version runs at 86.7k/sec; the remaining gap to the raw-ints number comes from padding the input ids into a properly padded tensor. This is a 14x speedup over the original tokenizer.
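As a rough sketch of what a zero-copy path can look like (written directly in PyTorch; the details are illustrative, not the package's implementation): view each encoded string as a uint8 tensor without copying the bytes, then pad the batch to a rectangle in a single step.

```python
import torch

# Illustrative zero-copy batching, not the package's implementation:
# torch.frombuffer views the byte buffer as a uint8 tensor without a copy,
# so the only per-batch cost is the final padding step.
texts = ["word", "or multiple"]
rows = [
    torch.frombuffer(bytearray(t.encode("utf-8")), dtype=torch.uint8)
    for t in texts
]
input_ids = torch.nn.utils.rnn.pad_sequence(
    rows, batch_first=True, padding_value=0
)
attention_mask = (input_ids != 0).long()
assert input_ids.shape == (2, 11)  # "or multiple" is 11 bytes
```

Padding with 0 is safe here because NUL is a C0 control byte and never occurs in normal text.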

Bit-Biased Byte Embedding

We train a small language model with and without bit-bias.

Our results reveal that bit-bias improves both loss and accuracy while increasing training time by about 1%. We hope that our bit-level embeddings module can be further optimized to minimize this training overhead.

Cite

If you use this code in your research, please consider citing the work:

@misc{moryossef2025utf8,
  title         = {Back to Bytes: Revisiting Tokenization Through {UTF-8}},
  author        = {Amit Moryossef and Clara Meister and Pavel Stepachev and Desmond Elliott},
  howpublished  = {\url{https://github.com/sign/utf8-tokenizer}},
  eprint        = {2510.16987},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL},
  url           = {https://arxiv.org/abs/2510.16987},
  year          = {2025}
}
