True UTF-8 tokenizer for byte-level models
Back to Bytes: Revisiting Tokenization Through UTF-8
The full writeup can be found in the paper.
This module provides a true byte-level tokenizer for text, encoding each string directly into its sequence of UTF-8 bytes (0-255).
Unlike ByT5Tokenizer, for example, UTF8Tokenizer is implemented from scratch and is much more efficient.
Other "byte-level" tokenizers usually add various extra "special tokens" (e.g., <pad>, <unk>, etc.),
which complicates the encoding and decoding logic and pushes token ids beyond 255.
Instead, we rely on the C0 control characters (0-31), which do not occur in normal text, as special tokens.
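For intuition, here is a plain-Python sketch of the idea. The specific control bytes chosen below (STX/ETX as begin/end markers) are illustrative only, not necessarily the ones the library uses:

```python
# Illustrative sketch (not the library's internals): UTF-8 encoding already
# yields token ids in 0-255, so no vocabulary lookup is needed.
text = "héllo"
ids = list(text.encode("utf-8"))
print(ids)  # [104, 195, 169, 108, 108, 111] - "é" spans two bytes

# C0 control characters (0-31) rarely appear in normal text, so they can
# double as special tokens without growing the vocabulary past 255.
# 0x02 (STX) and 0x03 (ETX) are used here purely as example markers.
BOS, EOS = 0x02, 0x03
wrapped = [BOS] + ids + [EOS]
assert all(0 <= i <= 255 for i in wrapped)

# Decoding is a lossless round-trip back through UTF-8.
assert bytes(ids).decode("utf-8") == text
```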
Usage
pip install utf8-tokenizer
Tokenization:
from utf8_tokenizer.tokenizer import UTF8Tokenizer
tokenizer = UTF8Tokenizer()
texts = ["word", "or multiple"]
print(tokenizer(texts))
Bit-biased byte embeddings:
from transformers import AutoModelForCausalLM
# Load example model
model = AutoModelForCausalLM.from_pretrained("sbintuitions/tiny-lm")
model.resize_token_embeddings(256)
from utf8_tokenizer.embeddings import patch_embedding_layers, join_embedding_layers
patch_embedding_layers(model) # Apply bit-bias for training
#
# Train your model...
#
join_embedding_layers(model) # Fold to a single embedding layer for inference
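To build intuition for what bit-bias could mean, here is a minimal PyTorch sketch assuming the bias is derived from each byte's 8-bit binary decomposition, so that bytes sharing bits share parameters. The class name and exact formulation are illustrative, not the library's implementation:

```python
import torch
import torch.nn as nn

class BitBiasedEmbedding(nn.Module):
    """Sketch: a per-byte embedding plus a bias from its bit decomposition."""

    def __init__(self, dim: int):
        super().__init__()
        self.byte_emb = nn.Embedding(256, dim)  # one vector per byte value
        self.bit_emb = nn.Embedding(8, dim)     # one vector per bit position

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        # bits[..., k] is 1 when bit k of the byte is set
        bits = (ids.unsqueeze(-1) >> torch.arange(8, device=ids.device)) & 1
        # sum the bit vectors for all set bits, then add the byte embedding
        return self.byte_emb(ids) + bits.float() @ self.bit_emb.weight

emb = BitBiasedEmbedding(dim=16)
out = emb(torch.tensor([[104, 105]]))  # shape (1, 2, 16)
```

Because the byte-to-vector mapping is fixed, all 256 outputs can be precomputed into a single embedding table for inference, which is the spirit of folding back into one embedding layer.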
Benchmark
Tokenization Speed
python experiments/benchmark.py
On a MacBook Pro with an Apple M4 Pro chip, simply converting texts of 6 words in different languages to bytes, without wrapping them in tensors, creating attention masks, or padding, runs at 127.4k texts/sec.
Calling the ByT5 tokenizer runs at 6.2k/sec.
Calling our new tokenizer through the standard __call__ path reaches 10.5k/sec, a modest improvement.
Our optimized zero-copy version reaches 86.7k/sec, a 14x speedup over the ByT5 tokenizer; the remaining gap to raw byte conversion comes from padding the input ids into a properly padded tensor.
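To make the "raw bytes" baseline concrete, here is a stdlib-only sketch of that measurement; the texts and iteration count are made up for illustration, and experiments/benchmark.py in the repository is the authoritative script:

```python
import time

# Illustrative inputs, not the benchmark's actual multilingual texts.
texts = ["six words in a short sentence", "sechs Wörter in einem kurzen Satz"]

def encode_raw(batch):
    # Raw byte conversion only: no tensors, attention masks, or padding.
    return [list(t.encode("utf-8")) for t in batch]

n = 10_000
start = time.perf_counter()
for _ in range(n):
    encode_raw(texts)
elapsed = time.perf_counter() - start
print(f"{n * len(texts) / elapsed / 1000:.1f}k texts/sec")
```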
Bit-Biased Byte Embedding
We train a small language model with and without bit-bias.
Our results show that bit-bias improves both loss and accuracy, while increasing training time by only about 1%. We hope the bit-level embeddings module can be further optimized to minimize this training overhead.
Cite
If you use this code in your research, please consider citing the work:
@misc{moryossef2025utf8,
  title={Back to Bytes: Revisiting Tokenization Through {UTF-8}},
  author={Moryossef, Amit},
  howpublished={\url{https://github.com/sign/utf8-tokenizer}},
  year={2025}
}