
🧰 The AutoTokenizer that TikToken always needed -- Load any tokenizer with TikToken now! ✨


AutoTikTokenizer


🚀 Accelerate your HuggingFace tokenizers by converting them to TikToken format with AutoTikTokenizer - get TikToken's speed while keeping HuggingFace's flexibility.

Features • Installation • Examples • Supported Models • Benchmarks • Sharp Bits • Citation

Key Features

  • 🚀 High Performance - Built on TikToken's efficient tokenization engine
  • 🔄 HuggingFace Compatible - Seamless integration with the HuggingFace ecosystem
  • 📦 Lightweight - Minimal dependencies, just TikToken and Huggingface-hub
  • 🎯 Easy to Use - Simple, intuitive API that works out of the box
  • 💻 Well Tested - Comprehensive test suite across supported models

Installation

Install autotiktokenizer from PyPI via the following command:

pip install autotiktokenizer

You can also install it from source with the following command:

pip install git+https://github.com/bhavnicksm/autotiktokenizer

Examples

This section provides a basic usage example of the project. Follow these simple steps to get started quickly.

# step 1: Import the library
from autotiktokenizer import AutoTikTokenizer

# step 2: Load the tokenizer
tokenizer = AutoTikTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")

# step 3: Enjoy the Inference speed 🏎️
text = "Wow! I never thought I'd be able to use Llama on TikToken"
encodings = tokenizer.encode(text)

# (Optional) step 4: Decode the outputs
text = tokenizer.decode(encodings)
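
Since the converted tokenizer is a standard TikToken encoding, TikToken's batched API should work as well. The snippet below is a small sketch of that; the texts and thread count are illustrative, and the availability of encode_batch on the returned object is an assumption based on the batched benchmarks further down.

# (Optional) step 5: Encode many texts at once with TikToken's batched API
texts = ["First document", "Second document", "Third document"]
batch_encodings = tokenizer.encode_batch(texts, num_threads=4)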

Supported Models

AutoTikTokenizer should ideally support ALL models on the HF Hub, but because of the vast diversity of models out there, we cannot test every single one. These are the models we have already validated and know AutoTikTokenizer works well with. If you have a model you wish to see here, raise an issue and we will validate and add it to the list. Thanks :)

  • GPT2
  • GPT-J Family
  • SmolLM Family: SmolLM2-135M, SmolLM2-360M, SmolLM2-1.7B, etc.
  • Llama 3 Family: Llama-3.2-1B-Instruct, Llama-3.2-3B-Instruct, Llama-3.1-8B-Instruct, etc.
  • DeepSeek Family: DeepSeek-V2.5, etc.
  • Gemma 2 Family: Gemma-2-2b-it, Gemma-2-9b-it, etc.
  • Mistral Family: Mistral-7B-Instruct-v0.3, etc.
  • Aya Family: Aya-23, Aya Expanse, etc.
  • BERT Family: BERT, RoBERTa, MiniLM, TinyBERT, DeBERTa, etc.

NOTE: Some models use unigram tokenizers, which are not supported by TikToken; 🧰 AutoTikTokenizer therefore cannot convert the tokenizers for such models. Models that use unigram tokenizers include T5, ALBERT, Marian, and XLNet.
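
A quick way to check this up front is to read the model type declared in a repository's tokenizer.json. Here is a minimal sketch, assuming the repository ships a tokenizer.json file; the helper function is ours, not part of the library.

# Inspect the tokenizer type before attempting a conversion
import json
from huggingface_hub import hf_hub_download

def tokenizer_model_type(repo_id: str) -> str:
    """Return the model type declared in tokenizer.json, e.g. 'BPE' or 'Unigram'."""
    path = hf_hub_download(repo_id=repo_id, filename="tokenizer.json")
    with open(path, encoding="utf-8") as f:
        return json.load(f)["model"]["type"]

print(tokenizer_model_type("gpt2"))  # 'BPE' -- anything reporting 'Unigram' falls under the note above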

Benchmarks

Benchmark results for tokenizing 1 billion tokens from the fineweb-edu dataset with the Llama 3.2 tokenizer on CPU (Google Colab):

Configuration   Processing Type   AutoTikTokenizer   HuggingFace     Speed Ratio
Single Thread   Sequential        14:58 (898s)       40:43 (2443s)   2.72x faster
Batch x1        Batched           15:58 (958s)       10:30 (630s)    0.66x slower
Batch x4        Batched           8:00 (480s)        10:30 (630s)    1.31x faster
Batch x8        Batched           6:32 (392s)        10:30 (630s)    1.62x faster
4 Processes     Parallel          2:34 (154s)        8:59 (539s)     3.50x faster

The table above shows that AutoTikTokenizer's tokenizer (TikToken) is 1.6-3.5x faster than HuggingFace's tokenizer under a fair comparison. (The one slower row, Batch x1, pits a single TikToken thread against HuggingFace's internally parallel batch encoding.) While it does not yet make the most optimal use of TikToken, it is still far faster than the stock solution you might otherwise be using.
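
If you want to sanity-check these numbers on your own hardware, a rough single-threaded comparison can be run as below. This is a minimal sketch, not the original benchmark script; the model id, corpus, and repetition count are placeholders.

# Rough single-threaded timing of both tokenizers on the same texts
import time
from autotiktokenizer import AutoTikTokenizer
from tokenizers import Tokenizer

texts = ["Wow! I never thought I'd be able to use Llama on TikToken"] * 10_000

tt_enc = AutoTikTokenizer.from_pretrained("gpt2")
hf_tok = Tokenizer.from_pretrained("gpt2")

start = time.perf_counter()
for t in texts:
    tt_enc.encode(t)
print(f"AutoTikTokenizer: {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
for t in texts:
    hf_tok.encode(t)
print(f"HuggingFace:      {time.perf_counter() - start:.2f}s")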

Sharp Bits

A known limitation of the repository is that it does not do any pre-processing or post-processing. This means that if a tokenizer (like MiniLM) expects all lower-case input, you need to lower-case the text manually; similarly, any spaces added during tokenization are not removed during decoding, so you need to handle them yourself.
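
In practice that means normalizing the text yourself, as in this small sketch (the model id is illustrative; lower-casing and trimming are the manual steps the paragraph above describes):

# Manual pre-processing (lower-casing) and post-processing (trimming)
from autotiktokenizer import AutoTikTokenizer

tokenizer = AutoTikTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")

text = "Hello World"
ids = tokenizer.encode(text.lower())      # pre-process: lower-case manually
decoded = tokenizer.decode(ids).strip()   # post-process: trim stray spaces yourself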

There might be more sharp bits to the repository which are unknown at the moment, please raise an issue if you encounter any!

Acknowledgement

Special thanks to HuggingFace and OpenAI for the open-source libraries that make this work possible. I hope they continue to support the developer ecosystem for LLMs in the future!

If you found this repository useful, give it a ⭐️! Thank You :)

Citation

If you use autotiktokenizer in your research, please cite it as follows:

@misc{autotiktokenizer,
    author = {Bhavnick Minhas},
    title = {AutoTikTokenizer},
    year = {2024},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/bhavnicksm/autotiktokenizer}},
}

