
Visualize the Hugging Face Byte-Pair Encoding (BPE) tokenizer's encoding process

Project description

Hugging Face Byte-Pair Encoding tokenizer visualizer library

The library helps you visualize how the encoding process unfolds in the Byte-Pair Encoding tokenizer algorithm when you pass in text content for tokenization.

Byte-Pair Encoding (BPE) was initially developed as a text-compression algorithm and was later used by OpenAI for tokenization when pre-training the GPT model. It is used by many Transformer models, including GPT, GPT-2, RoBERTa, BART, and DeBERTa.

Byte-Pair Encoding tokenization

BPE training starts by computing the unique set of words used in the corpus (after the normalization and pre-tokenization steps are completed), then building the vocabulary by taking all the symbols used to write those words.
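To make the training step concrete, here is a minimal sketch of BPE training in plain Python. This is illustrative only: the `train_bpe` function name, the toy corpus, and the merge count are invented for the example, and this is not the library's (or Hugging Face's) actual implementation.

```python
# Toy BPE training sketch (illustrative only, not the library's code).
# Words start as character sequences; the most frequent adjacent pair
# of symbols is merged repeatedly until num_merges is reached.
from collections import Counter

def train_bpe(word_freqs, num_merges):
    # Represent each word as a tuple of symbols (initially characters).
    splits = {w: tuple(w) for w in word_freqs}
    merges = []
    for _ in range(num_merges):
        # Count the frequency of every adjacent symbol pair in the corpus.
        pair_freqs = Counter()
        for word, freq in word_freqs.items():
            symbols = splits[word]
            for a, b in zip(symbols, symbols[1:]):
                pair_freqs[(a, b)] += freq
        if not pair_freqs:
            break
        best = max(pair_freqs, key=pair_freqs.get)
        merges.append(best)
        # Apply the winning merge everywhere it occurs.
        for word, symbols in splits.items():
            merged, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    merged.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    merged.append(symbols[i])
                    i += 1
            splits[word] = tuple(merged)
    return merges

# Tiny made-up corpus of word frequencies.
corpus = {"hug": 10, "pug": 5, "hugs": 5}
print(train_bpe(corpus, 3))  # first merges: ('u', 'g') then ('h', 'ug')
```

Each learned merge rule becomes a new vocabulary entry; at encoding time the same rules are replayed, in order, on the input text.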

More about the algorithm here


Visualizing the Tokenization process

During tokenization, the input content is encoded into token IDs using the trained BPE tokenizer. During training, token pairs are merged into new tokens with their own IDs based on how frequently they occur in the training corpus.

This library helps visualize what the merging process looks like for a given string to be encoded. It generates a graph whose nodes are tokens or characters; when a pair of tokens is merged, the corresponding nodes are connected by directed edges.
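The graph construction can be sketched as follows. This is an assumption about how such a merge graph is built, not the library's internals: given an ordered merge list, each merge adds directed edges from the two child nodes to the node created by joining them (the `merge_edges` helper and its edge orientation are invented for illustration).

```python
# Illustrative sketch (not the library's internals): replay an ordered
# list of BPE merges on a word and record a directed edge from each
# child token to the merged token it becomes part of.
def merge_edges(word, merges):
    symbols = list(word)
    edges = []
    for a, b in merges:
        i = 0
        while i < len(symbols) - 1:
            if symbols[i] == a and symbols[i + 1] == b:
                new = a + b
                edges.append((a, new))   # left child  -> merged node
                edges.append((b, new))   # right child -> merged node
                symbols[i:i + 2] = [new]
            else:
                i += 1
    return symbols, edges

tokens, edges = merge_edges("hug", [("u", "g"), ("h", "ug")])
print(tokens)  # ['hug']
print(edges)   # [('u', 'ug'), ('g', 'ug'), ('h', 'hug'), ('ug', 'hug')]
```

The resulting edge list is exactly what a graph library (e.g. Graphviz) needs to render the merge tree for the encoded string.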

Using the Library

from hf_tokenizer import HfBPETokenizerVisualizer

visualizer = HfBPETokenizerVisualizer(
    pretrained_model_name="gpt2",                # pretrained BPE model to load
    save_visualization=True,                     # write the generated graph to disk
    file_type="png",
    file_name="bpe_tokenization_visualization",
    enable_debug=True,                           # verbose output while encoding
)

encoded_ids = visualizer.encode("hello world")
print(encoded_ids)

Generated output graph


Project details


Download files

Download the file for your platform.

Source Distribution

hf_tokenizer_visualizer-0.0.1.tar.gz (3.4 MB)


Built Distribution


hf_tokenizer_visualizer-0.0.1-py3-none-any.whl (4.9 kB)


File details

Details for the file hf_tokenizer_visualizer-0.0.1.tar.gz.

File metadata

  • Download URL: hf_tokenizer_visualizer-0.0.1.tar.gz
  • Upload date:
  • Size: 3.4 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.1

File hashes

Hashes for hf_tokenizer_visualizer-0.0.1.tar.gz:

  • SHA256: 7ccc5e3ca9e5ab1051bfddb01616165aee34e25ec7b2e42728fe66b1866e9e01
  • MD5: f556d34ee7a337052a7b711f68ca290b
  • BLAKE2b-256: 4499ef906159e3b083448ed130fb16e80d480993935f433869e8b81e6bea7c6d


File details

Details for the file hf_tokenizer_visualizer-0.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for hf_tokenizer_visualizer-0.0.1-py3-none-any.whl:

  • SHA256: 94317502f4a880e8bd1bc9a05e77030ae6ca1bf27910989720efbd5faca9e8fc
  • MD5: e4581ad4331e5c803ad53150588997a0
  • BLAKE2b-256: 17e91a01981e284080d65457af85892f8911d8804f01e6f5fa9c06a49b5bdfb8

