
A custom tokeniser with a 131,072-token vocabulary derived from 0.5B (validation) and 1B (validation + test) tokens of SlimPajama. It uses a novel token-generation algorithm and a dynamic programming-based segmentation method for fast, interpretable tokenisation, and can also be used for tokenisation with custom token maps.

Project description

📄 README.md

🧠 Custom Tokeniser Library

A high-performance, fully custom tokeniser built from scratch — no BPE, no existing NLP tokenisation scheme. This tokeniser is based on a unique algorithm developed independently and trained on over 1 billion tokens from the SlimPajama dataset (Val + Test), providing an efficient, interpretable, and extendable tokenisation pipeline.

🚀 What This Library Offers

  • Tokeniser built on a vocabulary of 131,072 tokens
  • Two versions of vocab:
    • 0.5B: Validation-only data
    • 1B: Validation + Test data
  • Token vocab built via a custom algorithm — no Byte Pair Encoding (BPE)
  • Tokenisation logic includes:
    • Token lookup from pre-generated token map
    • Dynamic programming-based segmentation for out-of-vocab tokens
    • One-hot encoding (NumPy or PyTorch)
    • Visualisation utilities for tokens and token IDs
  • Lightweight JSON format for token maps & token count maps
  • Ready for integration into any LLM pre-tokenisation pipeline
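The dynamic programming segmentation mentioned above can be pictured with a minimal stand-alone sketch. This is an illustration of DP segmentation in general, not the library's actual implementation; the `dp_segment` function, the `vocab` set, and the `max_len` bound are all made up for the example:

```python
def dp_segment(text, vocab, max_len):
    """Split text into the fewest in-vocab pieces via dynamic programming.

    best[i] = minimal number of tokens needed to cover text[:i];
    a single out-of-vocab character is allowed as a fallback.
    """
    n = len(text)
    best = [0] + [float("inf")] * n   # best[i]: min tokens for prefix of length i
    back = [0] * (n + 1)              # back[i]: start index of the last token
    for i in range(1, n + 1):
        for j in range(max(0, i - max_len), i):
            piece = text[j:i]
            # accept known tokens, or a single char as an OOV fallback
            if piece in vocab or i - j == 1:
                if best[j] + 1 < best[i]:
                    best[i] = best[j] + 1
                    back[i] = j
    # reconstruct the segmentation by walking the backpointers
    tokens, i = [], n
    while i > 0:
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

print(dp_segment("unhappiness", {"un", "happi", "ness", "happiness"}, 9))
# → ['un', 'happiness']
```

Bounding the inner loop by the longest token in the vocabulary keeps the whole pass at O(n · max_len) lookups, which is what makes this kind of segmentation fast in practice.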

Note: Files chunked to under 2 GB are stored on Hugging Face rather than GitHub, due to GitHub's LFS file-size constraints; files chunked to under 100 MB are available on GitHub.

📦 Installation

pip install tokeniser-py

🛠 Usage

from tokeniser import Tokeniser

t = Tokeniser()  # default parameters load the bundled token map
tokens, count = t.tokenise("Your input text here.")  # segmented tokens and their count
token_ids = t.token_ids(tokens)  # map tokens to vocabulary IDs

Use t.one_hot_tokens(token_ids) for NumPy-based one-hot encoding, or pass op='torch' for PyTorch.
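What the NumPy path produces can be shown with a tiny stand-alone sketch. This is an illustration of one-hot encoding in general, not the library's internals; the `one_hot` helper, the example IDs, and the small vocabulary size are made up:

```python
import numpy as np

def one_hot(token_ids, vocab_size):
    """Return a (len(token_ids), vocab_size) one-hot matrix."""
    out = np.zeros((len(token_ids), vocab_size), dtype=np.float32)
    # integer-array indexing sets one 1.0 per row at that row's token ID
    out[np.arange(len(token_ids)), token_ids] = 1.0
    return out

m = one_hot([3, 0, 2], vocab_size=5)
print(m.shape)  # → (3, 5)
```

With the real 131,072-token vocabulary each row has 131,072 columns, so for long inputs a dense one-hot matrix grows quickly; it is mainly useful as an interchange format for downstream model code.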

📚 Data Sources

All token maps and token counts are generated from the SlimPajama dataset by Cerebras.

📁 Vocab Files

  • ordered_tokenizer_1b_val_test_data.json — Ordered tokens (1B data)
  • unordered_tokenizer_1b_val_test_data.json — Unordered tokens (1B)
  • count_tokenizer_1b_val_test_data.json — Token counts (1B)
  • (The 0.5B validation-only version follows the same naming structure.)
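Loading one of these JSON files is plain `json` work. A hedged sketch follows; the exact key/value layout of the files is an assumption (a flat token-to-number mapping), so inspect a real file before relying on it, and note the example data below is an in-memory stand-in, not real SlimPajama counts:

```python
import json

def load_token_map(path):
    """Load a token map (token -> id or token -> count) from a JSON vocab file."""
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)

# Tiny in-memory stand-in for one of the real multi-GB count files:
sample = json.loads('{"the": 41230, "quick": 512, "brown": 318}')
print(len(sample), max(sample, key=sample.get))  # → 3 the
```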

📌 Design Philosophy

This tokeniser was built from scratch, before studying existing algorithms such as BPE, with the intent to understand, innovate, and compare with existing solutions from first principles.

Some parts may overlap with BPE/WordPiece in spirit — but the core algorithm was independently designed.

🤝 Contributions

Feel free to contribute anything via GitHub.

📖 License

MIT License

📦 Changelog

[0.1.0] - 2025-03-22

Added

  • Initial release of custom tokeniser library
  • Tokeniser class with support for:
    • tokenise() using DP segmentation
    • Custom token map and count map loading
    • One-hot encoding support (NumPy & PyTorch)
    • Token and token ID visualisation functions
    • token_map(), token_count_map(), max_token_length() accessors
  • Full support for:
    • 0.5B val-only vocab
    • 1B val + test vocab
  • JSON-based token and count maps from SlimPajama corpus

[0.1.1] - 2025-03-22

Changed

  • Updated the default import example in the README to show class-instance creation with default parameters.

[0.1.2] - 2025-03-22

Changed

  • Pointed the project URL at the actual GitHub repository.

[0.1.3] - 2025-04-04

Changed

  • Replaced the generic raised Exception with a ValueError, for a clearer error message.
  • Corrected the licence year to 2025.

Notes

  • Built on top of a custom token creation algorithm not based on any standard BPE/WordPiece method
  • SlimPajama dataset used for vocab extraction
  • Token count files are optimised to stay under 2 GB for compatibility with Git LFS (and Hugging Face storage)

Download files

Download the file for your platform.

Source Distribution

tokeniser-py-0.1.3.tar.gz (4.9 MB)


Built Distribution


tokeniser_py-0.1.3-py3-none-any.whl (4.9 MB)


File details

Details for the file tokeniser-py-0.1.3.tar.gz.

File metadata

  • Download URL: tokeniser-py-0.1.3.tar.gz
  • Size: 4.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.8

File hashes

Hashes for tokeniser-py-0.1.3.tar.gz

  • SHA256: 3be221727ece1895ecc3cdf25e7cd0cd7e0216d6020c776e3c7e72aa48e97b26
  • MD5: cba077f64eda5c09b00302951e62a56a
  • BLAKE2b-256: 745e9a94af5fe070abce77dd2b32698c4db7516330699b2ce84a5ef3fc47fd34


File details

Details for the file tokeniser_py-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: tokeniser_py-0.1.3-py3-none-any.whl
  • Size: 4.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.8

File hashes

Hashes for tokeniser_py-0.1.3-py3-none-any.whl

  • SHA256: 2085553cea8e05c73c635b138bc8c1b0a2250c6e79cb8b815ccb99d595378943
  • MD5: a0c5678b5f907930e058a02f49f5d9ee
  • BLAKE2b-256: 9214123bf3c010d90cb99da36e44a26194014fdd068203fc5698d1acb9132b4e

