A custom tokeniser with a 131,072-token vocabulary derived from 0.5B (validation) and 1B (validation + test) tokens of SlimPajama. It uses a novel token-generation algorithm and dynamic-programming-based segmentation for fast, interpretable tokenisation, and can also tokenise against custom token maps.

Project description

📄 README.md

🧠 Custom Tokeniser Library

A high-performance, fully custom tokeniser built from scratch, with no BPE and no other existing NLP tokenisation scheme. It is based on a unique, independently developed algorithm and trained on over 1 billion tokens from the SlimPajama dataset (validation + test splits), providing an efficient, interpretable, and extendable tokenisation pipeline.

🚀 What This Library Offers

  • Tokeniser built on a vocabulary of 131,072 tokens
  • Two versions of vocab:
    • 0.5B: Validation-only data
    • 1B: Validation + Test data
  • Token vocab built via a custom algorithm — no Byte Pair Encoding (BPE)
  • Tokenisation logic includes:
    • Token lookup from pre-generated token map
    • Dynamic programming-based segmentation for out-of-vocab tokens
    • One-hot encoding (NumPy or PyTorch)
    • Visualisation utilities for tokens and token IDs
  • Lightweight JSON format for token maps & token count maps
  • Ready for integration into any LLM pre-tokenisation pipeline
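
The dynamic-programming segmentation mentioned above can be sketched in a few lines. This is an illustrative minimal version, not the library's implementation; `dp_segment`, `vocab`, and `max_len` are hypothetical names, and the toy vocabulary is invented for the example:

```python
def dp_segment(text, vocab, max_len):
    """Split `text` into the fewest vocabulary tokens via dynamic programming.

    best[i] = minimal token count for text[:i]; back[i] remembers where the
    last token of that optimal segmentation starts.
    """
    n = len(text)
    best = [0] + [float("inf")] * n
    back = [0] * (n + 1)
    for i in range(1, n + 1):
        # Try every candidate token ending at position i.
        for j in range(max(0, i - max_len), i):
            if text[j:i] in vocab and best[j] + 1 < best[i]:
                best[i] = best[j] + 1
                back[i] = j
    if best[n] == float("inf"):
        raise ValueError("text cannot be segmented with this vocabulary")
    # Walk the back-pointers to recover the token sequence.
    tokens = []
    i = n
    while i > 0:
        tokens.append(text[back[i]:i])
        i = back[i]
    return tokens[::-1]

vocab = {"un", "believ", "able", "a", "b", "l", "e", "u", "n"}
print(dp_segment("unbelievable", vocab, max_len=7))  # ['un', 'believ', 'able']
```

Because the DP explores every in-vocabulary substring ending at each position, it finds an optimal (fewest-token) segmentation rather than a greedy longest-match one.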

Note: Files chunked to under 2 GB are stored on Hugging Face rather than GitHub, due to GitHub's LFS file-size constraints. Copies chunked to under 100 MB are available on GitHub.

📦 Installation

pip install tokeniser-py

🛠 Usage

from tokeniser import Tokeniser

t = Tokeniser()
tokens, count = t.tokenise("Your input text here.")
token_ids = t.token_ids(tokens)

Use t.one_hot_tokens(token_ids) for NumPy-based one-hot encoding, or pass op='torch' for PyTorch tensors.
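
One-hot encoding over a fixed vocabulary is straightforward to reproduce; the NumPy sketch below is independent of the library's `one_hot_tokens` signature (the `one_hot` function name and its parameters are illustrative only):

```python
import numpy as np

def one_hot(token_ids, vocab_size=131_072):
    """Return a (len(token_ids), vocab_size) one-hot matrix as uint8."""
    out = np.zeros((len(token_ids), vocab_size), dtype=np.uint8)
    # Integer fancy indexing sets one cell per row: row k, column token_ids[k].
    out[np.arange(len(token_ids)), token_ids] = 1
    return out

m = one_hot([5, 2, 7], vocab_size=10)
print(m.shape)  # (3, 10)
```

At the full 131,072-token vocabulary a dense one-hot matrix grows quickly (one row per token, 131,072 columns), so in practice token IDs are usually kept as integers and one-hot expansion is deferred or done sparsely.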

📚 Data Sources

All token maps and token counts are generated from the SlimPajama dataset by Cerebras.

📁 Vocab Files

  • ordered_tokenizer_1b_val_test_data.json — Ordered tokens (1B data)
  • unordered_tokenizer_1b_val_test_data.json — Unordered tokens (1B)
  • count_tokenizer_1b_val_test_data.json — Token counts (1B)
  • (Similar structure for 0.5B val-only version)
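
Since the vocab files are plain JSON, a token map round-trips through the standard library with no special tooling. The sketch below uses a toy three-token map and a temporary file; the real files listed above map the full 131,072-token vocabulary, and their exact key/value layout should be checked against the files themselves:

```python
import json
import os
import tempfile

# Toy token map (token -> id); the real JSON files are far larger.
token_map = {"the": 0, "quick": 1, "brown": 2}

# Write and re-read the map exactly as the library's JSON vocab files would be.
path = os.path.join(tempfile.mkdtemp(), "token_map.json")
with open(path, "w", encoding="utf-8") as f:
    json.dump(token_map, f)

with open(path, encoding="utf-8") as f:
    loaded = json.load(f)

print(len(loaded))  # 3
```

JSON keeps the maps human-inspectable and easy to diff, at the cost of some load time compared with a binary format.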

📌 Design Philosophy

This tokeniser was built from scratch, before studying existing algorithms such as BPE. It is designed with the intent to understand, innovate, and compare with existing solutions from first principles.

Some parts may overlap with BPE or WordPiece in spirit, but the core algorithm was independently designed.

🤝 Contributions

Contributions of any kind are welcome via the GitHub repository.

📖 License

MIT License

📄 CHANGELOG

[0.1.0] - 2025-03-22

Added

  • Initial release of custom tokeniser library
  • Tokeniser class with support for:
    • tokenise() using DP segmentation
    • Custom token map and count map loading
    • One-hot encoding support (NumPy & PyTorch)
    • Token and token ID visualisation functions
    • token_map(), token_count_map(), max_token_length() accessors
  • Full support for:
    • 0.5B val-only vocab
    • 1B val + test vocab
  • JSON-based token and count maps from SlimPajama corpus

[0.1.1] - 2025-03-22

Changed

  • Updated the default import example in the README to show class instantiation with default parameters.

[0.1.2] - 2025-03-22

Changed

  • Corrected the project URL to point to the actual GitHub repository.

Notes

  • Built on top of a custom token-creation algorithm, not based on any standard BPE/WordPiece method
  • SlimPajama dataset used for vocab extraction
  • Token count files are optimised to stay under 2 GB for compatibility with Git LFS (and Hugging Face storage)

Download files

Download the file for your platform.

Source Distribution

tokeniser-py-0.1.2.tar.gz (4.9 MB)


Built Distribution


tokeniser_py-0.1.2-py3-none-any.whl (4.9 MB)


File details

Details for the file tokeniser-py-0.1.2.tar.gz.

File metadata

  • Download URL: tokeniser-py-0.1.2.tar.gz
  • Upload date:
  • Size: 4.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.8

File hashes

Hashes for tokeniser-py-0.1.2.tar.gz
  • SHA256: 2ee69b8f4058f08c256ca6dbc5e5a8d60f5c40c979a270bd3765c10ed82ff862
  • MD5: 938c2d972bb77e014de7731d42efa55e
  • BLAKE2b-256: a0db665741841f4ddc8dd054e668c0649ca05dda3f7e32d50166d39e6b8d7c1c


File details

Details for the file tokeniser_py-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: tokeniser_py-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 4.9 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.10.8

File hashes

Hashes for tokeniser_py-0.1.2-py3-none-any.whl
  • SHA256: c54ebf55e28eef438fda3464434cfdaadb3f32feb01442e4373a3247514ce497
  • MD5: a42b7873db67fd9dc47ab5702e5141db
  • BLAKE2b-256: f9055cfd9cbaab6658521833f26c5dafcb541a1a9c42ae4d22761383bc104207

