Skip to main content

Compact Unicode Token Encoding — a code-aware tokenizer that compresses sequences 35-45% with zero accuracy loss

Project description

CUTE Tokenizer Mascot

🐭 CUTE Tokenizer

Compact Unicode Token Encoding

— a tokenizer that nibbles your token costs —

Python 3.10+ License: MIT HuggingFace Compatible PyPI version CI


✨ Highlights

CUTE shrinks code sequences by 35–45% through a two-stage tokenization strategy:

  • Pre-encoding via Private-Use-Area Unicode — maps the most frequent words, operators, and identifier sub-parts to single compact characters
  • Residual byte-level BPE — handles everything else with standard subword tokenization

The result:

  • Faster inference — fewer tokens mean shorter sequence lengths and reduced latency
  • 💰 Lower API costs — pay for up to 45% fewer tokens per request
  • 🔁 Perfectly lossless round-trip — encode and decode with zero information loss

🧀 Quick Start

pip install cute-tokenizer

Train your own:

# Drop a few repos into ./corpus/, then:
cute build --corpus ./corpus --output ./output

Use it like any HF tokenizer:

from cute_tokenizer import CUTETokenizerFast

tok = CUTETokenizerFast(
    tokenizer_file="./output/tokenizer.json",
    cute_mapping_file="./output/cute_mapping.json",
)

ids = tok("def hello(): return 42", add_special_tokens=False).input_ids
text = tok.decode(ids, skip_special_tokens=True)
assert text == "def hello(): return 42"  # always lossless

Or via AutoTokenizer (after pushing to HF Hub):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("user/cute-py", trust_remote_code=True)

🔍 How It Works

  1. Count & select — scan code, count tokens with identifier sub-part boosting, take the smallest set covering 90% of the corpus.
  2. Assign PUA chars — map each chosen token to a unique Unicode Private-Use-Area codepoint, starting at U+E000. Skip codepoints that already appear in the corpus.
  3. Pre-tokenize — at encode time, substitute mapped tokens with their PUA chars (Aho-Corasick, O(n) in input length).
  4. BPE the rest — feed the residual through a standard byte-level BPE. The PUA chars are atomic vocab entries; they never get further split.
  5. Decode — the byte-level decoder reconstructs the substituted string; reverse-substitution restores the original text.

Round-trip is byte-equal for any input. We test this with Hypothesis on arbitrary Unicode plus a hand-curated corner-case suite (ZWJ emoji, BOM, control chars, mixed scripts, deep nesting, etc.).


📦 Project Layout

src/cute_tokenizer/
  config.py         # CUTEConfig — all knobs in one place
  patterns.py       # token regex + identifier splitter (uses `regex` module)
  corpus.py         # streaming ingest, dedup, secret scrub, sharding
  frequency.py      # parallel multiprocess counting
  selection.py      # coverage-based + quality-filtered token selection
  pua.py            # Private-Use-Area codepoint allocator
  pretokenizer.py   # CUTEPreTokenizer (Aho-Corasick + identifier splitting)
  trainer.py        # build_cute() — orchestrates the full pipeline
  decode.py         # PUA-aware reverse substitution
  tokenizer.py      # CUTETokenizerFast (PreTrainedTokenizerFast)
  manifest.py       # build manifest for reproducibility
  cli.py            # `cute build`, `cute roundtrip-check`, `cute info`

tests/
  unit/             # ~140 unit tests
  property/         # Hypothesis round-trip tests
  integration/      # full pipeline E2E

benchmarks/
  compression.py    # CUTE vs tiktoken/GPT-2/CodeLlama
  latency.py        # encode/decode μs per KB

⚙️ Configuration

from cute_tokenizer import CUTEConfig, build_cute

config = CUTEConfig(
    vocab_size=80_000,        # total token IDs
    coverage_target=0.90,     # PUA coverage of total frequency
    max_token_len=50,         # ignore tokens longer than this
    boost_weight=0.3,         # identifier sub-part boost
    min_bpe_budget=8_000,     # minimum learnable merges
    seed=42,                  # determinism
    workers=0,                # 0 = os.cpu_count()
    enable_secret_scrub=True, # drop files containing API keys etc.
)
build_cute("./corpus", "./output", config)

🧪 Testing

pip install -e .[dev]
pytest tests/unit          # fast unit tests
pytest tests/property      # Hypothesis round-trip
pytest tests/integration   # full E2E build (slower)
pytest --cov=cute_tokenizer

The Hypothesis suite runs ~600+ generated test cases per round-trip property, plus a hand-picked corner-case parametrize covering: empty strings, BOM, ZWJ emoji, control chars, multi-script text, deep underscores, and more.


🔐 Production Hardening

  • Determinism: same corpus + config → same vocab hash. Verified by tests/integration/test_determinism.py.
  • Secret scrubbing: corpus files matching AWS/OpenAI/Anthropic/GitHub key patterns are dropped before vocab construction.
  • Build manifest: every build emits build_manifest.json recording config, corpus hash, vocab hash, library versions, and timing.
  • PUA collision detection: codepoints found in the corpus are skipped during assignment, so user content cannot be confused with our injection.
  • Type-checked: mypy --strict clean.
  • Lint clean: ruff check and ruff format.

📊 Benchmarks

python -m benchmarks.compression --tokenizer ./output --holdout ./holdout
python -m benchmarks.latency --tokenizer ./output

Expected (on a 100 GB Python/TS holdout):

Metric CUTE vs byte-level BPE
Sequence length (mean) 35–45% shorter
Sequence length (p95) 30–40% shorter
Sequence length (p99) 25–35% shorter
Bytes per token (mean) 📈 +50–70%
Round-trip correctness 100% (Hypothesis-verified)
Training throughput (LLM) +25–35%
Inference latency (LLM) −25–40%
API token cost 💰 −30–45%
KV-cache memory at inference 💾 −35–45%
Effective context window (text per token) 📏 +55–80%
Encode latency (tokenizer itself) 🐢 ~1.5× tiktoken (Python pre-tok overhead)

Run the benchmarks on your own corpus to see numbers for your distribution.


🐭 Why a Mouse?

A mouse is small, fast, and nibbles things to size. CUTE quietly chews through your token bill while you focus on the model. The cheese is the 30–45% cost reduction.


📜 License

MIT. See LICENSE.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

cute_tokenizer-0.1.1.tar.gz (711.3 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

cute_tokenizer-0.1.1-py3-none-any.whl (31.0 kB view details)

Uploaded Python 3

File details

Details for the file cute_tokenizer-0.1.1.tar.gz.

File metadata

  • Download URL: cute_tokenizer-0.1.1.tar.gz
  • Upload date:
  • Size: 711.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: Hatch/1.16.5 cpython/3.13.1 HTTPX/0.28.1

File hashes

Hashes for cute_tokenizer-0.1.1.tar.gz
Algorithm Hash digest
SHA256 6d71b502643a214b3fb68f97004001d37da31800cfe49bdccd689062408452aa
MD5 c44214900790667beef17eafdc558f8c
BLAKE2b-256 695fee0ca5c0b8374848b64786841175326f0c76b166a962af3ebf0b0c627a2f

See more details on using hashes here.

File details

Details for the file cute_tokenizer-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: cute_tokenizer-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 31.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: Hatch/1.16.5 cpython/3.13.1 HTTPX/0.28.1

File hashes

Hashes for cute_tokenizer-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 9ca3cc7f8fe7d03738844c32c7c77b7657243a1195fb906a65a471edc80b5cd1
MD5 d4aaa0413cd63f4aca263b365bde2f35
BLAKE2b-256 f8dfaf80b1b0da0661eaa4c50d2c7773670b7179e4a142a48a5a4385269d1a30

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page