Text tokenizers.
totokenizers
A model-agnostic library to encode text into tokens and count them using different tokenizers.
install
pip install totokenizers
usage
from totokenizers.factories import TotoModelInfo, Totokenizer

model = "openai/gpt-3.5-turbo-0613"
desired_max_tokens = 250

tokenizer = Totokenizer.from_model(model)
model_info = TotoModelInfo.from_model(model)

# `thread` is your ChatML message list and `functions` your function
# schemas, both defined elsewhere in your application.
thread_length = tokenizer.count_chatml_tokens(thread, functions)
if thread_length + desired_max_tokens > model_info.max_tokens:
    raise YourException(thread_length, desired_max_tokens, model_info.max_tokens)
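The budget check above can be sketched in plain Python without the library installed. This is a minimal stand-in, not totokenizers' implementation: `WordCountTokenizer` is a hypothetical tokenizer that counts whitespace-separated words instead of real model tokens, and `BudgetError` stands in for your own exception type. The shape of the check (measured thread length plus desired completion budget must fit the model's context window) is the same.

```python
class WordCountTokenizer:
    """Hypothetical stand-in tokenizer: counts words, not real tokens."""

    def count_chatml_tokens(self, thread):
        # Sum a rough "token" count over every message in the thread.
        return sum(len(message["content"].split()) for message in thread)


class BudgetError(Exception):
    """Raised when the thread plus the desired completion exceeds the window."""


def check_budget(thread, desired_max_tokens, context_window):
    tokenizer = WordCountTokenizer()
    thread_length = tokenizer.count_chatml_tokens(thread)
    if thread_length + desired_max_tokens > context_window:
        raise BudgetError(thread_length, desired_max_tokens, context_window)
    return thread_length


thread = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize the report in one line."},
]
print(check_budget(thread, desired_max_tokens=10, context_window=4096))  # 11 "tokens"
```

Swapping `WordCountTokenizer` for a real `Totokenizer` instance is the only change needed to make this check model-accurate.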
Download files
Source distribution: totokenizers-1.2.5.1.tar.gz (755.2 kB)
Built distribution: totokenizers-1.2.5.1-py3-none-any.whl (765.7 kB)
Hashes for totokenizers-1.2.5.1-py3-none-any.whl
SHA256: 756c8d5e58fd7affd2efec0289dca849ff953dffc8336410db71586ab06a1cd9
MD5: 5b13069539d5f3593a0ddd9674adf7ba
BLAKE2b-256: e18e4c0c0a84a122fc4c00bbc1c0d9b31468f996575917a4b10c31862659c284