Count the number of tokens in text files using OpenAI's tiktoken tokenizer
Count tokens
A versatile tool for counting tokens in text files, directories, and strings with support for streaming large files, batching, and more.
Requirements
This package uses the tiktoken library for tokenization.
Installation
For command-line usage, install the package in an isolated environment with pipx:
pipx install count-tokens
or run it with uv without installing:
uvx count-tokens document.txt
or install it into your current environment with pip:
pip install count-tokens
Usage
Basic Usage
Open terminal and run:
count-tokens document.txt
You should see something like this:
File: document.txt
Encoding: cl100k_base
Number of tokens: 67
If you want to see just the token count, run:
count-tokens document.txt --quiet
and the output will be:
67
To use count-tokens with an encoding other than the default cl100k_base, pass the -e (or --encoding) argument:
count-tokens document.txt -e r50k_base
NOTE: tiktoken supports four encodings used by OpenAI models:

| Encoding name | OpenAI models |
|---|---|
| o200k_base | gpt-4o, gpt-4o-mini |
| cl100k_base | gpt-4, gpt-3.5-turbo, text-embedding-ada-002 |
| p50k_base | Codex models, text-davinci-002, text-davinci-003 |
| r50k_base (or gpt2) | GPT-3 models like davinci |

(source: OpenAI Cookbook)
Directory Processing
Process all files in a directory matching specific patterns:
count-tokens -d ./docs -p "*.md,*.txt"
If -p is not specified, the default patterns are *.txt,*.py,*.md.
Process directories recursively:
count-tokens -d ./project -r -p "*.py"
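Conceptually, this kind of pattern matching is just globbing for each pattern under the directory. A minimal sketch with pathlib (the function name and signature are illustrative assumptions, not the package's actual internals):

```python
from pathlib import Path

def find_files(directory, patterns=("*.txt", "*.py", "*.md"), recursive=False):
    # Mirrors the CLI options: -p patterns, -r recursion (illustrative only)
    root = Path(directory)
    glob = root.rglob if recursive else root.glob
    matches = set()
    for pattern in patterns:
        matches.update(p for p in glob(pattern) if p.is_file())
    return sorted(matches)
```

Each matched file would then be tokenized individually, exactly as in the single-file case.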
Large File Support
Use streaming mode for large files to avoid memory issues:
count-tokens large_file.txt --stream
Customize chunk size for streaming (default is 1MB):
count-tokens large_file.txt --stream --chunk-size 2097152
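Conceptually, streaming mode reads the file in fixed-size chunks and sums per-chunk token counts, so the whole file never has to fit in memory. A minimal sketch with a pluggable counting function (the actual implementation may differ):

```python
def count_tokens_streaming(path, count_chunk, chunk_size=1024 * 1024):
    # Read fixed-size chunks and sum per-chunk counts; only one chunk
    # is held in memory at a time.
    total = 0
    with open(path, encoding="utf-8") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            total += count_chunk(chunk)
    return total
```

In practice the counter would be tiktoken, e.g. `enc = tiktoken.get_encoding("cl100k_base")` and `count_chunk = lambda s: len(enc.encode(s))`. Note that a token straddling a chunk boundary can be split and counted twice, so very small chunk sizes may slightly overestimate.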
Output Formats
Get results in different formats:
# JSON format
count-tokens -d ./docs -p "*.md" --format json
# CSV format
count-tokens -d ./docs -p "*.md" --format csv
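If you post-process per-file results yourself, producing the same two shapes is straightforward; a sketch (the exact field names emitted by the CLI are not shown here, so treat these as assumptions):

```python
import csv
import io
import json

def format_results(results, fmt="json"):
    # results: {file_path: token_count}; output layout is an assumption
    if fmt == "json":
        return json.dumps(results, indent=2)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["file", "tokens"])
    for path, count in sorted(results.items()):
        writer.writerow([path, count])
    return buf.getvalue()
```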
Token Limit Checking
Check if files exceed a specific token limit:
count-tokens document.txt --max-tokens 4096
When files exceed the limit, you'll see a warning:
File: document.txt
Encoding: cl100k_base
⚠️ Token limit exceeded: 5120 > 4096
Number of tokens: 5120
Approximate number of tokens
If you need results faster and don't need the exact number of tokens, use the --approx parameter with w for an approximation based on the number of words, or c for one based on the number of characters.
count-tokens document.txt --approx w
It is based on the assumption that there are 4/3 (about 1.33) tokens per word and 4 characters per token.
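The estimate itself is plain arithmetic over word or character counts; a sketch of the rule (the mode letters match the CLI's --approx values, the function name is illustrative):

```python
def approx_tokens(text, mode="w", tokens_per_word=4 / 3, characters_per_token=4.0):
    # "w": word count * tokens_per_word; "c": character count / characters_per_token
    if mode == "w":
        return round(len(text.split()) * tokens_per_word)
    return round(len(text) / characters_per_token)
```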
Adjusting estimation rules
You can customize the rules used for token estimation by adjusting the default values for tokens per word and characters per token ratios:
# Adjust the tokens per word ratio (default is 1.33)
count-tokens document.txt --approx w --tokens-per-word 1.5
# Adjust the characters per token ratio (default is 4.0)
count-tokens document.txt --approx c --characters-per-token 3.5
These options allow you to fine-tune the approximation based on your specific content characteristics.
Programmatic usage
Simple API
The package also provides a simplified API for all token-counting operations:
from count_tokens import count
# Count tokens in a string
result = count(text="This is a string")
# Count tokens in a file
result = count(file="document.txt", encoding="cl100k_base")
# Count tokens with approximation
result = count(file="document.txt", approximate="w", tokens_per_word=1.5)
Directory Processing
Process all files in a directory that match specific patterns:
from count_tokens import count
# Process a directory
results = count(
    directory="./docs",
    file_patterns=["*.md", "*.txt"],
    recursive=True
)
# Print results
for file_path, token_count in results.items():
print(f"{file_path}: {token_count} tokens")
Streaming Large Files
Process large files without loading the entire file into memory:
from count_tokens import count
# Process a large file with streaming
tokens = count(
    file="large_dataset.txt",
    use_streaming=True,
    chunk_size=1024*1024  # 1MB chunks
)
Check Token Limits
Check if content exceeds token limits:
from count_tokens import count
# Check if a file exceeds token limit
result = count(file="document.txt", max_tokens=4096)
if isinstance(result, dict) and result.get("limit_exceeded"):
    print(f"⚠️ Token limit exceeded: {result['tokens']} > {result['max_tokens']}")
Original API
The original functions are still available for backward compatibility:
from count_tokens.count import count_tokens_in_file, count_tokens_in_string
# Count tokens in a file
num_tokens = count_tokens_in_file("document.txt")
# Count tokens in a string
num_tokens = count_tokens_in_string("This is a string.")
# Use specific encoding
num_tokens = count_tokens_in_string("This is a string.", encoding_name="cl100k_base")
# Word-based approximation with custom tokens per word ratio
num_tokens = count_tokens_in_file("document.txt", approximate="w", tokens_per_word=1.5)
# Character-based approximation with custom characters per token ratio
num_tokens = count_tokens_in_file("document.txt", approximate="c", characters_per_token=3.5)
Related Projects
Credits
Thanks to the authors of the tiktoken library for open-sourcing their work.
License