entroprisal
Calculate information-theoretic linguistic metrics on text using reference corpora.
Overview
entroprisal is a Python package that computes various entropy and surprisal metrics for text analysis. It provides three main calculators:
- TokenEntropisalCalculator: Token-level n-gram entropy and surprisal
- CharacterEntropisalCalculator: Character-level entropy and surprisal
- RestOfWordEntropisalCalculator: Character-level rest-of-word entropy and surprisal (bidirectional: left-to-right and right-to-left word completion)
These metrics are useful for analyzing text complexity, readability, and information content.
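For intuition (a conceptual sketch, not the package's internal implementation): the surprisal of an event is -log2 P(event | context), and entropy is the expected surprisal over all possible continuations of a context. A toy example with hypothetical counts:

```python
import math

# Toy illustration with hypothetical counts of words observed after
# the context "the" (not the package's internals).
counts = {"quick": 10, "lazy": 5, "dog": 85}
total = sum(counts.values())

# Surprisal of one continuation: -log2 P(word | context)
p_quick = counts["quick"] / total
print(f"surprisal('quick' | 'the') = {-math.log2(p_quick):.2f} bits")  # ~3.32

# Entropy of the context: expected surprisal over all continuations
entropy = -sum(c / total * math.log2(c / total) for c in counts.values())
print(f"entropy('the') = {entropy:.2f} bits")
```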
Installation
Basic Installation
pip install entroprisal[all]
The package will automatically download reference data files from Hugging Face Hub when first used (~4GiB total).
spaCy and huggingface-hub are optional dependencies for additional functionality. A minimal installation without these dependencies is also possible:
pip install entroprisal
Optional Dependencies (included in [all])
- huggingface-hub is used for faster downloads with caching (recommended).
- spacy is used for tokenization and for classifying content words vs. function words in your target text.
If using spaCy, you will need to download a spaCy language model as well:
python -m spacy download en_core_web_lg
Development Installation
# Clone the repository
git clone https://github.com/learlab/entroprisal.git
cd entroprisal
# Install in editable mode with dev dependencies
uv pip install -e .[dev]
Data Files
Reference corpus files are automatically downloaded from Hugging Face Hub on first use:
- google-books-dictionary-words.txt - Word frequencies (included in package)
- 4grams_aw.parquet - All-word 4-gram frequencies (~2GiB)
- 4grams_cw.parquet - Content-word 4-gram frequencies (~1.8GiB)
Files are cached locally to avoid re-downloading. To use the faster Hugging Face Hub downloader with resume capability, install with pip install entroprisal[hf].
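To see where the files are cached, get_data_dir() (documented under Utilities below) returns the local data directory. A minimal sketch, assuming it returns a pathlib.Path:

```python
from entroprisal.utils import get_data_dir

# Check which reference files are already cached (names from the list above).
data_dir = get_data_dir()
for name in ["google-books-dictionary-words.txt", "4grams_aw.parquet", "4grams_cw.parquet"]:
    status = "cached" if (data_dir / name).exists() else "downloaded on first use"
    print(f"{name}: {status}")
```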
Quick Start
Text Preprocessing
For best results, preprocess your text using the preprocess_text() function, which uses spaCy for tokenization. This ensures consistency with how the reference corpora were prepared.
from entroprisal import preprocess_text
# Preprocess text (requires spaCy: pip install entroprisal[spacy])
text = "The quick brown fox jumps over the lazy dog."
tokens = preprocess_text(text)
# [['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']]
# For content-word-only analysis (nouns, verbs, adjectives, adverbs)
content_tokens = preprocess_text(text, content_words_only=True)
# [['quick', 'brown', 'fox', 'jumps', 'lazy', 'dog']]
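The nested lists returned by preprocess_text() plug directly into the token-level calculator's batch method. A short sketch, assuming calc is the TokenEntropisalCalculator initialized in the next section:

```python
# One token list per document, matching the input expected by
# TokenEntropisalCalculator.calculate_batch (see the next section for setup).
token_lists = preprocess_text(["First document.", "Second document."])
results_df = calc.calculate_batch(token_lists)
```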
Token-Level Entropy and Surprisal
from entroprisal import TokenEntropisalCalculator
from entroprisal.utils import load_4grams
# Load reference n-gram data
ngrams = load_4grams("aw") # "aw" = all words, "cw" = content words
# Initialize calculator
calc = TokenEntropisalCalculator(ngrams, min_frequency=100)
# Calculate metrics for a list of tokens
tokens = ["the", "quick", "brown", "fox"]
metrics = calc.calculate_metrics(tokens)
print(metrics)
# Output includes:
# - ngram_surprisal_1, ngram_surprisal_2, ngram_surprisal_3
# - ngram_entropy_1, ngram_entropy_2, ngram_entropy_3
# - Support counts for each metric
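For a per-token breakdown, get_detailed_ngram_analysis() (listed in the API Reference below) returns one DataFrame per n-gram order; a brief sketch:

```python
# Per-token analysis, keyed by n-gram order.
detailed = calc.get_detailed_ngram_analysis(tokens)
for n, df in detailed.items():
    print(f"--- {n}-gram analysis ---")
    print(df.head())
```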
Character-Level Entropy and Surprisal
from entroprisal import CharacterEntropisalCalculator
from entroprisal.utils import load_google_books_words
# Load word frequency data
words_df = load_google_books_words()
# Initialize calculator
calc = CharacterEntropisalCalculator(words_df)
# Calculate metrics for text
text = "The quick brown fox jumps over the lazy dog"
metrics = calc.calculate_metrics(text)
print(metrics)
# Output includes:
# - char_entropy, char_surprisal: Single character transition metrics
# - bigraph_entropy, bigraph_surprisal: Two-character context metrics
# - trigraph_entropy, trigraph_surprisal: Three-character context metrics
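Individual values can also be looked up directly via the methods listed in the API Reference below; a sketch, noting that each lookup returns None when the queried sequence is absent from the reference tables:

```python
# Point lookups against the reference tables; each returns None on a miss.
print(calc.get_character_entropy("t"))         # entropy for the character 't'
print(calc.get_character_surprisal("t", "h"))  # surprisal of 'h' given context 't'
print(calc.get_bigraph_entropy("th"))
print(calc.get_trigraph_surprisal("the"))
```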
Rest-of-Word Entropy and Surprisal (Character-Level, Bidirectional)
from entroprisal import RestOfWordEntropisalCalculator
from entroprisal.utils import load_google_books_words
# Load word frequency data
words_df = load_google_books_words()
# Initialize calculator
calc = RestOfWordEntropisalCalculator(words_df)
# Calculate metrics for text
text = "The quick brown fox"
metrics = calc.calculate_metrics(text)
print(metrics)
# Output includes:
# - lr_c1_entropy, lr_c1_surprisal: Left-to-right, 1-char context
# - lr_c2_entropy, lr_c2_surprisal: Left-to-right, 2-char context
# - lr_c3_entropy, lr_c3_surprisal: Left-to-right, 3-char context
# - rl_c1_entropy, rl_c1_surprisal: Right-to-left, 1-char context
# - rl_c2_entropy, rl_c2_surprisal: Right-to-left, 2-char context
# - rl_c3_entropy, rl_c3_surprisal: Right-to-left, 3-char context
# - mean_word_length
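The underlying word frequencies can be queried directly with get_word_frequency() (listed in the API Reference below); a one-line sketch, assuming words absent from the reference corpus return 0:

```python
# Raw reference-corpus frequency for a single word.
print(calc.get_word_frequency("fox"))
```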
Batch Processing
All calculators support batch processing:
# Process multiple texts at once
texts = [
"First text sample",
"Second text sample",
"Third text sample"
]
# Returns a pandas DataFrame with one row per text
results_df = calc.calculate_batch(texts)
print(results_df)
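A common follow-on pattern is to run several calculators over the same texts and join the per-text rows into one feature table. A sketch, assuming char_calc and row_calc are initialized CharacterEntropisalCalculator and RestOfWordEntropisalCalculator instances from the sections above:

```python
import pandas as pd

# One row per text from each calculator; the column names documented
# above do not overlap, so a column-wise concat is safe.
char_df = char_calc.calculate_batch(texts)
row_df = row_calc.calculate_batch(texts)
features = pd.concat([char_df, row_df], axis=1)
print(features.columns.tolist())
```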
API Reference
TokenEntropisalCalculator
Calculate token-level entropy and surprisal metrics using n-gram frequencies.
Methods:
- calculate_metrics(tokens: List[str]) -> Dict[str, float]: Calculate metrics for a token list
- calculate_batch(token_lists: List[List[str]]) -> pd.DataFrame: Batch processing
- get_detailed_ngram_analysis(tokens: List[str]) -> Dict[int, pd.DataFrame]: Detailed per-token analysis
CharacterEntropisalCalculator
Calculate character-level transition entropy and surprisal.
Methods:
- calculate_metrics(text: str) -> Dict[str, float]: Calculate metrics for text
- calculate_batch(texts: List[str]) -> pd.DataFrame: Batch processing
- get_character_entropy(char: str) -> Optional[float]: Look up entropy for a specific character
- get_character_surprisal(context: str, target: str) -> Optional[float]: Look up surprisal for a character transition
- get_bigraph_entropy(bigraph: str) -> Optional[float]: Look up entropy for a bigraph
- get_bigraph_surprisal(bigraph: str) -> Optional[float]: Look up surprisal for a bigraph
- get_trigraph_entropy(trigraph: str) -> Optional[float]: Look up entropy for a trigraph
- get_trigraph_surprisal(trigraph: str) -> Optional[float]: Look up surprisal for a trigraph
RestOfWordEntropisalCalculator
Calculate character-level rest-of-word entropy and surprisal in both directions (predicting remaining characters from left-to-right and right-to-left contexts).
Methods:
- calculate_metrics(text: str) -> Dict[str, float]: Calculate metrics for text
- calculate_batch(texts: List[str]) -> pd.DataFrame: Batch processing
- get_word_frequency(word: str) -> int: Get the frequency of a word in the reference corpus
Utilities
from entroprisal.utils import (
load_google_books_words,
load_4grams,
get_data_dir,
preprocess_text,
is_content_token
)
# Load reference data
words_df = load_google_books_words()
ngrams_aw = load_4grams("aw")
ngrams_cw = load_4grams("cw")
# Get data directory path
data_dir = get_data_dir()
# Preprocess text with spaCy tokenization
# Returns list of token lists (one per document)
tokens = preprocess_text("The quick brown fox jumps over the lazy dog.")
# [['the', 'quick', 'brown', 'fox', 'jumps', 'over', 'the', 'lazy', 'dog']]
# Process multiple texts
texts = ["First sentence.", "Second sentence."]
token_lists = preprocess_text(texts)
# Extract only content words (nouns, verbs, adjectives, adverbs)
content_tokens = preprocess_text("The quick brown fox jumps.", content_words_only=True)
# [['quick', 'brown', 'fox', 'jumps']] # 'the' filtered out
# Use a different spaCy model
tokens = preprocess_text("Some text", spacy_model_tag="en_core_web_sm")
Examples
See examples/usage_examples.ipynb for comprehensive examples including:
- Loading and initializing calculators
- Processing single texts and batches
- Combining multiple metrics
- Visualizing results
Development
Running Tests
pytest tests/
Code Style
# Format code
black src/
# Lint code
ruff check src/
License
It's MIT licensed. Do what you want with it.
Citation
If you are an academic, please cite the package as follows:
@software{entroprisal,
title = {entroprisal: Entropy-based linguistic metrics},
author = {Langdon Holmes and Scott Crossley},
year = {2025},
url = {https://github.com/learlab/entroprisal}
}
Holmes, L., & Crossley, S. (2025). entroprisal: Entropy-based linguistic metrics [Computer software].
Acknowledgments
Reference data sources:
- Google Books word frequencies: gwordlist
- N-gram token frequencies: Derived from the SlimPajama test set (slimpajama)