RaRa Text Evaluator
Evaluate the quality of texts on a scale from 0 to 1.
rara-text-evaluator is a Python library for evaluating the quality of text using n-gram models.
✨ Features
- Evaluate text quality with pre-built models or with models you train yourself.
- Build and train n-gram models for text quality evaluation.
- Pre-trained models for Estonian, English, and a language-agnostic fallback.
- Easy to extend for other languages or corpora.
⚡ Quick Start
Get started with rara-text-evaluator in just a few steps:

- Install the Package

  Ensure you're using Python 3.10 or above, then run:

  ```
  pip install rara-text-evaluator
  ```

- Import and Use

  Example usage to evaluate text:

  ```python
  from rara_text_evaluator.quality_evaluator import QualityEvaluator

  evaluator = QualityEvaluator()

  example_text = "Some text here that is over 30 characters long."
  score = evaluator.get_probability(example_text)
  is_valid = evaluator.is_valid(example_text)

  print(f"Text Quality Score: {score}")
  print(f"Text is valid: {is_valid}")
  ```
💡 Important to note
- Texts shorter than 30 characters receive a probability score of 0.0 by default. Both the minimum length and the default response can be configured via the `length_limit` and `default_response` parameters. See the documentation to learn more.
⚙️ Installation Guide
Follow the steps below to install the rara-text-evaluator package, either via pip or locally.
Installation via pip
- Set Up Your Python Environment

  Create or activate a Python environment using Python 3.10 or above.

- Install the Package

  Run the following command:

  ```
  pip install rara-text-evaluator
  ```
Local Installation
Follow these steps to install the rara-text-evaluator package locally:
- Clone the Repository

  Clone the repository and navigate into it:

  ```
  git clone <repository-url>
  cd <repository-directory>
  ```

- Install Git LFS

  Ensure you have Git LFS installed and initialized:

  ```
  git lfs install
  ```

- Pull Git LFS Files

  Retrieve the large files tracked by Git LFS:

  ```
  git lfs pull
  ```

- Set Up Python Environment

  Create or activate a Python environment using Python 3.10 or above, e.g.:

  ```
  conda create -n py310 python=3.10
  conda activate py310
  ```

- Install the Build Package

  Install the `build` package to enable local builds:

  ```
  pip install build
  ```

- Build the Package

  Run the following command inside the repository:

  ```
  python -m build
  ```

- Install the Package

  Install the built package locally:

  ```
  pip install .
  ```
🚀 Testing Guide
Follow these steps to test the rara-text-evaluator package.
How to Test
- Clone the Repository

  Clone the repository and navigate into it:

  ```
  git clone <repository-url>
  cd <repository-directory>
  ```

- Install Git LFS

  Ensure Git LFS is installed and initialized:

  ```
  git lfs install
  ```

- Pull Git LFS Files

  Retrieve the large files tracked by Git LFS:

  ```
  git lfs pull
  ```

- Set Up Python Environment

  Create or activate a Python environment using Python 3.10 or above.

- Install the Build Package

  Install the `build` package:

  ```
  pip install build
  ```

- Build the Package

  Build the package inside the repository:

  ```
  python -m build
  ```

- Install with Testing Dependencies

  Install the package along with its testing dependencies:

  ```
  pip install .[testing]
  ```

- Run Tests

  Run the test suite from the repository root:

  ```
  python -m pytest -v tests
  ```
📝 Documentation
Documentation can be found here.
🌍 Supported Models
The QualityEvaluator class leverages language-specific pre-built models to assess the quality of provided text. It also supports an automatic fallback model for cases where the specified or detected language is not supported.
Built-in Models
The package has built-in support for the following languages:
- Estonian
- English
- Language-agnostic fallback model
The table below provides details on the corpora used for training each model:
| Language | Model Name | Corpora | Words | Characters |
|---|---|---|---|---|
| Estonian | `text_validator_ngram_3_et.pkl` | DIGAR "born digital" articles | 4,164,975 | 30,630,998 |
| English | `text_validator_ngram_3_en.pkl` | NLTK corpora (gutenberg, brown, reuters, webtext) | 5,900,439 | 28,649,578 |
| Language-agnostic | `text_validator_ngram_3_fallback.pkl` | Combined DIGAR and NLTK corpora | 10,065,413 | 59,280,576 |
Additional Notes
- Automatic Fallback: If the target language isn't explicitly supported, the language-agnostic fallback model will be used.
- N-Gram: All models currently use a trigram (`n=3`) approach to evaluate text quality.
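For intuition, the character-trigram technique can be sketched in a few lines of plain Python. This is an illustration of the general idea only, not the library's actual implementation; all function names here are hypothetical:

```python
import math
from collections import defaultdict

def train_trigram_model(corpus: str) -> dict:
    """Count character trigrams: how often each character follows a two-character context."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(len(corpus) - 2):
        context, nxt = corpus[i:i + 2], corpus[i + 2]
        counts[context][nxt] += 1

    # Convert counts to log-probabilities with add-one smoothing over the seen alphabet.
    alphabet = set(corpus)
    model = {}
    for context, nxts in counts.items():
        total = sum(nxts.values()) + len(alphabet)
        model[context] = {c: math.log((nxts.get(c, 0) + 1) / total) for c in alphabet}
    return model

def avg_log_prob(text: str, model: dict, floor: float = math.log(1e-6)) -> float:
    """Average per-trigram log-probability; higher means the text looks more 'natural'."""
    logps = []
    for i in range(len(text) - 2):
        context, nxt = text[i:i + 2], text[i + 2]
        logps.append(model.get(context, {}).get(nxt, floor))
    return sum(logps) / max(len(logps), 1)

# Natural text scores higher than keyboard mashing under a model trained on English.
model = train_trigram_model("the quick brown fox jumps over the lazy dog " * 50)
print(avg_log_prob("the quick brown fox", model) > avg_log_prob("xq zvq jjj qqq", model))  # True
```

The library additionally maps such scores into the 0-1 range and applies the length cut-off and thresholds described elsewhere in this document.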
🔍 More Usage Examples
This section provides additional examples of possible usage and highlights the roles of some parameters.
Impact of the `length_limit` and `default_response` parameters
```python
from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text = "This is a valid text."
text_length = len(text)
score = evaluator.get_probability(text)

print(f"Text length: {text_length} characters")
print(f"Quality score: {score}")
```

Output:

```
Text length: 21 characters
Quality score: 0.0
```
As the default `length_limit` is set to 30 characters, the score is automatically set to the value of `default_response` (0.0).
However, we can modify it to allow shorter texts as well:
```python
from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text = "This is a valid text."
text_length = len(text)
score = evaluator.get_probability(text, length_limit=20)

print(f"Text length: {text_length} characters")
print(f"Quality score: {score}")
```

Output:

```
Text length: 21 characters
Quality score: 0.7611294876594459
```
As we can see, the method now returns a real quality score, although it is a little lower than expected given that the text is completely valid. This is exactly why the cut-off was added in the first place: it lets us distinguish texts that are genuinely low quality from texts that are merely short.
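The cut-off behaviour described above boils down to a simple guard. A minimal sketch (the function name is hypothetical; the real logic lives inside `get_probability`):

```python
def gated_score(raw_score: float, text: str,
                length_limit: int = 30,
                default_response: float = 0.0) -> float:
    """Return default_response for texts shorter than length_limit,
    otherwise the model's raw score."""
    if len(text) < length_limit:
        return default_response
    return raw_score

# A 21-character text is gated with the default limit of 30...
print(gated_score(0.76, "This is a valid text."))                    # 0.0
# ...but passes once the limit is lowered to 20.
print(gated_score(0.76, "This is a valid text.", length_limit=20))   # 0.76
```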
Setting thresholds for binary evaluation
Let's first inspect the results with default thresholds:
```python
from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text_en = "This is more or lesh valihd text but coneins some mistakes."
text_et = "See tekst sishaldap mõnet väjkeset vead, mis võib-olla on okei."

score_en = evaluator.get_probability(text_en)
score_et = evaluator.get_probability(text_et)

is_valid_en = evaluator.is_valid(text_en)
is_valid_et = evaluator.is_valid(text_et)

print(f"text_en is valid: {is_valid_en} (score = {score_en}).")
print(f"text_et is valid: {is_valid_et} (score = {score_et}).")
print(f"Current thresholds for validity: {evaluator.thresholds}")
```

Output:

```
text_en is valid: True (score = 0.8382394549211963).
text_et is valid: True (score = 0.7840549033157463).
Current thresholds for validity: {'et': 0.7, 'en': 0.7, 'fallback': 0.7}
```
As we can see, both the English and the Estonian example texts pass the validity check with default thresholds. Suppose we want much higher quality for our Estonian texts and set the threshold for that language accordingly:
```python
from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text_en = "This is more or lesh valihd text but coneins some mistakes."
text_et = "See tekst sishaldap mõnet väjkeset vead, mis võib-olla on okei."

# Set a higher threshold for Estonian
evaluator.set_threshold(lang="et", threshold=0.9)

score_en = evaluator.get_probability(text_en)
score_et = evaluator.get_probability(text_et)

is_valid_en = evaluator.is_valid(text_en)
is_valid_et = evaluator.is_valid(text_et)

print(f"text_en is valid: {is_valid_en} (score = {score_en}).")
print(f"text_et is valid: {is_valid_et} (score = {score_et}).")
print(f"Current thresholds for validity: {evaluator.thresholds}")
```

Output:

```
text_en is valid: True (score = 0.8382394549211963).
text_et is valid: False (score = 0.7840549033157463).
Current thresholds for validity: {'et': 0.9, 'en': 0.7, 'fallback': 0.7}
```
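Conceptually, the binary check compares the score against the per-language threshold, using the `fallback` entry for languages without one. A minimal sketch of that comparison (hypothetical helper, not the library's code):

```python
def is_valid_score(score: float, lang: str, thresholds: dict) -> bool:
    """Compare a quality score against the threshold for the given
    language, falling back to the 'fallback' entry for unknown languages."""
    threshold = thresholds.get(lang, thresholds["fallback"])
    return score >= threshold

thresholds = {"et": 0.9, "en": 0.7, "fallback": 0.7}
print(is_valid_score(0.838, "en", thresholds))  # True
print(is_valid_score(0.784, "et", thresholds))  # False  (0.784 < 0.9)
print(is_valid_score(0.784, "fr", thresholds))  # True   (uses fallback 0.7)
```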
Building and applying a custom language model
```python
from rara_text_evaluator.ngram_model_builder import NgramModelBuilder
from rara_text_evaluator.quality_evaluator import QualityEvaluator

# Set the class instance parameters
# and create the class instance
n_gram = 2
language = "klingon"
accepted_chars = "jilnrguhqtmpseybvwaocd.,:;-_!\"()%@1234567890' "

nmb = NgramModelBuilder(n=n_gram, lang=language, accepted_chars=accepted_chars)

# Train and save the model
# NB! This is just a dummy example! You should use a much bigger corpus!
text_corpus = "'ach cha'logh nItebHa' 'ej mIw vIghoS 'e' vIghoS, 'each vIghoS 'ej cha'logh vIghoS"
model_path = "klingon_ngram.pkl"

nmb.build_model(text_corpus)
nmb.save_model(model_path)

# Use the new model via a QualityEvaluator instance
evaluator = QualityEvaluator()
evaluator.add_model(lang="klingon", model_path="klingon_ngram.pkl")

# NB! For custom languages not supported by langdetect,
# it is essential to pass the language explicitly via the `lang` parameter!
score = evaluator.get_probability(
    text="vaj nItebHa'vam vIghoS,",
    lang="klingon",
    length_limit=10
)
print(score)
```

Output:

```
0.99999999999107
```
File details
Details for the file rara_text_evaluator-1.1.1.tar.gz.
File metadata
- Download URL: rara_text_evaluator-1.1.1.tar.gz
- Upload date:
- Size: 1.3 MB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `1e88d40783fbe5b6f6389fdca7fa2507d6c7ef18c88d786c1b24a017d8908a6a` |
| MD5 | `38db074d97710b3343f75870ba9e2c91` |
| BLAKE2b-256 | `9aabefa0a1b4ea10d24b0e19f737a73aa3f3967b1cbfb671abc0d32ef4fac5e7` |
File details
Details for the file rara_text_evaluator-1.1.1-py3-none-any.whl.
File metadata
- Download URL: rara_text_evaluator-1.1.1-py3-none-any.whl
- Upload date:
- Size: 1.3 MB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.0.1 CPython/3.12.9
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `2e1384fbbd6b1ae832daddb3ec0e2425381575acafd3e841f0f296547097137c` |
| MD5 | `afb8fa029d4f4d5aa919d5ad20b1690f` |
| BLAKE2b-256 | `adbd9b6a96f5734ff7b94c7fb1862c50a200e3c77c0b400926ba0556076dffed` |