
Evaluate the quality of texts on a scale from 0 to 1


RaRa Text Evaluator

Supports Python 3.10, 3.11, and 3.12.

rara-text-evaluator is a Python library for evaluating the quality of text using n-gram models.


✨ Features

  • Evaluate text quality with pre-built models or create your own.
  • Build and train n-gram models for text quality evaluation.
  • Pre-trained models for Estonian, English, and a language-agnostic fallback.
  • Easy to extend for other languages or corpora.

⚡ Quick Start

Get started with rara-text-evaluator in just a few steps:

  1. Install the Package
    Ensure you're using Python 3.10 or above, then run:

    pip install rara-text-evaluator
    
  2. Import and Use
    Example usage to evaluate text:

    from rara_text_evaluator.quality_evaluator import QualityEvaluator
    
    evaluator = QualityEvaluator()
    
    example_text = "Some text here that is over 30 characters long."
    score = evaluator.get_probability(example_text)
    is_valid = evaluator.is_valid(example_text)
    
    print(f"Text Quality Score: {score}")
    print(f"Text is valid: {is_valid}")
    

💡 Important to note

  • Texts shorter than 30 characters result in a probability score of 0.0 by default. Both the minimum length and the default response can be configured with the parameters length_limit and default_response. See the documentation to learn more.
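The length gate described above can be sketched in plain Python. This is an illustrative sketch of the behavior, not the library's actual implementation; gated_score and the dummy scorer are hypothetical names:

```python
from typing import Callable

def gated_score(text: str, score_fn: Callable[[str], float],
                length_limit: int = 30,
                default_response: float = 0.0) -> float:
    # Texts below the minimum length get the default response
    # instead of a real model score.
    if len(text) < length_limit:
        return default_response
    return score_fn(text)

# A dummy scorer standing in for the real n-gram model:
print(gated_score("Too short.", lambda t: 0.9))  # 0.0
print(gated_score("This sentence is comfortably long enough.", lambda t: 0.9))  # 0.9
```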

⚙️ Installation Guide

Follow the steps below to install the rara-text-evaluator package, either via pip or locally.


Installation via pip

  1. Set Up Your Python Environment
    Create or activate a Python environment using Python 3.10 or above.

  2. Install the Package
    Run the following command:

    pip install rara-text-evaluator
    

Local Installation

Follow these steps to install the rara-text-evaluator package locally:

  1. Clone the Repository
    Clone the repository and navigate into it:

    git clone <repository-url>
    cd <repository-directory>
    
  2. Install Git LFS
    Ensure you have Git LFS installed and initialized:

    git lfs install
    
  3. Pull Git LFS Files
    Retrieve the large files tracked by Git LFS:

    git lfs pull
    
  4. Set Up Python Environment
    Create or activate a Python environment using Python 3.10 or above, e.g.:

    conda create -n py310 python==3.10
    conda activate py310
    
  5. Install Build Package
    Install the build package to enable local builds:

    pip install build
    
  6. Build the Package
    Run the following command inside the repository:

    python -m build
    
  7. Install the Package
    Install the built package locally:

    pip install .
    

🚀 Testing Guide

Follow these steps to test the rara-text-evaluator package.

How to Test

  1. Clone the Repository
    Clone the repository and navigate into it:

    git clone <repository-url>
    cd <repository-directory>
    
  2. Install Git LFS
    Ensure Git LFS is installed and initialized:

    git lfs install
    
  3. Pull Git LFS Files
    Retrieve the large files tracked by Git LFS:

    git lfs pull
    
  4. Set Up Python Environment
    Create or activate a Python environment using Python 3.10 or above.

  5. Install Build Package
    Install the build package:

    pip install build
    
  6. Build the Package
    Build the package inside the repository:

    python -m build
    
  7. Install with Testing Dependencies
    Install the package along with its testing dependencies:

    pip install .[testing]
    
  8. Run Tests
    Run the test suite from the repository root:

    python -m pytest -v tests
    

📝 Documentation

Documentation can be found here.

🌍 Supported Models

The QualityEvaluator class leverages language-specific pre-built models to assess the quality of provided text. It also supports an automatic fallback model for cases where the specified or detected language is not supported.
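The fallback behavior can be illustrated with a simple lookup. This is a hypothetical sketch of the selection logic, not the library's internals; MODELS and select_model are illustrative names (the .pkl file names are from the table below):

```python
# Map language codes to pre-built model files; unknown languages
# fall back to the language-agnostic model.
MODELS = {
    "et": "text_validator_ngram_3_et.pkl",
    "en": "text_validator_ngram_3_en.pkl",
    "fallback": "text_validator_ngram_3_fallback.pkl",
}

def select_model(lang: str) -> str:
    return MODELS.get(lang, MODELS["fallback"])

print(select_model("en"))  # text_validator_ngram_3_en.pkl
print(select_model("fr"))  # text_validator_ngram_3_fallback.pkl
```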

Built-in Models

The package has built-in support for the following languages:

  • Estonian
  • English
  • Language-agnostic fallback model

The table below provides details on the corpora used for training each model:

Language           Model Name                            Corpora                                            Words       Characters
Estonian           text_validator_ngram_3_et.pkl         DIGAR "born digital" articles                      4,164,975   30,630,998
English            text_validator_ngram_3_en.pkl         NLTK corpora (gutenberg, brown, reuters, webtext)  5,900,439   28,649,578
Language-agnostic  text_validator_ngram_3_fallback.pkl   Combined DIGAR and NLTK corpora                    10,065,413  59,280,576

Additional Notes

  • Automatic Fallback: If the target language isn't explicitly supported, the language-agnostic fallback model will be used.
  • N-Gram: All models currently use a trigram (n=3) approach to evaluate text quality.
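To give an intuition for the trigram approach, here is a toy character-trigram scorer: text whose trigrams are common in the training corpus scores higher than gibberish. This is only an illustrative sketch, not the library's actual model or smoothing scheme:

```python
from collections import Counter

def trigrams(text: str):
    # Sliding window of 3 characters over the text.
    return [text[i:i + 3] for i in range(len(text) - 2)]

# A tiny stand-in corpus; real models are trained on millions of words.
corpus = "the quick brown fox jumps over the lazy dog"
counts = Counter(trigrams(corpus))
total = sum(counts.values())

def avg_trigram_freq(text: str) -> float:
    # Average relative frequency of the text's trigrams in the corpus.
    grams = trigrams(text.lower())
    if not grams:
        return 0.0
    return sum(counts[g] for g in grams) / (total * len(grams))

# Text resembling the corpus scores higher than gibberish:
print(avg_trigram_freq("the lazy fox") > avg_trigram_freq("xqzw vbnk"))  # True
```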

🔍 More Usage Examples

This section provides additional examples of possible usage and highlights the roles of some parameters.

Impact of parameters length_limit and default_response

from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text = "This is a valid text."
text_length = len(text)

score = evaluator.get_probability(text)

print(f"Text length: {text_length} characters")
print(f"Quality score: {score}")

Output:

Text length: 21 characters
Quality score: 0.0

As the default length_limit is set to 30 characters, the output score is automatically set to the value of default_response (0.0).

However, we can modify it to allow shorter texts as well:

from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text = "This is a valid text."
text_length = len(text)

score = evaluator.get_probability(text, length_limit=20)

print(f"Text length: {text_length} characters")
print(f"Quality score: {score}")

Output:

Text length: 21 characters
Quality score: 0.7611294876594459

As we can see, the method now returns a real quality score; it is just slightly lower than expected, considering the text is actually completely valid. This is why the cut-off was added in the first place: so we can distinguish texts that actually have low quality from texts that are merely short.

Setting thresholds for binary evaluation


Let's first inspect the results with default thresholds:

from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text_en = "This is more or lesh valihd text but coneins some mistakes."
text_et = "See tekst sishaldap mõnet väjkeset vead, mis võib-olla on okei."

score_en = evaluator.get_probability(text_en)
score_et = evaluator.get_probability(text_et)

is_valid_en = evaluator.is_valid(text_en)
is_valid_et = evaluator.is_valid(text_et)

print(f"text_en is valid: {is_valid_en} (score = {score_en}).")
print(f"text_et is valid: {is_valid_et} (score = {score_et}).")
print(f"Current thresholds for validity: {evaluator.thresholds}")

Output:

text_en is valid: True (score = 0.8382394549211963).
text_et is valid: True (score = 0.7840549033157463).
Current thresholds for validity: {'et': 0.7, 'en': 0.7, 'fallback': 0.7}

As we can see, both the English and Estonian example texts pass the validity check with the default thresholds. Let's assume we want much higher quality for our Estonian texts and set the threshold for that language higher:

from rara_text_evaluator.quality_evaluator import QualityEvaluator

evaluator = QualityEvaluator()

text_en = "This is more or lesh valihd text but coneins some mistakes."
text_et = "See tekst sishaldap mõnet väjkeset vead, mis võib-olla on okei."

# Let's set a higher threshold for Estonian
evaluator.set_threshold(lang="et", threshold=0.9)

score_en = evaluator.get_probability(text_en)
score_et = evaluator.get_probability(text_et)

is_valid_en = evaluator.is_valid(text_en)
is_valid_et = evaluator.is_valid(text_et)

print(f"text_en is valid: {is_valid_en} (score = {score_en}).")
print(f"text_et is valid: {is_valid_et} (score = {score_et}).")

print(f"Current thresholds for validity: {evaluator.thresholds}")

Output:

text_en is valid: True (score = 0.8382394549211963).
text_et is valid: False (score = 0.7840549033157463).
Current thresholds for validity: {'et': 0.9, 'en': 0.7, 'fallback': 0.7}
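The validity check shown above boils down to comparing a score against the per-language threshold. This is an illustrative sketch of that logic in plain Python, not the library's internals; the dict and function names are hypothetical:

```python
# Per-language thresholds, with a generic fallback entry.
thresholds = {"et": 0.9, "en": 0.7, "fallback": 0.7}

def is_valid(score: float, lang: str) -> bool:
    # Unknown languages use the fallback threshold.
    threshold = thresholds.get(lang, thresholds["fallback"])
    return score >= threshold

print(is_valid(0.8382394549211963, "en"))  # True: above the 0.7 English threshold
print(is_valid(0.7840549033157463, "et"))  # False: below the raised 0.9 Estonian threshold
```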

Building and applying a custom language model

from rara_text_evaluator.ngram_model_builder import NgramModelBuilder
from rara_text_evaluator.quality_evaluator import QualityEvaluator

# Setting class instance parameters
# and creating the class instance
n_gram = 2
language = "klingon"
accepted_chars = "jilnrguhqtmpseybvwaocd.,:;-_!\"()%@1234567890' "

nmb = NgramModelBuilder(n=n_gram, lang=language, accepted_chars=accepted_chars)

# Training and saving the model

# NB! This is just a dummy example! You should use a much bigger corpus!
text_corpus = "'ach cha'logh nItebHa' 'ej mIw vIghoS 'e' vIghoS, 'each vIghoS 'ej cha'logh vIghoS"
model_path = "klingon_ngram.pkl"

nmb.build_model(text_corpus)
nmb.save_model(model_path)

# Using the new model via QualityEvaluator instance

evaluator = QualityEvaluator()
evaluator.add_model(lang="klingon", model_path="klingon_ngram.pkl")

# NB! For custom languages not supported by langdetect,
# it is paramount to pass the language explicitly via the `lang` parameter!
score = evaluator.get_probability(
    text="vaj nItebHa'vam vIghoS,",
    lang="klingon",
    length_limit=10
)
print(score)

Output:

0.99999999999107
