
Project description

AiDA Whisper Evaluation Framework (Serbian)

An evaluation framework for Serbian Whisper models.

Whisper Evaluator 🎤

A simple, modular framework to evaluate fine-tuned Whisper models in Python notebooks.

This library allows you to easily run evaluations on any dataset from the Hugging Face Hub using a simple configuration dictionary. It calculates a comprehensive set of metrics, including WER, CER, BLEU, and ROUGE, and automatically logs all results to a file.
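As a reminder of what the headline metric measures: WER is the word-level Levenshtein edit distance between reference and hypothesis, divided by the number of reference words. A minimal stdlib sketch for intuition (illustrative only, not the framework's own metric implementation):

```python
def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance over token lists."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # cost of deleting all reference tokens up to i
    for j in range(len(hyp) + 1):
        d[0][j] = j  # cost of inserting all hypothesis tokens up to j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    hyp_words = hypothesis.split()
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)
```

CER is the same computation applied to characters instead of words; lower is better for both.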

Installation

You can install the library directly from GitHub to get the latest updates and features. Make sure Git is installed on your system.

pip install git+https://github.com/your-username/whisper-evaluator.git

Quickstart

Using the library in a Google Colab or Jupyter Notebook is straightforward.

from whisper_evaluator import Evaluator
import json

# 1. Define your evaluation configuration
config = {
    "model_args": {
        "name_or_path": "openai/whisper-large-v2", # Replace with your fine-tuned model ID
        "device": "cuda"
    },
    "task_args": {
        "dataset_name": "mozilla-foundation/common_voice_11_0",
        "dataset_subset": "sr", # Serbian language
        "dataset_split": "test[:20]", # Use the first 20 samples for a quick demo
        "audio_column": "audio",
        "text_column": "sentence"
    }
}

# 2. Initialize the evaluator
evaluator = Evaluator(config=config)

# 3. Run the evaluation (logs to 'evaluation_log.txt' by default)
detailed_results, metrics = evaluator.run()

# 4. Analyze the results
print("\n--- Final Metrics ---")
# Pretty print the metrics dictionary
print(json.dumps(metrics, indent=2))

print("\n--- Sample of evaluation details ---")
# Print the first 3 results from the list
for i, result in enumerate(detailed_results[:3]):
    print(f"\n--- Example {i+1} ---")
    print(f"Reference:  {result['reference']}")
    print(f"Prediction: {result['prediction']}")
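After a run you may want to persist the outputs for later comparison. A sketch (assuming only the `reference`/`prediction` keys shown in the quickstart) that writes the aggregate metrics to JSON and the per-example details to JSON Lines:

```python
import json
from pathlib import Path

def save_outputs(metrics: dict, detailed_results: list, out_dir: str = "eval_out"):
    """Write aggregate metrics to metrics.json and per-example
    details to details.jsonl (one JSON object per line)."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    # ensure_ascii=False keeps Serbian Cyrillic/Latin text readable in the files
    (out / "metrics.json").write_text(
        json.dumps(metrics, indent=2, ensure_ascii=False), encoding="utf-8")
    with (out / "details.jsonl").open("w", encoding="utf-8") as f:
        for row in detailed_results:
            f.write(json.dumps(row, ensure_ascii=False) + "\n")

# Example with placeholder data (in practice, pass the values
# returned by evaluator.run()):
save_outputs({"wer": 0.21, "cer": 0.08},
             [{"reference": "dobar dan", "prediction": "dobar dan"}],
             out_dir="eval_out_demo")
```

JSON Lines keeps each example independently parseable, which is convenient for loading the details into pandas or grepping for specific transcripts.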

Project Setup

Follow these steps to set up the AiDA-Whisper-Eval project.


Using Conda

1. Create a new Conda environment

conda create --name aida python=3.12 -y
conda activate aida

2. Install Poetry

pip install poetry

3. Install project dependencies

Navigate to the project's root directory and run:

poetry install

Using Plain Python

1. Create and activate a virtual environment

python -m venv venv

# On Linux/macOS
source venv/bin/activate

# On Windows
.\venv\Scripts\activate

2. Upgrade pip and install Poetry

pip install --upgrade pip
pip install poetry

3. Install project dependencies

From the project's root directory, run:

poetry install
pip install pre-commit

4. Set up pre-commit hooks

poetry run pre-commit install
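
The install step above expects a `.pre-commit-config.yaml` at the repository root. If you need a starting point, a minimal hypothetical config might look like the following (the hook selection here is an assumption, not the project's actual file):

```yaml
# Hypothetical minimal .pre-commit-config.yaml -- the project's
# real hook selection may differ.
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.6.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```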

Verifying Installation

Verify the installation by running the test suite:

# On Linux/macOS
make test

# On Windows
poetry run pytest
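
The `make test` target presumably wraps the same `poetry run pytest` invocation shown for Windows. If your checkout lacks a Makefile, a hypothetical equivalent target would be:

```makefile
# Hypothetical Makefile target -- the project's actual Makefile may differ.
.PHONY: test
test:
	poetry run pytest
```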

Your setup is complete!

Download files

Download the file for your platform.

Source Distribution

whisper_eval_serbian-0.0.33.tar.gz (14.1 kB view details)

Uploaded Source

Built Distribution


whisper_eval_serbian-0.0.33-py3-none-any.whl (14.6 kB view details)

Uploaded Python 3

File details

Details for the file whisper_eval_serbian-0.0.33.tar.gz.

File metadata

  • Download URL: whisper_eval_serbian-0.0.33.tar.gz
  • Upload date:
  • Size: 14.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.3 Linux/6.14.0-28-generic

File hashes

Hashes for whisper_eval_serbian-0.0.33.tar.gz

  • SHA256: f0c91a9f1c816fcd479f2afbfe24d55a202c32763342c02dd9b23ce59c51c635
  • MD5: 9d1c7b072faa15161fabd716a1f5c130
  • BLAKE2b-256: f9c2e25e638bd7c9a48aae8891e7815e54810c208e09c63b4a81746834a2c1ea


File details

Details for the file whisper_eval_serbian-0.0.33-py3-none-any.whl.

File hashes

Hashes for whisper_eval_serbian-0.0.33-py3-none-any.whl

  • SHA256: 41e9a6db62eaec25b3e85d38828754bf32061c547f77eb293485176a7cc3d1ef
  • MD5: 6a638bbeba63d54fc15e642d5a303824
  • BLAKE2b-256: cda1c0b4d236ce4b60dad72358d41fbe6ee419fa500b98496276871b5b03d63e

