Project description

AutoEvaluator: An LLM-based LLM Evaluator

AutoEvaluator is a Python library that speeds up quality-control work on large language model (LLM) outputs. It provides a simple, transparent, and user-friendly API that identifies True Positive (TP), False Positive (FP), and False Negative (FN) statements by comparing a generated statement against the ground truth you provide. Get ready to turbocharge your LLM evaluations!


Features:

  • Evaluate LLM outputs against a reference dataset or human judgement.
  • Generate TP, FP, and FN sentences based on the ground truth provided.
  • Calculate precision, recall, and F1 score.
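
The precision, recall, and F1 score reported by the library follow the standard definitions over TP/FP/FN counts. A minimal self-contained sketch of those formulas (for illustration only; autoevaluator computes them for you):

```python
def prf1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from TP/FP/FN counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: 3 true positives, 1 false positive, 1 false negative
p, r, f = prf1(3, 1, 1)
print(p, r, f)  # 0.75 0.75 0.75
```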

Installation

Autoevaluator requires Python 3.9; its dependencies are installed automatically by pip. You can install autoevaluator with:

pip install autoevaluator

Usage

  1. Prepare your data:

    • Create a dataset containing LLM outputs and their corresponding ground truth labels.
    • The format of the data can be customized depending on the evaluation task.
    • Example: A CSV file with columns for "prompt," "llm_output," and "ground_truth"
  2. Set up environment variables:

import os
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_API_KEY"] = "<AZURE_OPENAI_API_KEY>"
os.environ["AZURE_OPENAI_ENDPOINT"] = "<AZURE_OPENAI_ENDPOINT>"
os.environ["DEPLOYMENT"] = "<azure>/<not-azure>"
  3. Run autoevaluator:
from autoevaluator import evaluate
generated_statement = "Hamlet was written by William Shakespeare in 1601."  # example LLM output
ground_truth = "William Shakespeare wrote Hamlet."  # example reference
eval_results = evaluate(generated_statement, ground_truth)
  4. Output:
    • evaluate returns a dictionary with the following information:
      • TP, FP, and FN sentences
      • precision, recall, and F1 score
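
A dataset in the CSV layout from step 1 can be created and read back with the standard library's csv module. A minimal sketch (the file name and example row are illustrative, not part of the library):

```python
import csv

# Write a tiny evaluation dataset with the columns from step 1.
rows = [
    {"prompt": "Who wrote Hamlet?",
     "llm_output": "Hamlet was written by William Shakespeare in 1601.",
     "ground_truth": "William Shakespeare wrote Hamlet."},
]
with open("eval_data.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["prompt", "llm_output", "ground_truth"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back; each row's llm_output / ground_truth pair can then be
# passed to evaluate(row["llm_output"], row["ground_truth"]).
with open("eval_data.csv", newline="") as f:
    data = list(csv.DictReader(f))
print(data[0]["prompt"])  # Who wrote Hamlet?
```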

License:

This project is licensed under the MIT License. See the LICENSE file for details.

Download files


Source Distribution: autoevaluator-0.2.5.tar.gz (4.7 kB)

Built Distribution: autoevaluator-0.2.5-py3-none-any.whl (6.4 kB)
