
Indox Judge

Project description

inDoxJudge


Official Website · Documentation · Discord

NEW: Subscribe to our mailing list for updates and news!

Welcome to IndoxJudge! This repository provides a comprehensive suite of evaluation metrics for assessing the performance and quality of large language models (LLMs). Whether you're a researcher, developer, or enthusiast, this toolkit offers essential tools to measure various aspects of LLMs, including knowledge retention, bias, toxicity, and more.

IndoxJudge: Evaluate LLMs with metrics & model safety

Overview

IndoxJudge is designed to provide a standardized and extensible framework for evaluating LLMs. With a focus on accuracy, fairness, and relevancy, this toolkit supports a wide range of evaluation metrics and is continuously updated to include the latest advancements in the field.

Features

  • Comprehensive Metrics: Evaluate LLMs across multiple dimensions, including accuracy, bias, toxicity, and contextual relevancy.
  • RAG Evaluation: Includes specialized metrics for evaluating retrieval-augmented generation (RAG) models.
  • Safety Evaluation: Assess the safety of model outputs, focusing on toxicity, bias, and ethical considerations.
  • Extensible Framework: Easily integrate new metrics or customize existing ones to suit specific needs.
  • User-Friendly Interface: Intuitive and easy-to-use interface for seamless evaluation.
  • Continuous Updates: Regular updates to incorporate new metrics and improvements.

Supported Models

IndoxJudge currently supports the following LLM models; a minimal initialization sketch follows the list:

  • OpenAi
  • GoogleAi
  • IndoxApi
  • HuggingFaceModel
  • Mistral
  • Pheonix (coming soon; you can follow progress in the phoenix_cli or phoenix repositories)
  • Ollama
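
As a minimal sketch, the snippet below initializes one of these wrappers. The OpenAi call mirrors the Usage section further down; the assumption is that the other wrappers follow a similar constructor pattern, though their exact parameters may differ.

import os
from dotenv import load_dotenv

# Model wrappers are exposed under indoxJudge.models
from indoxJudge.models import OpenAi

# Load the API key from a .env file, as in the Usage example below
load_dotenv()

# Initialize the OpenAI wrapper; the other wrappers are assumed to take
# similar arguments, but check each class before relying on this.
model = OpenAi(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o")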

Metrics

IndoxJudge includes the following metrics, with more being added; a sketch of combining several metrics in one evaluator follows the list:

  • GEval: General evaluation metric for LLMs.
  • KnowledgeRetention: Assesses the ability of LLMs to retain factual information.
  • BertScore: Measures the similarity between generated and reference sentences.
  • Toxicity: Evaluates the presence of toxic content in model outputs.
  • Bias: Analyzes the potential biases in LLM outputs.
  • Hallucination: Identifies instances where the model generates false or misleading information.
  • Faithfulness: Checks the alignment of generated content with source material.
  • ContextualRelevancy: Assesses the relevance of responses in context.
  • Rouge: Measures the overlap of n-grams between generated and reference texts.
  • BLEU: Evaluates the quality of text generation based on precision.
  • AnswerRelevancy: Assesses the relevance of answers to questions.
  • METEOR: Evaluates machine translation quality.
  • Gruen: Measures the quality of generated text by assessing grammaticality, redundancy, and focus.
  • Overallscore: Provides an overall evaluation score for LLMs, computed as a weighted average of multiple metrics.
  • MCDA: Multi-Criteria Decision Analysis for evaluating LLMs.
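
The sketch below runs one response through several metrics at once, following the CustomEvaluator pattern from the Usage section. The Faithfulness arguments match that section; the Toxicity and Bias constructor arguments are assumptions and may differ from the real signatures, so check the documentation for each metric.

import os
from dotenv import load_dotenv
from indoxJudge.pipelines import CustomEvaluator
from indoxJudge.models import OpenAi
from indoxJudge.metrics import Faithfulness, Toxicity, Bias

load_dotenv()
model = OpenAi(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o")

response = "The Mediterranean diet is known for its health benefits."
retrieval_context = ["The Mediterranean diet emphasizes plant-based foods."]

metrics = [
    Faithfulness(llm_response=response, retrieval_context=retrieval_context),
    Toxicity(llm_response=response),  # assumed parameter name
    Bias(llm_response=response),      # assumed parameter name
]

evaluator = CustomEvaluator(metrics=metrics, model=model)
print(evaluator.judge())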

Installation

To install IndoxJudge, follow these steps:

git clone https://github.com/yourusername/indoxjudge.git
cd indoxjudge

Setting Up the Python Environment

If you are running this project locally, create a Python virtual environment so that all dependencies are managed correctly. Follow the steps below to set up a virtual environment named indox_judge:

Windows

  1. Create the virtual environment:
python -m venv indox_judge
  2. Activate the virtual environment:
indox_judge\Scripts\activate

macOS/Linux

  1. Create the virtual environment:
python3 -m venv indox_judge
  2. Activate the virtual environment:
source indox_judge/bin/activate

Install Dependencies

Once the virtual environment is activated, install the required dependencies by running:

pip install -r requirements.txt
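
If you only want to use the library rather than work from source, installing the published package from PyPI should also work; the package name below is taken from the distribution files listed under Download files.

pip install indoxjudge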

Usage

To use IndoxJudge, load your API key, select the model, and choose the evaluation metrics. Here's an example demonstrating how to evaluate a model's response for faithfulness:

import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Import IndoxJudge and supported models
from indoxJudge.pipelines import CustomEvaluator
from indoxJudge.models import OpenAi
from indoxJudge.metrics import Faithfulness

# Initialize the model with your API key
model = OpenAi(api_key=OPENAI_API_KEY, model="gpt-4o")

# Define your query and retrieval context
query = "What are the benefits of a Mediterranean diet?"
retrieval_context = [
    "The Mediterranean diet emphasizes eating primarily plant-based foods, such as fruits and vegetables, whole grains, legumes, and nuts. It also includes moderate amounts of fish and poultry, and low consumption of red meat. Olive oil is the main source of fat, providing monounsaturated fats which are beneficial for heart health.",
    "Research has shown that the Mediterranean diet can reduce the risk of heart disease, stroke, and type 2 diabetes. It is also associated with improved cognitive function and a lower risk of Alzheimer's disease. The diet's high content of fiber, antioxidants, and healthy fats contributes to its numerous health benefits.",
    "A Mediterranean diet has been linked to a longer lifespan and a reduced risk of chronic diseases. It promotes healthy aging and weight management due to its emphasis on whole, unprocessed foods and balanced nutrition."
]

# Obtain the model's response
response = "The Mediterranean diet is known for its health benefits, including reducing the risk of heart disease, stroke, and diabetes. It encourages the consumption of fruits, vegetables, whole grains, nuts, and olive oil, while limiting red meat. Additionally, this diet has been associated with better cognitive function and a reduced risk of Alzheimer's disease, promoting longevity and overall well-being."

# Initialize the Faithfulness metric
faithfulness_metrics = Faithfulness(llm_response=response, retrieval_context=retrieval_context)

# Create an evaluator with the selected metrics
evaluator = CustomEvaluator(metrics=[faithfulness_metrics], model=model)

# Evaluate the response
faithfulness_result = evaluator.judge()

# Output the evaluation result
print(faithfulness_result)

Example Output

{
  "faithfulness": {
    "claims": [


      "The Mediterranean diet is known for its health benefits.",
      "The Mediterranean diet reduces the risk of heart disease.",
      "The Mediterranean diet reduces the risk of stroke.",
      "The Mediterranean diet reduces the risk of diabetes.",
      "The Mediterranean diet encourages the consumption of fruits.",
      "The Mediterranean diet encourages the consumption of vegetables.",
      "The Mediterranean diet encourages the consumption of whole grains.",
      "The Mediterranean diet encourages the consumption of nuts.",
      "The Mediterranean diet encourages the consumption of olive oil.",
      "The Mediterranean diet limits red meat consumption.",
      "The Mediterranean diet is associated with better cognitive function.",
      "The Mediterranean diet is associated with a reduced risk of Alzheimer's disease.",
      "The Mediterranean diet promotes longevity.",
      "The Mediterranean diet promotes overall well-being."
    ],
    "truths": [
      "The Mediterranean diet is known for its health benefits.",
      "The Mediterranean diet reduces the risk of heart disease, stroke, and diabetes.",
      "The Mediterranean diet encourages the consumption of fruits, vegetables, whole grains, nuts, and olive oil.",
      "The Mediterranean diet limits red meat consumption.",
      "The Mediterranean diet has been associated with better cognitive function.",
      "The Mediterranean diet has been associated with a reduced risk of Alzheimer's disease.",
      "The Mediterranean diet promotes longevity and overall well-being."
    ],
    "reason": "The score is 1.0 because the 'actual output' aligns perfectly with the information presented in the 'retrieval context', showcasing the health benefits, disease risk reduction, cognitive function improvement, and overall well-being promotion of the Mediterranean diet."
  }
}
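
Assuming judge() returns a nested dictionary shaped like the example output above (if it returns a JSON string instead, parse it with json.loads first), individual fields can be read directly:

# A minimal sketch of reading fields from the result shown above.
report = faithfulness_result["faithfulness"]

print("Extracted claims:", len(report["claims"]))
print("Supported truths:", len(report["truths"]))
print("Reason:", report["reason"])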

Roadmap

We have an exciting roadmap planned for IndoxJudge:

  • Integration of additional metrics such as Diversity and Coherence.
  • Introduction of a graphical user interface (GUI) for easier evaluation.
  • Expansion of the toolkit to support evaluation in multiple languages.
  • Release of a benchmarking suite for standardizing LLM evaluations.

Contributing

We welcome contributions from the community! If you'd like to contribute, please fork the repository and create a pull request. For major changes, please open an issue first to discuss what you would like to change.

  1. Fork the repository
  2. Create a new branch (git checkout -b feature-branch)
  3. Commit your changes (git commit -am 'Add new feature')
  4. Push to the branch (git push origin feature-branch)
  5. Create a pull request

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

indoxjudge-0.0.5.tar.gz (204.7 kB)

Uploaded Source

Built Distribution

indoxJudge-0.0.5-py3-none-any.whl (300.9 kB)

Uploaded Python 3

File details

Details for the file indoxjudge-0.0.5.tar.gz.

File metadata

  • Download URL: indoxjudge-0.0.5.tar.gz
  • Upload date:
  • Size: 204.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.0

File hashes

Hashes for indoxjudge-0.0.5.tar.gz:

  • SHA256: 4cec3495a448c0c0a8f1de4bb883870a6fda2d9b434abbff5e29df6768231f3e
  • MD5: fe1f7e549bd2c3604dfcd40666b44a5d
  • BLAKE2b-256: 1fdd402c0a499b5ab83684d306b5d4b2407b9e2259a7f7d698ca81472d4f9a1a

See the PyPI documentation for more details on using hashes.
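
As a quick sketch, a downloaded archive can be checked against the SHA256 digest listed above with Python's hashlib; the file path is an assumption, so adjust it to wherever you saved the file.

import hashlib

# Expected SHA256 digest for indoxjudge-0.0.5.tar.gz (copied from the list above)
EXPECTED_SHA256 = "4cec3495a448c0c0a8f1de4bb883870a6fda2d9b434abbff5e29df6768231f3e"

# Path to the downloaded archive (assumed; adjust to your download location)
path = "indoxjudge-0.0.5.tar.gz"

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("Hash OK" if digest == EXPECTED_SHA256 else "Hash mismatch!")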

File details

Details for the file indoxJudge-0.0.5-py3-none-any.whl.

File metadata

  • Download URL: indoxJudge-0.0.5-py3-none-any.whl
  • Upload date:
  • Size: 300.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.0

File hashes

Hashes for indoxJudge-0.0.5-py3-none-any.whl:

  • SHA256: 68a636adb93015afa430ffe62baaa1938e3cb1f5515d7ab6330b7daa0e36adf1
  • MD5: dfe5ed621380ab7fbaa0171d3218dc5e
  • BLAKE2b-256: b188e2ef218a88c9911ac4b6c57583803e47e6988aae428807ffdd8fd96844a5

See the PyPI documentation for more details on using hashes.
