
Evaluation engine: RAGAS, DeepEval, LLM-as-Judge, and audit report generation

Project description

rag-forge-evaluator

RAG pipeline evaluation engine for the RAG-Forge toolkit: RAGAS, DeepEval, LLM-as-Judge, and the RAG Maturity Model.

Installation

pip install rag-forge-evaluator

Usage

from rag_forge_evaluator.assess import RMMAssessor

assessor = RMMAssessor()
result = assessor.assess(config={
    "retrieval_strategy": "hybrid",
    "input_guard_configured": True,
    "output_guard_configured": True,
})
print(result.badge)  # e.g., "RMM-3 Better Trust"
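For intuition about what a maturity score measures, the toy function below maps config flags to a level in the same spirit. It is an illustrative sketch only: the key names come from the example above, but the thresholds and ordering are assumptions, not rag-forge-evaluator's actual scoring rules.

```python
# Illustrative only: a toy mapping from pipeline config flags to a
# maturity level, loosely mirroring what an RMM-style assessor might do.
# The thresholds are assumptions, not rag-forge-evaluator's real rules.
def toy_rmm_level(config: dict) -> int:
    level = 0
    if config.get("retrieval_strategy"):              # any retrieval at all
        level = 1
    if config.get("retrieval_strategy") == "hybrid":  # dense + sparse retrieval
        level = 2
    if config.get("input_guard_configured") and config.get("output_guard_configured"):
        level = 3                                     # guarded input and output
    return level

print(toy_rmm_level({
    "retrieval_strategy": "hybrid",
    "input_guard_configured": True,
    "output_guard_configured": True,
}))  # 3
```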

Features

  • RMM (RAG Maturity Model) scoring (levels 0-5)
  • RAGAS, DeepEval, and LLM-as-Judge evaluators
  • Golden set management with traffic sampling
  • Cost estimation
  • HTML and PDF report generation
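On the golden-set feature: sampling evaluation cases from production traffic can be as simple as a seeded random draw, as in the sketch below. `sample_golden_set` is a hypothetical helper for illustration, not part of the package's API.

```python
import random

def sample_golden_set(traffic: list[str], k: int = 50, seed: int = 42) -> list[str]:
    """Draw a reproducible random sample of production queries to seed a
    golden evaluation set. Illustrative helper, not rag-forge-evaluator's
    actual API; a fixed seed makes repeated audits comparable."""
    rng = random.Random(seed)
    return rng.sample(traffic, min(k, len(traffic)))

queries = [f"question-{i}" for i in range(1000)]
golden = sample_golden_set(queries, k=50)
print(len(golden))  # 50
```

Seeding the sampler is the important design choice here: the same traffic snapshot then yields the same golden set on every run, so metric changes reflect pipeline changes rather than sampling noise.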

Bring your own judge provider

rag-forge-evaluator ships with Claude and OpenAI judges out of the box, but the JudgeProvider protocol is intentionally minimal so you can plug in any LLM — Gemini, Cohere, Bedrock, Ollama, vLLM, or a private model behind your own gateway. Implementing one is ~20 lines:

# my_gemini_judge.py
import os
import google.generativeai as genai


class GeminiJudge:
    """Minimal judge implementation backed by Google Gemini."""

    def __init__(self, model: str = "gemini-2.5-pro", api_key: str | None = None) -> None:
        key = api_key or os.environ.get("GOOGLE_API_KEY")
        if not key:
            raise ValueError("GOOGLE_API_KEY not set")
        genai.configure(api_key=key)
        self._model_name = model
        self._client = genai.GenerativeModel(model)

    def judge(self, system_prompt: str, user_prompt: str) -> str:
        response = self._client.generate_content(
            [system_prompt, user_prompt],
            generation_config={"max_output_tokens": 4096},
        )
        return response.text or ""

    def model_name(self) -> str:
        return self._model_name

Wire it into an audit by passing the instance directly to LLMJudgeEvaluator:

from my_gemini_judge import GeminiJudge
from rag_forge_evaluator.metrics.llm_judge import LLMJudgeEvaluator

judge = GeminiJudge(model="gemini-2.5-pro")
evaluator = LLMJudgeEvaluator(judge=judge)
result = evaluator.evaluate(samples)

The protocol contract:

from typing import Protocol

class JudgeProvider(Protocol):
    def judge(self, system_prompt: str, user_prompt: str) -> str: ...
    def model_name(self) -> str: ...

That's it. Anything that responds to those two methods works. Implementation hints:

  • Always set max_tokens >= 4096 for faithfulness/hallucination metrics. Long responses produce 30-50 enumerated claims; smaller budgets truncate the JSON mid-array and the metric ends up skipped.
  • Wrap your client with retry logic for transient 429/5xx. The Anthropic and OpenAI SDKs honor a max_retries constructor arg with built-in exponential backoff — most provider SDKs offer something similar.
  • Return the raw response text, including any prose around the JSON. The shared response parser handles code fences, leading prose, trailing prose, and truncated output, so you don't need to clean anything up.
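The retry hint can be applied generically when your provider's SDK has no built-in retries: the sketch below wraps any JudgeProvider-shaped object in exponential backoff without touching the underlying client. Catching bare `Exception` is a stand-in; in practice you would catch your SDK's specific rate-limit and server-error types.

```python
import time

class RetryingJudge:
    """Wrap any JudgeProvider-shaped object with simple exponential backoff.

    The wrapper satisfies the same two-method protocol, so it drops into
    LLMJudgeEvaluator unchanged. Retrying on bare Exception is a stand-in
    for your SDK's transient-error types (429s, 5xx).
    """

    def __init__(self, inner, max_retries: int = 3, base_delay: float = 1.0):
        self._inner = inner
        self._max_retries = max_retries
        self._base_delay = base_delay

    def judge(self, system_prompt: str, user_prompt: str) -> str:
        for attempt in range(self._max_retries + 1):
            try:
                return self._inner.judge(system_prompt, user_prompt)
            except Exception:
                if attempt == self._max_retries:
                    raise  # out of retries; surface the original error
                time.sleep(self._base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

    def model_name(self) -> str:
        return self._inner.model_name()
```

Because the wrapper is itself a valid judge, composing it is one line, e.g. `LLMJudgeEvaluator(judge=RetryingJudge(GeminiJudge()))`.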

First-party Gemini, Bedrock, and Ollama judges are tracked for v0.1.2.

License

MIT

Download files

Download the file for your platform.

Source Distribution

rag_forge_evaluator-0.1.3.tar.gz (57.4 kB)

Uploaded Source

Built Distribution


rag_forge_evaluator-0.1.3-py3-none-any.whl (54.8 kB)

Uploaded Python 3

File details

Details for the file rag_forge_evaluator-0.1.3.tar.gz.

File metadata

  • Size: 57.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for rag_forge_evaluator-0.1.3.tar.gz

  • SHA256: 60bda3bd57f09e29d0a9e68df0f7cd35c445bcedc420837342478651b80ac4d5
  • MD5: ee719d77c26e003c81bbc0221b096e52
  • BLAKE2b-256: 1e54a5e5ac4a8749d830817ebe2d9a6266376882a93c1302cb566521febcbe0d

Provenance

The following attestation bundles were made for rag_forge_evaluator-0.1.3.tar.gz:

Publisher: publish.yml on hallengray/rag-forge

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file rag_forge_evaluator-0.1.3-py3-none-any.whl.

File metadata

File hashes

Hashes for rag_forge_evaluator-0.1.3-py3-none-any.whl

  • SHA256: cd02dbc9e4b3daf2f8990dfcd6e34f085330cd35e912bafdcf334cbc3aeacc72
  • MD5: 818a3a0f105d278cb15081711ec9c80f
  • BLAKE2b-256: e8b46d5d427c7226cb6532495ed3ffc6dd442d43bfa8f7f6f8c15dc316970e4a

Provenance

The following attestation bundles were made for rag_forge_evaluator-0.1.3-py3-none-any.whl:

Publisher: publish.yml on hallengray/rag-forge

