aiter-metric
AITER is an evaluation metric for large language models (LLMs) focused on veracity.
It measures how factually consistent a generated answer is with respect to a reference text, inspired by the HTER (Human-targeted Translation Edit Rate) approach.
Method description
For each input request (the user question), we consider:
- Hypothesis: the model’s answer to the request
- Reference: an ideal answer written by a human expert
- Context: all the useful supporting information that may help with corrections
The pipeline runs in three stages:
- Off-topic filtering (LLM): using the context, an LLM removes irrelevant or out-of-scope fragments from the hypothesis, producing a filtered hypothesis that keeps only content actually addressing the request.
- Correction (LLM): using the reference and context, an LLM produces a corrected hypothesis that minimally edits the hypothesis to make it factually consistent.
- Edit-distance scoring (TER): we compute TER (Translation Edit Rate) to quantify both veracity and the tendency to digress:
  - TER(hypothesis → corrected_hypothesis): how much the original answer must change to be factually correct.
  - TER(hypothesis → filtered_hypothesis): how much unrelated content had to be removed.
  - TER(filtered_hypothesis → corrected_hypothesis): how much of the relevant content is factually correct.
Lower TER means fewer edits are needed.
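To build intuition for the scores, here is a simplified word-level edit rate. Note that real TER also allows block shifts of word sequences, and this sketch is an illustration only, not AITER's actual implementation:

```python
def word_edit_rate(hyp: str, ref: str) -> float:
    """Word-level edit distance (insertions, deletions, substitutions)
    normalized by reference length. Real TER additionally counts block
    shifts; this simplified version is for illustration only."""
    h, r = hyp.split(), ref.split()
    # Classic dynamic-programming Levenshtein over word tokens.
    prev = list(range(len(r) + 1))
    for i, hw in enumerate(h, 1):
        cur = [i]
        for j, rw in enumerate(r, 1):
            cost = 0 if hw == rw else 1
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + cost))  # substitution
        prev = cur
    return prev[-1] / max(len(r), 1)

# Turning "Berlin." into "Paris, France." takes 2 edits over 7 reference words.
hypothesis = "The Eiffel Tower is in Berlin."
corrected = "The Eiffel Tower is in Paris, France."
print(round(word_edit_rate(hypothesis, corrected), 3))  # → 0.286
```

An unchanged answer scores 0; the more the corrector has to rewrite, the closer the score gets to (and beyond) 1.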
Installation
Install from PyPI:
pip install aiter-metric
Or from source:
git clone https://github.com/dieuantoine/aiter-metric.git
cd aiter-metric
pip install -e .
Setup
This package requires access to LLM APIs from Mistral and Gemini. Before using the metric, you must set up your API keys so that the package can query these models.
You can export your keys as environment variables (recommended):
export MISTRAL_API_KEY="your_mistral_api_key"
export GEMINI_API_KEY="your_gemini_api_key"
Alternatively, you can use a .env file or pass the key when initializing the metric.
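As a pre-flight sanity check, you can verify that both variables are set before querying the APIs. This is a hypothetical helper (not part of the package); the variable names follow the export commands above:

```python
import os

def missing_llm_keys() -> list[str]:
    """Return the names of expected API-key environment variables that
    are not set. Assumes the package reads MISTRAL_API_KEY and
    GEMINI_API_KEY from the environment, as described above."""
    expected = ("MISTRAL_API_KEY", "GEMINI_API_KEY")
    return [v for v in expected if not os.environ.get(v)]

print(missing_llm_keys())  # [] when both keys are configured
```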
Quick Start
This package exposes a single entry point:
from aiter import Scorer
Inputs and Outputs
Dataframe
Scorer expects a pandas DataFrame with the following columns:
- request_id: unique identifier of the example
- request: the user question / prompt given to the conversational agent
- reference: the ideal human-written answer
- context: additional information to support correction (can be empty if none)
- hypothesis: the model's answer to evaluate
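A small validation helper can catch a malformed input frame before any LLM calls are made. `check_input_frame` is a hypothetical helper written for this guide, not a function the package provides:

```python
import pandas as pd

# Columns Scorer expects, per the list above.
REQUIRED_COLUMNS = ["request_id", "request", "reference", "context", "hypothesis"]

def check_input_frame(df: pd.DataFrame) -> None:
    """Raise early if the DataFrame is missing a column Scorer expects."""
    missing = [c for c in REQUIRED_COLUMNS if c not in df.columns]
    if missing:
        raise ValueError(f"input DataFrame is missing columns: {missing}")
```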
Version/config dictionary
You must also pass a version dictionary to select the method and language:
- CODE_VERSION: 1, 2, or 3 (recommended: 3)
- LANG: language of your data ("en", "fr")
- REFORMULATION_MODEL: the Gemini or Mistral model name to use for filtering/correction (e.g., "gemini-2.5-pro" or "mistral-medium-latest")
The method get_available_models() returns the list of all Gemini and Mistral model identifiers accepted by the REFORMULATION_MODEL parameter.
Output
After calling the reformulation() and scoring() methods, the results attribute holds a dictionary of mean scores aggregated over all evaluated examples, and the df attribute holds a pandas DataFrame aligned with your input, enriched with columns describing the processing stages and scores:
| Column | Description |
|---|---|
| filtered_hypothesis | The hypothesis after removing off-topic or irrelevant content (produced by the filtering LLM). |
| corrected_hypothesis | The minimally corrected version of the hypothesis, made factually consistent with the reference and context. |
| cor_score | Correction score = TER(filtered_hypothesis → corrected_hypothesis) |
| ot_score | Off-topic score = TER(hypothesis → filtered_hypothesis) |
| score | Global score = TER(hypothesis → corrected_hypothesis) |
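The exact structure of the results dictionary is not documented above; if, as described, it holds the mean of each score column over all examples, it could be reproduced from df like this (a sketch, not the package's code):

```python
import pandas as pd

# Toy per-example scores standing in for Scorer's output columns.
df = pd.DataFrame({
    "score":     [0.20, 0.40],
    "ot_score":  [0.00, 0.10],
    "cor_score": [0.20, 0.30],
})

# Hypothetical aggregation consistent with "aggregated mean scores":
results = df[["score", "ot_score", "cor_score"]].mean().to_dict()
print(results)  # e.g. {'score': 0.3, 'ot_score': 0.05, 'cor_score': 0.25}
```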
Example
import os
import pandas as pd
from aiter import Scorer
df = pd.DataFrame([
{
"request_id": "001",
"request": "Where is the Eiffel Tower located?",
"reference": "The Eiffel Tower is located in Paris, France.",
"context": "The Eiffel Tower is a landmark in Paris, inaugurated in 1889.",
"hypothesis": "The Eiffel Tower is in Berlin."
}
])
version = {
"CODE_VERSION": 3,
"LANG": "en",
"REFORMULATION_MODEL": "gemini-2.5-pro"
}
scorer = Scorer(
df,
version,
# api_key="YOUR_API_KEY" # if not env vars
)
scorer.reformulation()
scorer.scoring()
print(scorer.results)
print(scorer.df.head())
Repository Structure
aiter/
├── config/ # Configuration files and default settings for the APIs
├── llm_api/ # Wrappers and utilities to interact with external LLM APIs
├── metric/ # Core implementation of the metric
├── utils/ # Utility functions
└── prompts/ # Prompt templates
License
Distributed under the MIT license. See LICENSE for more details.