TruthScore
`truthscore` is a fast, modular reimplementation of RAGAS's FactualCorrectness metric, supporting both open-weight and hosted LLMs. It evaluates factual consistency between a user response and a reference passage by breaking the response down into claims and verifying them with Natural Language Inference (NLI).
It is a component of the trutheval framework and is intended for scalable, cost-efficient factuality evaluation.
🔍 What it does
- Claim Decomposition: The LLM-generated response is split into atomic factual claims using a lightweight LLM.
- Entailment Scoring: Each claim is passed to an NLI model with the reference passage as context.
- Final Score: The score reflects how many claims are entailed by the context, in the range [0.0, 1.0] (see the sketch below).
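The sketch below illustrates this pipeline with placeholder helpers. In truthscore itself, decomposition is handled by the evaluator LLM and entailment by an NLI model, so the helper names here are hypothetical and not part of the truthscore API.

```python
# Minimal sketch of the scoring logic (hypothetical helpers, not the truthscore API):
# 1) split the response into atomic claims, 2) check whether each claim is entailed
# by the reference, 3) report the entailed fraction in [0.0, 1.0].
from typing import List

def decompose_into_claims(response: str) -> List[str]:
    # Placeholder: truthscore delegates this step to the evaluator LLM.
    return [s.strip() for s in response.split(".") if s.strip()]

def is_entailed(claim: str, reference: str) -> bool:
    # Placeholder: truthscore delegates this step to an NLI model.
    return claim.lower() in reference.lower()

def factual_correctness(response: str, reference: str) -> float:
    claims = decompose_into_claims(response)
    if not claims:
        return 0.0
    return sum(is_entailed(c, reference) for c in claims) / len(claims)
```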
For more details, see FactualCorrectness.
✨ Key Features
- 🔁 RAGAS-compatible: Faithfully reimplements the FactualCorrectness metric logic from RAGAS
- ✅ Open-weight LLM support: Works with open-weight models (e.g., Gemma, LLaMA, Mistral via Ollama)
- 🧠 Plug-and-play: Swap in custom NLI models
- ⚙️ GPU-accelerated: Recommended for claim decomposition + NLI
- 🧪 Evaluated: Competitive benchmark results (see trutheval)
📦 Installation
For full open-weight support (LLM served with Ollama + a CrossEncoder NLI model):
pip install truthscore[open]
Otherwise, use the lightweight install and pick the dependencies that work best for you:
pip install truthscore
For Ollama installation instructions, see Ollama.
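If you go the fully local route, you can sanity-check that Ollama is reachable before running the quick start below; a minimal sketch, assuming the gemma3:27b model has already been pulled:

```python
from langchain_ollama import OllamaLLM

# Assumes Ollama is running locally and `ollama pull gemma3:27b` has been done.
llm = OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434")
print(llm.invoke("Reply with the single word: ok"))
```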
🚀 Quick Start
💡 Open-weight (fully local)
from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from truthscore import OpenFactualCorrectness
test_data = {
"user_input": "What happened in Q3 2024?",
"reference": "The company saw an 8% rise in Q3 2024, driven by strong marketing and product efforts.",
"response": "The company experienced an 8% increase in Q3 2024 due to effective marketing strategies and product efforts."
}
sample = SingleTurnSample(**test_data)
evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm)
score = metric.single_turn_score(sample)
print(score) # e.g. 1.0
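To score more than one example, you can simply loop over samples with the same metric; a minimal sketch reusing the objects defined above (the batch contents are placeholders to extend yourself):

```python
# Minimal batch sketch: reuse `metric` from above over a list of samples.
samples = [SingleTurnSample(**test_data)]  # extend with your own samples
scores = [metric.single_turn_score(s) for s in samples]
print(sum(scores) / len(scores))  # mean factual correctness over the batch
```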
☁️ Hosted LLM (e.g., OpenAI)
from langchain_openai import ChatOpenAI
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from truthscore import OpenFactualCorrectness
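import os

# ChatOpenAI reads credentials from the OPENAI_API_KEY environment variable;
# the value below is a placeholder, replace it with your real key (or set the
# variable in your shell instead).
os.environ.setdefault("OPENAI_API_KEY", "sk-...")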
evaluator_llm = LangchainLLMWrapper(ChatOpenAI())
metric = OpenFactualCorrectness(llm=evaluator_llm)
# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))
⚙️ Custom NLI Models
import torch
from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from sentence_transformers import CrossEncoder
from truthscore import OpenFactualCorrectness
# Load a custom NLI cross-encoder and move it to the GPU if one is available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
nli_model = CrossEncoder("cross-encoder/nli-deberta-v3-large")
nli_model.model.to(device)
evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm, nli_model=nli_model)
# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))
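Under the hood, a CrossEncoder scores (premise, hypothesis) pairs, and the highest-scoring class decides entailment. The snippet below shows what a single check looks like in isolation; the three-way label order follows the cross-encoder/nli-deberta-v3-large model card and should be treated as an assumption here, not something truthscore exposes.

```python
import numpy as np
from sentence_transformers import CrossEncoder

nli = CrossEncoder("cross-encoder/nli-deberta-v3-large")
pair = (
    "The company saw an 8% rise in Q3 2024, driven by strong marketing and product efforts.",
    "The company experienced an 8% increase in Q3 2024.",
)
logits = nli.predict([pair])  # shape (1, 3): one row of class scores per pair
labels = ["contradiction", "entailment", "neutral"]  # assumed label order
print(labels[int(np.argmax(logits[0]))])  # likely "entailment" for this supported claim
```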
📊 Background
The metric was evaluated on a 500-example benchmark built with truthbench, applying perturbation levels A0–A4 to the Google Natural Questions dataset.
See full results in the trutheval project.
File details
Details for the file truthscore-0.2.0.tar.gz.
File metadata
- Download URL: truthscore-0.2.0.tar.gz
- Upload date:
- Size: 5.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.6 Linux/5.15.167.4-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 320ff9f8d90f247b0701ca19f7c9086d84373811ad7ba240737cc5d235b4b75e |
| MD5 | 72a631edbca8603ed845b131b8818f34 |
| BLAKE2b-256 | 0ac1859efe0bd799018c0df25d73cc5cd05488fb3c4912efdf7bcf0a0f1c1dfa |
File details
Details for the file truthscore-0.2.0-py3-none-any.whl.
File metadata
- Download URL: truthscore-0.2.0-py3-none-any.whl
- Upload date:
- Size: 6.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: poetry/1.8.2 CPython/3.11.6 Linux/5.15.167.4-microsoft-standard-WSL2
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | c5d5cee4d63ef710817766699804d267ec0c51f30bcc6a6f29eb88378e676a24 |
| MD5 | b44705f5004d89da86dddf8ba0055096 |
| BLAKE2b-256 | 212f5f2163ab91e0aaeae78ac425be271d551312ae90739ab015bf946d9c103e |