
Project description

TruthScore

`truthscore` is a fast, modular reimplementation of RAGAS's FactualCorrectness metric, supporting both open-weight and hosted LLMs. It evaluates factual consistency between a user response and a reference passage by decomposing the response into atomic claims and verifying them with Natural Language Inference (NLI).

It is a component of the trutheval framework and is intended for scalable, cost-efficient factuality evaluation.


🔍 What it does

  1. Claim Decomposition: The LLM-generated response is split into atomic factual claims using a lightweight LLM.
  2. Entailment Scoring: Each claim is passed to an NLI model with the reference passage as context.
  3. Final Score: The score reflects how many claims are entailed by the context, in the range [0.0, 1.0].

For more details, see FactualCorrectness.
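The three steps above reduce to a simple aggregation: the final score is the fraction of decomposed claims that the NLI model judges entailed by the reference. A minimal sketch of that aggregation (an illustration, not the library's actual implementation — `factual_correctness` is a hypothetical helper):

```python
# Given per-claim entailment verdicts from an NLI model, the final score is
# the fraction of claims entailed by the reference, in [0.0, 1.0].

def factual_correctness(verdicts: list[bool]) -> float:
    """Fraction of atomic claims entailed by the reference passage."""
    if not verdicts:
        return 0.0
    return sum(verdicts) / len(verdicts)

# Example: 3 of 4 decomposed claims are entailed by the reference.
print(factual_correctness([True, True, True, False]))  # 0.75
```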


✨ Key Features

  • 🔁 RAGAS-compatible: Faithfully reimplements the FactualCorrectness metric logic from RAGAS
  • Open-weight LLM support: Works with open-weight models (e.g., Gemma, LLaMA, Mistral via Ollama)
  • 🧠 Plug-and-play: Swap in custom NLI models
  • ⚙️ GPU-accelerated: Recommended for claim decomposition + NLI
  • 🧪 Evaluated: Competitive benchmark results (see trutheval)

📦 Installation

For full open-weight support (LLM served locally via Ollama + CrossEncoder NLI):

pip install truthscore[open]

Otherwise, install the lightweight base package and add the dependencies that best fit your setup:

pip install truthscore

For installing Ollama itself, see Ollama.

🚀 Quick Start

💡 Open-weight (fully local)

from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper

from truthscore import OpenFactualCorrectness

test_data = {
    "user_input": "What happened in Q3 2024?",
    "reference": "The company saw an 8% rise in Q3 2024, driven by strong marketing and product efforts.",
    "response": "The company experienced an 8% increase in Q3 2024 due to effective marketing strategies and product efforts."
}
sample = SingleTurnSample(**test_data)

evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm)
score = metric.single_turn_score(sample)

print(score)  # e.g. 1.0

☁️ Hosted LLM (e.g., OpenAI)

from langchain_openai import ChatOpenAI
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper

from truthscore import OpenFactualCorrectness

evaluator_llm = LangchainLLMWrapper(ChatOpenAI())
metric = OpenFactualCorrectness(llm=evaluator_llm)

# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))

⚙️ Custom NLI Models

import torch
from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from sentence_transformers import CrossEncoder

from truthscore import OpenFactualCorrectness

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
nli_model = CrossEncoder("cross-encoder/nli-deberta-v3-large")
nli_model.model.to(device)

evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm, nli_model=nli_model)

# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))
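For context on what a swapped-in CrossEncoder provides: NLI cross-encoders such as `cross-encoder/nli-deberta-v3-large` emit one logit per label for each (premise, hypothesis) pair, and a claim counts as entailed when the entailment label wins. Below is a hedged sketch of that post-processing, assuming the label order `['contradiction', 'entailment', 'neutral']` documented for the nli-deberta-v3 family; `is_entailed` is a hypothetical helper, not part of the truthscore API:

```python
import math

# Assumed label order for the nli-deberta-v3 cross-encoder family.
LABELS = ["contradiction", "entailment", "neutral"]

def softmax(logits: list[float]) -> list[float]:
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def is_entailed(logits: list[float]) -> bool:
    """Treat a (reference, claim) pair as entailed iff 'entailment' wins."""
    probs = softmax(logits)
    return LABELS[probs.index(max(probs))] == "entailment"

# Example logits where the entailment label dominates:
print(is_entailed([-2.1, 4.3, 0.2]))  # True
```

Any model exposing comparable per-label scores can be dropped in as `nli_model`, which is what makes the NLI component pluggable.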

📊 Background

This metric was evaluated on a 500-example benchmark built with truthbench, which applies perturbation levels A0–A4 to the Google Natural Questions dataset.

See full results in the trutheval project.
