
TruthScore

`truthscore` is a fast, modular reimplementation of RAGAS's FactualCorrectness metric, supporting both open-weight and hosted LLMs. It evaluates factual consistency between a user response and a reference passage by decomposing the response into atomic claims and verifying each claim with Natural Language Inference (NLI).

It is a component of the trutheval framework and is intended for scalable, cost-efficient factuality evaluation.


🔍 What it does

  1. Claim Decomposition: The LLM-generated response is split into atomic factual claims using a lightweight LLM.
  2. Entailment Scoring: Each claim is passed to an NLI model with the reference passage as context.
  3. Final Score: The score reflects how many claims are entailed by the context, in the range [0.0, 1.0].
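The three steps above can be sketched in plain Python. The helpers below (`decompose_claims`, `nli_entails`) are simplified stand-ins for the LLM decomposer and NLI model that `truthscore` wires in for you; they are not the library's actual API:

```python
def decompose_claims(response: str) -> list[str]:
    # Stand-in: in truthscore, a lightweight LLM splits the
    # response into atomic factual claims.
    return [s.strip() for s in response.split(".") if s.strip()]

def nli_entails(reference: str, claim: str) -> bool:
    # Stand-in: in truthscore, an NLI model judges whether the
    # reference passage entails the claim.
    return all(word in reference.lower() for word in claim.lower().split()[:3])

def factual_correctness(response: str, reference: str) -> float:
    # Fraction of claims entailed by the reference, in [0.0, 1.0].
    claims = decompose_claims(response)
    if not claims:
        return 0.0
    entailed = sum(nli_entails(reference, c) for c in claims)
    return entailed / len(claims)
```

This mirrors the scoring shape only; the real metric delegates decomposition to the configured LLM and entailment to the configured NLI model.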

For more details, see FactualCorrectness.


✨ Key Features

  • 🔁 RAGAS-compatible: Faithfully reimplements the FactualCorrectness metric logic from RAGAS
  • Open-weight LLM support: Works with open-weight models (e.g., Gemma, LLaMA, Mistral via Ollama)
  • 🧠 Plug-and-play: Swap in custom NLI models
  • ⚙️ GPU-accelerated: Recommended for claim decomposition + NLI
  • 🧪 Evaluated: Competitive benchmark results (see trutheval)

📦 Installation

For full open-weight support (a local LLM served via Ollama plus a CrossEncoder NLI model):

pip install "truthscore[open]"

Otherwise, install the lightweight base package and add only the optional dependencies that fit your setup:

pip install truthscore

To install Ollama itself, see the Ollama documentation.

🚀 Quick Start

💡 Open-weight (fully local)

from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper

from truthscore import OpenFactualCorrectness

test_data = {
    "user_input": "What happened in Q3 2024?",
    "reference": "The company saw an 8% rise in Q3 2024, driven by strong marketing and product efforts.",
    "response": "The company experienced an 8% increase in Q3 2024 due to effective marketing strategies and product efforts."
}
sample = SingleTurnSample(**test_data)

evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm)
score = metric.single_turn_score(sample)

print(score)  # e.g. 1.0

☁️ Hosted LLM (e.g., OpenAI)

from langchain_openai import ChatOpenAI
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper

from truthscore import OpenFactualCorrectness

evaluator_llm = LangchainLLMWrapper(ChatOpenAI())
metric = OpenFactualCorrectness(llm=evaluator_llm)

# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))

⚙️ Custom NLI Models

import torch
from langchain_ollama import OllamaLLM
from ragas import SingleTurnSample
from ragas.llms import LangchainLLMWrapper
from sentence_transformers import CrossEncoder

from truthscore import OpenFactualCorrectness

# CrossEncoder accepts a device argument directly
device = "cuda" if torch.cuda.is_available() else "cpu"
nli_model = CrossEncoder("cross-encoder/nli-deberta-v3-large", device=device)

evaluator_llm = LangchainLLMWrapper(OllamaLLM(model="gemma3:27b", base_url="http://localhost:11434"))
metric = OpenFactualCorrectness(llm=evaluator_llm, nli_model=nli_model)

# test_data same as above
score = metric.single_turn_score(SingleTurnSample(**test_data))

📊 Background

This metric was evaluated on a 500-example benchmark built with truthbench, applying perturbation levels A0–A4 to the Google Natural Questions dataset.

See full results in the trutheval project.
