
TruthScore-LLM

A research-oriented Python library for evaluating the truthfulness of LLM-generated outputs based on evidence agreement, self-consistency, and retrieval coverage.

Overview

TruthScore-LLM implements a multi-dimensional scoring system that assesses the reliability of answers generated by large language models. The library evaluates answers across four key dimensions:

  • Evidence Agreement: How well retrieved evidence documents support the answer (entailment checking)
  • Self-Consistency: Internal coherence and logical consistency of the answer
  • Retrieval Coverage: Comprehensiveness of supporting evidence
  • Language Confidence: Linguistic quality and certainty indicators

These component scores are aggregated into a single truth score (0.0 to 1.0) and a categorical decision (ACCEPT, QUALIFIED, or REFUSE).
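As a rough mental model (the actual formula lives inside TruthScorer and may differ), the aggregation can be pictured as a weighted sum of the component scores followed by thresholding; the weights and thresholds below are taken from the Configuration example further down and are assumptions, not documented defaults:

# Illustrative sketch only: the real aggregation is implemented inside TruthScorer.
# Weights/thresholds mirror the Configuration example below (assumptions).
def aggregate(evidence, consistency, coverage, language,
              weights=(0.5, 0.3, 0.15, 0.05),
              accept_threshold=0.80, qualified_threshold=0.60):
    truth_score = (weights[0] * evidence
                   + weights[1] * consistency
                   + weights[2] * coverage
                   + weights[3] * language)
    if truth_score >= accept_threshold:
        decision = "ACCEPT"
    elif truth_score >= qualified_threshold:
        decision = "QUALIFIED"
    else:
        decision = "REFUSE"
    return truth_score, decision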

Installation

Development Installation

To install the library in development mode:

git clone <repository-url>
cd truthscore
pip install -e .

Future PyPI Installation

Once published, the library will be installable via:

pip install truthscore-llm

Quick Start

from truthscore import TruthScorer

# Initialize scorer
scorer = TruthScorer()

# Evaluate an answer
result = scorer.score(
    question="Does vitamin C prevent the common cold?",
    answer="Vitamin C prevents the common cold."
)

# Access results
print(f"Truth Score: {result['truth_score']:.3f}")
print(f"Decision: {result['decision']}")
print(f"Evidence Score: {result['evidence_score']:.3f}")
print(f"Consistency: {result['consistency']:.3f}")
print(f"Language Confidence: {result['language_confidence']:.3f}")
print(f"Coverage: {result['coverage']:.3f}")

Output Format

The score() method returns a dictionary with the following structure:

{
    "truth_score": float,          # Overall truth score [0.0, 1.0]
    "decision": str,               # "ACCEPT" | "QUALIFIED" | "REFUSE"
    "evidence_score": float,       # Evidence agreement [0.0, 1.0]
    "consistency": float,          # Self-consistency [0.0, 1.0]
    "language_confidence": float,  # Language confidence [0.0, 1.0]
    "coverage": float              # Retrieval coverage [0.0, 1.0]
}
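A typical way to consume this dictionary is to branch on the decision field. The routing below is a usage sketch only; deliver and request_human_review are hypothetical downstream handlers, not part of the library:

result = scorer.score(
    question="Does vitamin C prevent the common cold?",
    answer="Vitamin C prevents the common cold."
)

# Example policy for acting on the categorical decision.
if result["decision"] == "ACCEPT":
    deliver(result)                      # hypothetical downstream handler
elif result["decision"] == "QUALIFIED":
    deliver(result, add_caveats=True)    # e.g. attach an uncertainty note
else:  # "REFUSE"
    request_human_review(result)         # hypothetical escalation path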

Configuration

You can customize scoring behavior by providing a custom configuration:

from truthscore import TruthScorer, TruthScoreConfig

# Create custom configuration
config = TruthScoreConfig(
    evidence_weight=0.5,
    consistency_weight=0.3,
    coverage_weight=0.15,
    language_weight=0.05,
    accept_threshold=0.80,
    qualified_threshold=0.60
)

# Initialize scorer with custom config
scorer = TruthScorer(config=config)
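With a custom configuration in place, scoring works exactly as in the Quick Start. Note that the example weights above sum to 1.0, which is presumably the intended convention; the library's own validation rules are not documented here.

# Same API as before; only the weighting and thresholds have changed.
result = scorer.score(
    question="Does vitamin C prevent the common cold?",
    answer="Regular vitamin C intake may shorten colds slightly but does not prevent them."
)
print(f"Truth Score: {result['truth_score']:.3f}  Decision: {result['decision']}")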

Project Structure

truthscore/
├── truthscore/           # Main package
│   ├── __init__.py      # Package initialization and exports
│   ├── score.py         # TruthScorer main class
│   ├── config.py        # Configuration management
│   ├── retrieve.py      # Evidence retrieval module
│   ├── nli.py           # Natural Language Inference module
│   ├── consistency.py   # Consistency evaluation module
│   └── coverage.py      # Coverage evaluation module
├── examples/            # Usage examples
│   └── example.py
├── tests/               # Unit tests
│   └── test_score.py
├── README.md            # This file
├── pyproject.toml       # Package metadata
└── setup.cfg            # Setuptools configuration

Running Tests

python -m pytest tests/

Or using unittest:

python -m unittest tests.test_score

Running Examples

python examples/example.py

Research Disclaimer

Important: This library is provided for research purposes. The scoring mechanisms are experimental, and key components (retrieval, NLI, consistency checking) are currently placeholder implementations.

  • The current implementations use simple heuristics and are designed to be deterministic for testing purposes.
  • Do not use this library as the sole basis for critical decisions without:
    • Validating results against domain-specific ground truth
    • Replacing placeholder implementations with production-grade systems
    • Calibrating thresholds for your specific use case
    • Conducting thorough evaluation and error analysis

The library is structured to facilitate easy replacement of placeholder components with real systems (e.g., trained NLI models, vector databases for retrieval, consistency checking systems).
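For example, a production-grade evidence-agreement check could be built on an off-the-shelf NLI model. The snippet below is a standalone sketch using the Hugging Face transformers library; how such a function would be wired into truthscore's nli module is left open, so treat the integration point as an assumption:

# Standalone sketch of an entailment check with a trained NLI model.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # any MNLI-style model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

def entailment_probability(premise: str, hypothesis: str) -> float:
    """Return P(entailment) of the hypothesis (answer) given the premise (evidence)."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1)[0]
    # Look up the entailment index from the model config instead of hard-coding it.
    entail_idx = {v.lower(): k for k, v in model.config.id2label.items()}["entailment"]
    return float(probs[entail_idx])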

Contributing

Contributions are welcome! Please ensure that:

  • Code follows the existing style and structure
  • All tests pass
  • New features include appropriate tests
  • Documentation is updated

License

MIT License

Citation

If you use this library in your research, please cite:

@software{truthscore2024,
  title={TruthScore-LLM: A Research Library for Evaluating Truthfulness of LLM Outputs},
  author={TruthScore Contributors},
  year={2024},
  url={https://github.com/yourusername/truthscore-llm}
}
