# llm-metrics-lite

A lightweight, dependency-minimal evaluation toolkit for Large Language Model (LLM) output quality.

PyPI: https://pypi.org/project/llm-metrics-lite/

llm-metrics-lite provides simple, reliable metrics for evaluating text generated by LLMs. It offers coherence scoring, reference-free perplexity, groundedness checks, token statistics, latency utilities, and a clean command-line interface, all without heavy dependencies.



## Why This Library Exists

Existing LLM evaluation tools often:

- require large models or embeddings  
- are part of heavy research frameworks  
- depend on GPU or complex installations  
- lack simple, general-purpose APIs  

llm-metrics-lite was created to be:

- minimal and fast  
- easy to install anywhere  
- suitable for research and production  
- extendable and open-source  

## Features

### Core Capabilities

- **Coherence Scoring**  
  Measures similarity between consecutive sentences to estimate textual flow.

- **Reference-Free Perplexity**  
  Character-level n-gram perplexity that works without any pretrained models.

- **Groundedness and Factuality Checks**  
  Compares model output against reference context for basic factual alignment.

- **Latency Measurement**  
  Simple utilities to benchmark model inference time.

- **Token Statistics**  
  Word count, character count, and approximate token usage.

- **Command Line Interface (CLI)**  
  Evaluate text files directly in the terminal.
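
As an illustration of how reference-free perplexity can work without a pretrained model, here is a toy character-level n-gram sketch with add-one smoothing. The `char_ngram_perplexity` helper is hypothetical and shows only the general technique, not this library's actual implementation:

```python
import math
from collections import Counter

def char_ngram_perplexity(text, corpus, n=3):
    """Toy character-level n-gram perplexity with add-one smoothing."""
    # Count n-grams and their (n-1)-gram contexts over the training corpus.
    joined = " ".join(corpus)
    ngrams = Counter(joined[i:i + n] for i in range(len(joined) - n + 1))
    contexts = Counter(joined[i:i + n - 1] for i in range(len(joined) - n + 2))
    vocab = len(set(joined)) or 1

    log_prob, count = 0.0, 0
    for i in range(len(text) - n + 1):
        gram = text[i:i + n]
        # Add-one (Laplace) smoothing keeps unseen n-grams at a finite cost.
        p = (ngrams[gram] + 1) / (contexts[gram[:-1]] + vocab)
        log_prob += math.log(p)
        count += 1
    # Perplexity is the exponentiated average negative log-probability.
    return math.exp(-log_prob / max(count, 1))

corpus = ["language models generate text", "models learn from data"]
print(char_ngram_perplexity("models generate data", corpus))
```

Lower values indicate text that the n-gram statistics of the corpus explain well; the add-one smoothing is what lets the score stay finite on unseen character sequences.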

### Why It Is Lightweight

- Zero heavy dependencies  
- No transformers, no GPU required  
- Pure Python implementation  
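
As a taste of what a pure-Python metric can look like, the consecutive-sentence similarity behind coherence scoring can be sketched as a Jaccard overlap of word sets. This `coherence_score` helper is illustrative only, not the package's API or exact formula:

```python
import re

def coherence_score(text):
    """Mean Jaccard overlap between consecutive sentences' word sets."""
    # Naive sentence split; a real implementation would be more careful.
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    if len(sentences) < 2:
        return 1.0  # A single sentence is trivially coherent with itself.
    scores = []
    for a, b in zip(sentences, sentences[1:]):
        wa, wb = set(a.lower().split()), set(b.lower().split())
        scores.append(len(wa & wb) / len(wa | wb) if wa | wb else 0.0)
    return sum(scores) / len(scores)

print(coherence_score("Models learn from data. Data improves the models."))
```

Adjacent sentences that share vocabulary score higher than unrelated ones, which is the intuition behind estimating textual flow from consecutive-sentence similarity.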

## Quick Start

Install from PyPI:

```bash
pip install llm-metrics-lite
```

Install from source:

```bash
git clone https://github.com/supriyabachal/llm_metrics_lite.git
cd llm_metrics_lite
pip install -e .
```
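
Before the full example below, the token statistics feature described under Features (word count, character count, approximate token usage) can be approximated in a few lines. The four-characters-per-token divisor is a common English rule of thumb assumed here, not necessarily the library's exact formula:

```python
def token_stats(text):
    """Basic text statistics: words, characters, and a rough token estimate."""
    words = text.split()
    chars = len(text)
    # ~4 characters per token is a common rule of thumb for English text.
    approx_tokens = max(1, round(chars / 4))
    return {"words": len(words), "chars": chars, "approx_tokens": approx_tokens}

print(token_stats("Generative AI models help automate reasoning tasks."))
```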


## Examples

### Basic Python Usage

```python
from llm_metrics_lite import train_default_perplexity_model, evaluate_output

corpus = [
    "Artificial intelligence enables machines to learn from data.",
    "Language models process and generate human-like text."
]

model = train_default_perplexity_model(corpus)

output = "Generative AI models help automate reasoning tasks."
context = "AI systems can understand language and produce responses."

result = evaluate_output(
    output_text=output,
    context_text=context,
    perplexity_model=model
)

print(result)
```

## CLI Usage

Evaluate output text:

```bash
llm-metrics-lite evaluate output.txt
```

Evaluate with context reference:

```bash
llm-metrics-lite evaluate output.txt --context reference.txt
```

Show help:

```bash
llm-metrics-lite --help
```
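
The `--context` file above feeds the groundedness check described under Features. A minimal sketch of such a check, assuming a simple word-set overlap between output and context (an illustration, not necessarily the library's algorithm):

```python
def groundedness(output_text, context_text):
    """Fraction of output words that also appear in the reference context."""
    out_words = {w.strip(".,!?;:").lower() for w in output_text.split()}
    ctx_words = {w.strip(".,!?;:").lower() for w in context_text.split()}
    if not out_words:
        return 0.0
    # 1.0 means every output word is supported by the context.
    return len(out_words & ctx_words) / len(out_words)

print(groundedness(
    "AI systems generate responses.",
    "AI systems can understand language and produce responses.",
))
```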

## Roadmap

Planned enhancements include:

- Embedding-based coherence evaluation
- Semantic groundedness metrics
- Batch evaluation support
- Model-output comparison tools
- Evaluation dashboard and visualizations
- Integration helpers for RAG pipelines


## Project details

### Download files

- Source distribution: llm_metrics_lite-0.2.3.tar.gz (8.1 kB)
- Built distribution: llm_metrics_lite-0.2.3-py3-none-any.whl (9.5 kB, Python 3)

The source distribution was uploaded using Trusted Publishing via twine/6.1.0 on CPython/3.13.7.

### File hashes

Hashes for llm_metrics_lite-0.2.3.tar.gz:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | ec8ad69f94f4ccfc988702a5ec115910a92b77b1fd8e685b6a5f5d811b849bf6 |
| MD5 | 0875ed6f72fd96bc16f7ac403f303017 |
| BLAKE2b-256 | e2d8dd0aaa6e25d65678412765b01478210ab1ac39f13f2c05feedb14f4f8052 |

Hashes for llm_metrics_lite-0.2.3-py3-none-any.whl:

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 98fdcb31b8cfd88ae2d836305fab207486e431a22f4f170c7bff069f43d2a77b |
| MD5 | bc23833e7587f0330c3483e1694fa417 |
| BLAKE2b-256 | f0f240787f57148e67639494023d861d37f90b330084e38be86796ad229fbccd |

### Provenance

Attestation bundles for both files were published by python-publish.yml on supriyabachal/llm_metrics_lite. Values shown reflect the state when the release was signed and may no longer be current.
