evalprobe

Lightweight, reliable RAG evaluation for solo AI builders. Faithfulness, relevancy, and correctness in 5 lines. Works with any LLM via litellm.

Try it in your browser →

Other RAG eval libraries are heavy, brittle, or focused on enterprise. evalprobe is the smallest thing that actually works.

Install

pip install evalprobe

Use it

from evalprobe import evaluate, EvalSample

sample = EvalSample(
    question="When did the Eiffel Tower open?",
    answer="It opened in 1889 and is 330 meters tall.",
    contexts=["The Eiffel Tower was completed in March 1889 for the World's Fair."],
    ground_truth="The Eiffel Tower opened on 31 March 1889.",
)

result = evaluate(sample, model="gpt-4o-mini")

for s in result.scores:
    print(f"{s.name}: {s.score:.2f}")

Output:

faithfulness: 0.50
answer_relevancy: 1.00
answer_correctness: 0.67

The height claim got flagged as unfaithful because the context didn't mention it. That's the whole point.
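The arithmetic behind that 0.50 is simple: faithfulness is the fraction of answer claims the context supports. A minimal sketch, with the claim extraction (which evalprobe delegates to the judge LLM) written out by hand for illustration:

```python
# Two claims extracted from the answer; the judge checks each
# against the retrieved context.
claims = {
    "The Eiffel Tower opened in 1889": True,       # supported by the context
    "The Eiffel Tower is 330 meters tall": False,  # context never mentions height
}

supported = sum(claims.values())
faithfulness = supported / len(claims)
print(f"faithfulness: {faithfulness:.2f}")  # 1 of 2 claims supported -> 0.50
```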

CLI

Evaluate a JSONL file of samples and write results:

export OPENAI_API_KEY=sk-...

evalprobe eval samples.jsonl \
  --model gpt-4o-mini \
  --output results.jsonl

Each line of samples.jsonl is one sample:

{"question": "...", "answer": "...", "contexts": ["..."], "ground_truth": "..."}
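To build samples.jsonl programmatically, write one JSON object per line in the format above. A stdlib-only sketch (the sample content reuses the earlier example):

```python
import json

samples = [
    {
        "question": "When did the Eiffel Tower open?",
        "answer": "It opened in 1889 and is 330 meters tall.",
        "contexts": ["The Eiffel Tower was completed in March 1889 for the World's Fair."],
        "ground_truth": "The Eiffel Tower opened on 31 March 1889.",
    },
]

# JSON Lines: one json.dumps per line, no trailing commas or wrapping array.
with open("samples.jsonl", "w") as f:
    for sample in samples:
        f.write(json.dumps(sample) + "\n")
```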

You get a per-metric mean summary on stderr and full per-sample results in results.jsonl.
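If you want those per-metric means yourself, they are a one-pass aggregation over the results file. A stdlib sketch, assuming each result line carries a "scores" list of {"name", "score"} objects (the exact output schema may differ):

```python
import json
from collections import defaultdict

def metric_means(lines):
    """Mean score per metric across JSONL result lines."""
    totals, counts = defaultdict(float), defaultdict(int)
    for line in lines:
        for s in json.loads(line)["scores"]:
            totals[s["name"]] += s["score"]
            counts[s["name"]] += 1
    return {name: totals[name] / counts[name] for name in totals}

# e.g.: means = metric_means(open("results.jsonl"))
```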

The metrics

  • faithfulness — fraction of answer claims actually supported by the retrieved contexts. Catches hallucinations.
  • answer_relevancy — how directly the answer addresses the question. Catches evasive or off-topic answers.
  • answer_correctness — F1 score of facts in the answer vs. ground truth. Catches factual errors. Requires ground_truth.

All three return scores in [0.0, 1.0]. Higher is better.
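For intuition on answer_correctness: fact-level F1 is the harmonic mean of precision (what share of the answer's facts are true) and recall (what share of ground-truth facts the answer covers). One plausible hand-computed reading of the 0.67 in the earlier example, with illustrative fact sets (in practice the judge LLM extracts the facts):

```python
answer_facts = {"opened in 1889", "330 meters tall"}
truth_facts = {"opened in 1889"}  # treating the date as the one ground-truth fact

tp = len(answer_facts & truth_facts)         # facts in both: 1
precision = tp / len(answer_facts)           # 1/2: the height claim is unsupported
recall = tp / len(truth_facts)               # 1/1: the date is covered
f1 = 2 * precision * recall / (precision + recall)
print(f"answer_correctness (F1): {f1:.2f}")  # -> 0.67
```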

Any LLM, any provider

evalprobe uses litellm under the hood, so any model name litellm understands works:

evaluate(sample, model="gpt-4o-mini")
evaluate(sample, model="groq/llama-3.3-70b-versatile")
evaluate(sample, model="anthropic/claude-3-5-sonnet-latest")
evaluate(sample, model="ollama/llama3.1")

Set the corresponding OPENAI_API_KEY, GROQ_API_KEY, ANTHROPIC_API_KEY, etc.

Why evalprobe

                   evalprobe        ragas             langsmith
Install size       small            heavy             heavy
LLM provider       any (litellm)    partial           partial
Errors             clear messages   sometimes silent  n/a
Hosted dashboard   coming           no                yes (paid)
Built for          solo devs        research/teams    enterprise

Status

Pre-alpha. Things will change. Star to follow.

License

MIT
