# evalprobe
Lightweight, reliable RAG evaluation for solo AI builders. Faithfulness, relevancy, and correctness in 5 lines. Works with any LLM via litellm.
Other RAG eval libraries are heavy, brittle, or focused on enterprise. evalprobe is the smallest thing that actually works.
## Install

```bash
pip install evalprobe
```
## Use it

```python
from evalprobe import evaluate, EvalSample

sample = EvalSample(
    question="When did the Eiffel Tower open?",
    answer="It opened in 1889 and is 330 meters tall.",
    contexts=["The Eiffel Tower was completed in March 1889 for the World's Fair."],
    ground_truth="The Eiffel Tower opened on 31 March 1889.",
)

result = evaluate(sample, model="gpt-4o-mini")
for s in result.scores:
    print(f"{s.name}: {s.score:.2f}")
```

Output:

```
faithfulness: 0.50
answer_relevancy: 1.00
answer_correctness: 0.67
```
The height claim got flagged as unfaithful because the context didn't mention it. That's the whole point.
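To see where that 0.50 comes from: faithfulness is the fraction of answer claims the contexts support. The real claim extraction is done by the judge LLM, but the arithmetic for this sample works out like this (claim wording below is a hypothetical illustration, not evalprobe's actual output):

```python
# Hypothetical claim breakdown for the example answer above.
# In evalprobe the judge LLM extracts and verifies claims; this
# sketch only reproduces the final arithmetic.
claims = [
    "The Eiffel Tower opened in 1889",      # supported by the context
    "The Eiffel Tower is 330 meters tall",  # not mentioned in the context
]
supported = [True, False]

faithfulness = sum(supported) / len(claims)
print(f"faithfulness: {faithfulness:.2f}")  # faithfulness: 0.50
```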
## CLI

Evaluate a JSONL file of samples and write results:

```bash
export OPENAI_API_KEY=sk-...
evalprobe eval samples.jsonl \
  --model gpt-4o-mini \
  --output results.jsonl
```

Each line of samples.jsonl is one sample:

```json
{"question": "...", "answer": "...", "contexts": ["..."], "ground_truth": "..."}
```
You get a per-metric mean summary on stderr and full per-sample results in results.jsonl.
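If you're generating the input file from code, plain `json.dumps` per line is all it takes (field names as in the schema above; nothing here is evalprobe-specific):

```python
import json

# One dict per sample, using the same fields the CLI expects.
samples = [
    {
        "question": "When did the Eiffel Tower open?",
        "answer": "It opened in 1889 and is 330 meters tall.",
        "contexts": ["The Eiffel Tower was completed in March 1889 for the World's Fair."],
        "ground_truth": "The Eiffel Tower opened on 31 March 1889.",
    },
]

with open("samples.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")  # one JSON object per line
```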
## The metrics
- faithfulness — fraction of answer claims actually supported by the retrieved contexts. Catches hallucinations.
- answer_relevancy — how directly the answer addresses the question. Catches evasive or off-topic answers.
- answer_correctness — F1 score of facts in the answer vs. ground truth. Catches factual errors. Requires ground_truth.
All three return scores in [0.0, 1.0]. Higher is better.
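As a rough illustration of the correctness F1: treat the answer and the ground truth as sets of facts, then combine precision and recall. The fact sets below are made up for illustration (the real extraction is LLM-based, so actual scores will differ):

```python
# Hypothetical fact sets; evalprobe's judge LLM extracts the real ones.
answer_facts = {"opened in 1889", "330 meters tall"}
truth_facts = {"opened in 1889", "opened on 31 March"}

tp = len(answer_facts & truth_facts)        # facts present in both: 1
precision = tp / len(answer_facts)          # 1/2
recall = tp / len(truth_facts)              # 1/2
f1 = 2 * precision * recall / (precision + recall)
print(f"answer_correctness: {f1:.2f}")      # answer_correctness: 0.50
```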
## Any LLM, any provider

evalprobe uses litellm under the hood, so any model name litellm understands works:

```python
evaluate(sample, model="gpt-4o-mini")
evaluate(sample, model="groq/llama-3.3-70b-versatile")
evaluate(sample, model="anthropic/claude-3-5-sonnet-latest")
evaluate(sample, model="ollama/llama3.1")
```
Set the matching environment variable for your provider: OPENAI_API_KEY, GROQ_API_KEY, ANTHROPIC_API_KEY, etc.
## Why evalprobe

| | evalprobe | ragas | langsmith |
|---|---|---|---|
| Install size | small | heavy | heavy |
| LLM provider | any (litellm) | partial | partial |
| Errors | clear messages | sometimes silent | n/a |
| Hosted dashboard | coming | no | yes (paid) |
| Built for | solo devs | research/teams | enterprise |
## Status
Pre-alpha. Things will change. Star to follow.
## License
MIT