# hallucination-risk-meter

Estimate hallucination risk in LLM answers from uncertainty language, unsupported specifics, citations, and context coverage. Zero runtime dependencies.

Python port of @mukundakatta/hallucination-risk-meter. The JS sibling documents the original API; this README sticks to the Python surface.
## Install

```
pip install hallucination-risk-meter
```
## Usage

```python
from hallucination_risk_meter import score

answer = "The company earned $42 million in 2023."
ctx = "Acme Inc. reported revenue of $40-44 million for fiscal year 2023."

r = score(answer, context=ctx, citations=[{"id": "1"}])
r.score     # float in [0.0, 1.0]
r.signals   # list[str] -- triggered heuristics
r.severity  # 'low' | 'medium' | 'high'
```
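A common way to consume the result is to gate answers on `severity`. The sketch below is our own guardrail pattern, not part of the library; it uses a `namedtuple` stand-in for the result object so it runs without the package installed:

```python
from collections import namedtuple

# Stand-in for the library's result object so this sketch is self-contained;
# the real RiskScore exposes the same three fields.
RiskScore = namedtuple("RiskScore", ["score", "signals", "severity"])

def guard(result, fallback="I couldn't verify that claim against the sources."):
    """Hypothetical guardrail: replace answers the meter flags as high risk."""
    if result.severity == "high":
        return fallback
    return None  # pass the original answer through unchanged

guard(RiskScore(0.8, ["unsourced_specifics"], "high"))  # returns the fallback
guard(RiskScore(0.2, [], "low"))                        # returns None
```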
## Signals

Each triggered signal contributes a fixed weight to the final score (clamped to [0, 1]):

| Signal | Weight | Triggers when... |
|---|---|---|
| `uncertainty_language` | 0.10 | Hedge phrases like "I think", "may", "possibly", "not sure" appear. |
| `unsourced_specifics` | 0.30 | Specific numbers, percentages, currency, years, or dates appear in an uncited sentence. |
| `confident_overreach` | 0.20 | "definitely" / "guaranteed" / "always" appear without supporting context overlap. |
| `unsourced_named_entities` | 0.20 | A capitalized multi-word entity appears uncited and is absent from the context. |
| `length_disproportionate` | 0.20 | The answer is roughly 3x longer than the supplied context. |
| `no_citations` | 0.15 | The answer asserts something factual-looking but no citations were supplied. |
## Severity buckets

| Score | Severity |
|---|---|
| score < 0.34 | `"low"` |
| 0.34 - 0.66 | `"medium"` |
| score >= 0.67 | `"high"` |
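The weight-then-bucket scheme above can be sketched as follows. This is an illustrative reimplementation of the documented behaviour, not the library's source:

```python
# Weight table and bucketing as documented above (illustrative, not the
# library's actual code).
WEIGHTS = {
    "uncertainty_language": 0.10,
    "unsourced_specifics": 0.30,
    "confident_overreach": 0.20,
    "unsourced_named_entities": 0.20,
    "length_disproportionate": 0.20,
    "no_citations": 0.15,
}

def combine(signals):
    """Sum the fixed weights of triggered signals and clamp to [0, 1]."""
    raw = sum(WEIGHTS.get(name, 0.0) for name in signals)
    return min(1.0, max(0.0, raw))

def severity(score):
    """Bucket a score into the documented low/medium/high bands."""
    if score < 0.34:
        return "low"
    if score < 0.67:
        return "medium"
    return "high"

round(combine(["unsourced_specifics", "no_citations"]), 2)  # 0.45
severity(0.45)                                              # 'medium'
```

Note that all six weights sum to 1.15, so an answer tripping every heuristic still clamps to a score of 1.0.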
## Custom signals

Inject precomputed signals from your own classifier:

```python
score("...", signals=["nli_contradiction", "self_consistency_drop"])
```

Names not in the built-in weight table contribute 0.0 but still appear in `r.signals` for downstream logging.
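That pass-through rule can be sketched like this (our own illustration, with an abbreviated weight table): unknown names add no weight but survive in the reported signal list.

```python
# Unknown signal names contribute 0.0 weight but are kept for logging.
BUILTIN_WEIGHTS = {"no_citations": 0.15}  # abbreviated table for illustration

def merge(triggered, extra):
    """Combine built-in and injected signals into (score, all_signals)."""
    signals = list(triggered) + list(extra)
    score = min(1.0, sum(BUILTIN_WEIGHTS.get(s, 0.0) for s in signals))
    return score, signals

merge(["no_citations"], ["nli_contradiction"])
# -> (0.15, ['no_citations', 'nli_contradiction'])
```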
## API differences from the JS sibling

- Returns a `RiskScore` dataclass with `score`, `signals`, and `severity` instead of the JS `{risk, reasons, likelyHallucinated}` object.
- Adds the `signals=` parameter for injecting upstream-detector signals.
- Adds `severity` bucketing (low/medium/high) for convenient guardrail thresholds.

See the JS sibling's README for the full design notes.
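If you are porting code that expects the JS result shape, a small shim can reshape the Python dataclass. This is our own sketch with a stand-in `RiskScore`; mapping `likelyHallucinated` to the `"high"` bucket is an assumption, not something this README guarantees:

```python
from dataclasses import dataclass

# Stand-in for the package's RiskScore so the sketch runs on its own.
@dataclass
class RiskScore:
    score: float
    signals: list
    severity: str

def to_js_shape(r):
    """Reshape a RiskScore into the JS sibling's result object.

    Equating likelyHallucinated with the 'high' severity bucket is our
    assumption for this sketch.
    """
    return {
        "risk": r.score,
        "reasons": list(r.signals),
        "likelyHallucinated": r.severity == "high",
    }

to_js_shape(RiskScore(0.7, ["unsourced_specifics"], "high"))
# -> {'risk': 0.7, 'reasons': ['unsourced_specifics'], 'likelyHallucinated': True}
```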