
LLM calibration metrics for pydantic-evals: ECE, MCE, ACE, Brier, reliability diagrams, temperature/Platt/isotonic scaling, semantic entropy.

Project description

pydantic-cal


Accuracy answers "is the model right?". Calibration answers "when the model says it's 90% confident, is it right 90% of the time?". An LLM can be 91% accurate and badly miscalibrated at the same time. Every eval library on PyPI today reports the first number; none report the second. pydantic-cal is the calibration layer for pydantic-evals.

pip install pydantic-cal
import numpy as np
from pydantic_cal import ece, mce, ace, brier, brier_decomposition, TemperatureScaler

# Pull (confidence, correct) pairs from any eval run
confidences = np.array([0.95, 0.80, 0.70, 0.60, 0.45, 0.30])
correct     = np.array([1, 1, 0, 1, 0, 0])

print(f"ECE  = {ece(confidences, correct):.4f}")     # Expected calibration error
print(f"ACE  = {ace(confidences, correct):.4f}")     # Adaptive (equal-mass bins)
print(f"MCE  = {mce(confidences, correct):.4f}")     # Worst-case bin gap
print(f"Brier= {brier(confidences, correct):.4f}")   # MSE confidence vs outcome

decomp = brier_decomposition(confidences, correct, n_bins=10)
print(f"Reliability  {decomp.reliability:.4f}  (the calibration component)")
print(f"Resolution   {decomp.resolution:.4f}   (the discrimination component)")
print(f"Uncertainty  {decomp.uncertainty:.4f}  (intrinsic, can't be reduced)")
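The definitions behind those numbers fit in a few lines of plain numpy. The sketch below is independent of pydantic_cal (its binning and edge handling may differ): equal-width-bin ECE, and the Murphy (1973) partition computed over unique forecast values so the identity Brier = reliability - resolution + uncertainty holds exactly.

```python
import numpy as np

def ece_sketch(conf, correct, n_bins=10):
    """Equal-width-bin expected calibration error."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            # |mean confidence - empirical accuracy|, weighted by bin mass
            err += mask.mean() * abs(conf[mask].mean() - correct[mask].mean())
    return err

def murphy_sketch(conf, correct):
    """Murphy (1973) partition, grouping by unique forecast values so that
    Brier = reliability - resolution + uncertainty holds exactly."""
    conf, correct = np.asarray(conf, float), np.asarray(correct, float)
    base = correct.mean()                 # base rate of correctness
    uncertainty = base * (1 - base)       # intrinsic outcome variance
    reliability = resolution = 0.0
    for p in np.unique(conf):
        mask = conf == p
        acc = correct[mask].mean()
        reliability += mask.mean() * (p - acc) ** 2
        resolution += mask.mean() * (acc - base) ** 2
    return reliability, resolution, uncertainty

conf = np.array([0.95, 0.80, 0.70, 0.60, 0.45, 0.30])
correct = np.array([1, 1, 0, 1, 0, 0])
rel, res, unc = murphy_sketch(conf, correct)
brier = np.mean((conf - correct) ** 2)    # equals rel - res + unc
```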

Fit a temperature scaler on a calibration split and recheck:

scaler = TemperatureScaler().fit(cal_confidences, cal_labels)
ece_after = ece(scaler.transform(test_confidences), test_labels)
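Temperature scaling itself is a one-parameter fit. As an independent sketch (pydantic_cal's TemperatureScaler may use a different optimizer or parameterization), the standard recipe is: map confidences to logits, divide by a temperature T, and pick the T that minimizes negative log-likelihood on the calibration split. Here a simple grid search stands in for a proper optimizer.

```python
import numpy as np

def _logit(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.log(p / (1 - p))

def fit_temperature(conf, correct):
    """Grid-search the temperature T minimizing the binary negative
    log-likelihood of sigmoid(logit(conf) / T) on a calibration split."""
    z, y = _logit(np.asarray(conf, float)), np.asarray(correct, float)
    def nll(t):
        p = np.clip(1 / (1 + np.exp(-z / t)), 1e-12, 1 - 1e-12)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    ts = np.logspace(-1.3, 1.3, 400)      # T in roughly [0.05, 20]
    return ts[np.argmin([nll(t) for t in ts])]

def apply_temperature(conf, t):
    return 1 / (1 + np.exp(-_logit(np.asarray(conf, float)) / t))

# An overconfident model: mean confidence ~0.92, accuracy 5/8.
cal_conf = np.array([0.99, 0.97, 0.95, 0.93, 0.91, 0.90, 0.88, 0.85])
cal_y    = np.array([1, 0, 1, 1, 0, 1, 0, 1])
t = fit_temperature(cal_conf, cal_y)      # t > 1 softens overconfidence
```

T > 1 pulls every confidence toward 0.5 without reordering predictions, which is why temperature scaling changes ECE but never accuracy.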

Why this exists

Every other eval framework reports accuracy + LLM-judge variants. We surveyed nine of them - pydantic-evals, openai-evals, lm-eval-harness, deepeval, promptfoo, ragas, langfuse, langchain.evaluation, helicone - and not one ships ECE, Brier, or a reliability diagram. The gap is not subtle.

Calibration matters most when downstream code routes on confidence: human-in-the-loop fallback, conformal prediction, eval-test-suite pruning, RAG re-ranking, gating UX disclaimers. A miscalibrated 0.85 confidence routes the same as a calibrated 0.85 confidence and the system breaks silently.
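In code, that routing pattern is a single threshold comparison (the threshold and labels here are hypothetical, not part of pydantic-cal):

```python
def route(answer: str, confidence: float, threshold: float = 0.85) -> str:
    """Route on confidence: auto-accept above the threshold, escalate below.
    The threshold only means what it says if confidence is calibrated;
    an overconfident model admits far more errors than 1 - threshold."""
    return "auto" if confidence >= threshold else "human_review"
```

Nothing in this function can detect miscalibration, which is exactly why it has to be measured upstream.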

Modules

Module                      What it does
pydantic_cal.metrics        ECE / MCE / ACE / Brier / reliability curves
pydantic_cal.decomposition  Murphy 1973 partition: reliability + resolution + uncertainty
pydantic_cal.scalers        Post-hoc calibration: TemperatureScaler / PlattScaler / IsotonicScaler
pydantic_cal.bootstrap      Bootstrap CIs + paired before/after tests
pydantic_cal._geometry      Internal: Fisher-Rao distance, Jensen-Shannon, Bhattacharyya, Hellinger, KL, Bregman, alpha-divergence

The _geometry module ships textbook information-geometry primitives (Amari 1985, Chentsov 1972, Nielsen 2018, Lin 1991) on the probability simplex. It is internal because no second consumer needs it yet; it will be split out into a standalone f3d1-information-geometry package on PyPI once that changes.
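Two of those primitives are short enough to show inline. This is an independent numpy sketch of the textbook definitions, not the private _geometry API:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two points on the probability simplex."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence with base-2 logs, so it lies in [0, 1]."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = 0.5 * (p + q)
    def kl(a, b):
        nz = a > 0                        # 0 * log(0/x) is taken as 0
        return np.sum(a[nz] * np.log2(a[nz] / b[nz]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.1, 0.2, 0.7])
```

Both are symmetric and vanish only when p == q, which is what makes them usable as distances between confidence distributions.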

Pydantic-evals integration

Drops directly into dataset.evaluate(report_evaluators=[...]). Each evaluator pulls (confidence, correct) pairs out of the report via the case metrics + assertions and emits a pydantic-evals ScalarResult (ECE / MCE / ACE / Brier) or LinePlot (ReliabilityDiagram).

pip install pydantic-cal[pydantic-evals]
from pydantic_evals import Case, Dataset
from pydantic_cal.evaluators import ECE, Brier, ReliabilityDiagram

dataset = Dataset(cases=[
    Case(name="q1", inputs={"prompt": "..."}, expected_output="yes",
         metadata={"confidence": 0.92}),
    Case(name="q2", inputs={"prompt": "..."}, expected_output="no",
         metadata={"confidence": 0.51}),
    # ...
])

report = await dataset.evaluate(
    task_fn,
    report_evaluators=[ECE(n_bins=10), Brier(), ReliabilityDiagram()],
)

By convention each evaluator reads case.metrics["confidence"] (falling back to case.metadata["confidence"]) and case.assertions["correct"] (falling back to case.expected_output == case.output). All three knobs are constructor-overridable: confidence_field=, correct_field=, correct_fn=.
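A correct_fn override is just a callable that decides correctness; for instance, a normalized string comparison. The (expected, actual) signature below is an assumption about the evaluator contract, so check the pydantic_cal.evaluators docstrings before relying on it:

```python
def normalized_match(expected: str, actual: str) -> bool:
    """Hypothetical correct_fn body: case- and whitespace-insensitive
    string equality. The (expected, actual) signature is an assumption;
    the helper itself is plain Python."""
    return expected.strip().casefold() == actual.strip().casefold()
```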

What's behind pydantic_cal.crazy

A paper-locked, novel ensemble-structure scaler. Imports raise ImportError until f3d1 paper 1 is published. The public stack above is unconditionally available and does not depend on the gated method. See src/pydantic_cal/crazy/__init__.py for the unlock conditions.

Layout

pydantic-cal/
  src/pydantic_cal/
    __init__.py        public API
    metrics.py         ECE / MCE / ACE / Brier / reliability curves
    decomposition.py   Murphy 1973 partition
    scalers.py         temperature / Platt / isotonic
    bootstrap.py       CI machinery
    _geometry.py       internal info-geometry kernel
    crazy/             paper-locked gate
  examples/smoke.py    end-to-end sanity check
  tests/               pytest suite

Sibling projects

The f3d1 ecosystem:

  • f3dx - Rust runtime that your Python code imports. Drop-in replacement for the openai + anthropic SDKs, with native SSE streaming, an agent loop with concurrent tool dispatch, and OTel emission. pip install f3dx.
  • tracewright - Trace-replay adapter for pydantic-evals. Read an f3dx or pydantic-ai logfire JSONL trace, get a pydantic_evals.Dataset. pip install tracewright.
  • f3dx-cache - Content-addressable LLM response cache + replay. redb + RFC 8785 JCS + BLAKE3. pip install f3dx-cache.
  • f3dx-router - In-process Rust router for LLM providers. Hedged-parallel + 429/5xx hot-swap. pip install f3dx-router.
  • f3dx-bench - Public real-prod-traffic LLM benchmark dashboard. CF Worker + R2 + duckdb-wasm. Live.
  • llmkit - Hosted API gateway with budget enforcement, session tracking, cost dashboards, MCP server. llmkit.sh.
  • keyguard - Security linter for open source projects. Finds and fixes what others only report.

License

MIT.

References

  • Amari, S. (1985). Differential-Geometrical Methods in Statistics. Springer.
  • Chentsov, N. N. (1972). Statistical Decision Rules and Optimal Inference.
  • Murphy, A. H. (1973). A New Vector Partition of the Probability Score.
  • Lin, J. (1991). Divergence Measures Based on the Shannon Entropy. IEEE TIT.
  • Naeini, M. P., Cooper, G. F., & Hauskrecht, M. (2015). Obtaining Well Calibrated Probabilities Using Bayesian Binning. AAAI.
  • Nixon, J., et al. (2019). Measuring Calibration in Deep Learning. CVPR Workshops.
  • Nielsen, F. (2018). On Information Projections, Bregman Divergences and the Centroids of Continuous Probability Distributions. ICASSP.
  • Guo, C., Pleiss, G., Sun, Y., & Weinberger, K. Q. (2017). On Calibration of Modern Neural Networks. ICML.

Download files


Source Distribution

pydantic_cal-0.0.3.tar.gz (21.5 kB)

Uploaded Source

Built Distribution


pydantic_cal-0.0.3-py3-none-any.whl (20.0 kB)

Uploaded Python 3

File details

Details for the file pydantic_cal-0.0.3.tar.gz.

File metadata

  • Download URL: pydantic_cal-0.0.3.tar.gz
  • Upload date:
  • Size: 21.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pydantic_cal-0.0.3.tar.gz
Algorithm Hash digest
SHA256 386d7b4a6575e1c4a110c5fe66f939f406d84ffa8b1052e502bcea83a87edd41
MD5 96a41f69b7ad45abc96d37d6a306cde3
BLAKE2b-256 1148c41aa2500c31d79c87c7de78b2bd819a43de72597c48dc0d7a4e4615c1b1


Provenance

The following attestation bundles were made for pydantic_cal-0.0.3.tar.gz:

Publisher: release.yml on smigolsmigol/pydantic-cal

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file pydantic_cal-0.0.3-py3-none-any.whl.

File metadata

  • Download URL: pydantic_cal-0.0.3-py3-none-any.whl
  • Upload date:
  • Size: 20.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for pydantic_cal-0.0.3-py3-none-any.whl
Algorithm Hash digest
SHA256 f34da3d2b638a898ed69bc04bdb853a69670111243a1eecd9abcf11bfffb68f1
MD5 d545d0c7ed247c48a02bde988c513a5f
BLAKE2b-256 ecb1a8f75efca5ac125ac14148443d18d405953911f50ced1368393240a76c84


Provenance

The following attestation bundles were made for pydantic_cal-0.0.3-py3-none-any.whl:

Publisher: release.yml on smigolsmigol/pydantic-cal

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
