
mini-omega-lock

New to this? Start here: EASY_README.md (English) · EASY_README_KR.md (한국어). Compressed plain-language introductions for readers who find the full doc below intimidating.

License: Apache 2.0 · Python · Parent

Empirical preflight probes for omegaprompt calibration. Measures the actual environment (judge consistency, endpoint schema reliability, context-budget margin, latency, noise floor) and emits PreflightReport records that feed omegaprompt's derive_adaptation_plan.

pip install mini-omega-lock

Why this is separate from omegaprompt

omegaprompt ships a plugin interface (omegaprompt.preflight.contracts + omegaprompt.preflight.adaptation) but no probe code. Standalone users do not need preflight probes — they run calibration with declared defaults. Users who want adaptive thresholds tuned to their actual infrastructure install this package alongside:

pip install omegaprompt mini-omega-lock

What it measures

| Measurement | Function | What it tells you |
|---|---|---|
| Judge consistency | measure_judge_consistency | Same (response, rubric) scored N times; score is 1 - CV. Low = noisy judge; set rescore_count > 1. |
| Endpoint schema reliability | probe_strict_schema | Fraction of STRICT_SCHEMA probes that succeed. Below 0.9 triggers the JSON_OBJECT fallback. |
| Context budget margin | compute_context_margin | 1 - (longest_call_tokens / context_window). Negative = overflow. |
| Performance projection | project_performance | Mean probe latency, projected to calibration wall time. |
| Noise floor | noise_floor_estimate | Stdev of fitness under identical parameters. Sets the adaptive min_kc4. |
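The formulas in the table are simple enough to sketch directly. The following is an illustrative, self-contained rendering of the five metrics; the function names and probe values here are ours, not mini-omega-lock's actual API:

```python
import statistics

def judge_consistency(scores):
    """1 - coefficient of variation of repeated judge scores; 1.0 = perfectly stable."""
    mean = statistics.mean(scores)
    return 1 - statistics.pstdev(scores) / mean

def schema_reliability(successes, probes):
    """Fraction of STRICT_SCHEMA probes that parsed cleanly."""
    return successes / probes

def context_margin(longest_call_tokens, context_window):
    """1 - (longest_call_tokens / context_window); negative means overflow."""
    return 1 - longest_call_tokens / context_window

def projected_wall_time(mean_probe_latency_s, expected_calls):
    """Mean probe latency scaled to the expected calibration call count."""
    return mean_probe_latency_s * expected_calls

def noise_floor(fitness_samples):
    """Stdev of fitness under identical parameters."""
    return statistics.pstdev(fitness_samples)

print(judge_consistency([0.8, 0.8, 0.8]))  # identical scores -> 1.0
print(schema_reliability(27, 30))          # right at the 0.9 fallback boundary
print(context_margin(9_000, 8_000))        # negative -> longest call overflows
```

Note that context_margin goes negative as soon as the longest projected call exceeds the window, which is why the table flags "Negative = overflow" rather than using a fixed token threshold.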

The composite entry point is empirical_preflight(), which runs all five in one call and returns the three measurement records omegaprompt's adaptation layer consumes.

Usage

from omegaprompt import make_provider, PreflightReport, derive_adaptation_plan
from omegaprompt.domain.dataset import DatasetItem
from omegaprompt.domain.judge import Dimension, JudgeRubric
from omegaprompt.judges.llm_judge import LLMJudge
from mini_omega_lock import empirical_preflight

judge_provider = make_provider("anthropic")
judge = LLMJudge(provider=judge_provider)
rubric = JudgeRubric(dimensions=[Dimension(name="accuracy", description="x", weight=1.0)])
probe_item = DatasetItem(id="probe", input="2+2", reference="4")

judge_quality, endpoint, performance = empirical_preflight(
    judge=judge,
    rubric=rubric,
    probe_item=probe_item,
    probe_response="4",
    consistency_repeats=3,
    dataset_size_hint=10,
    candidates_expected=20,
)

report = PreflightReport(
    judge_quality=judge_quality,
    endpoint=endpoint,
    performance=performance,
)
plan = derive_adaptation_plan(report=report)
# plan.min_kc4_override, plan.rescore_count, etc.
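The thresholds in the measurement table hint at how the three records gate the plan. A hedged, self-contained sketch of that gating logic — the threshold values, dict shape, and the 0.9 consistency cutoff are illustrative assumptions, not omegaprompt's real derive_adaptation_plan:

```python
def sketch_adaptation_plan(judge_consistency, schema_reliability, context_margin):
    """Illustrative gating only. The real derive_adaptation_plan consumes full
    measurement records, not bare floats, and returns a plan object."""
    plan = {}
    # Noisy judge (assumed cutoff 0.9): rescore each candidate more than once.
    plan["rescore_count"] = 3 if judge_consistency < 0.9 else 1
    # Unreliable STRICT_SCHEMA endpoint (< 0.9 per the table): JSON_OBJECT fallback.
    plan["response_format"] = "STRICT_SCHEMA" if schema_reliability >= 0.9 else "JSON_OBJECT"
    # Negative margin means the longest call overflows the context window.
    plan["context_overflow"] = context_margin < 0
    return plan

print(sketch_adaptation_plan(0.95, 0.8, -0.05))
```

In the real pipeline these decisions come back as fields on the plan (plan.rescore_count, plan.min_kc4_override, and so on), so the calibration run never hardcodes thresholds itself.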

Design principles

  • No fabricated numbers. Every measurement is computed from a real provider response. No heuristic estimation.
  • Minimal probe budget. Defaults of 3 consistency repeats + 3 schema probes + 1 context-margin computation come to roughly 7-10 API calls per preflight, costing under $0.01 on frontier pricing tiers.
  • Protocol-conformant output. Emits omegaprompt.preflight.contracts.JudgeQualityMeasurement / EndpointMeasurement / PerformanceMeasurement exactly. No shape drift.
  • Composable. Can run alongside mini-antemortem-cli (analytical preflight) into the same PreflightReport.

Validation

All adapter tests mock the provider SDK; no network, no API credits, fully offline. Run with pytest -q.

Relation to the family

  • omega-lock — parameter-calibration framework. The naming "mini-omega-lock" echoes this family; the sensitivity + walk-forward + KC-4 discipline comes from there.
  • omegaprompt — prompt calibration engine. This package feeds its preflight plugin interface.
  • mini-antemortem-cli — analytical sibling. Runs deterministic trap classification over config before calibration.

License

Apache 2.0. See LICENSE.

License history. PyPI distributions of version 0.1.0 were shipped with an MIT LICENSE file. The repository was relicensed to Apache 2.0 on 2026-04-22 (commit ff489a9); 0.2.0 (2026-04-28) and all later versions ship under Apache 2.0. Anyone who installed 0.1.0 holds an MIT license to that copy — license changes do not apply retroactively.
