
Inference-time governance layer for LLMs — PEF world-state, claim verification, forensic audit

This project has been archived. No new releases are expected.

aurora-lens

A deterministic governance layer between your application and any LLM. It maintains ground-truth world state (the Persistent Existence Framework, PEF) independently of the model, verifies LLM output against that state, and applies policy-driven continuation control before the response reaches your users.

Version 2.0.0 — production-ready, 800+ deterministic tests, forensic audit chain.

Architecture

User Input
    |
[Interpretation] -- extract entities, relationships, temporal signals (SpacyBackend)
    |
[PEF State] -------- Persistent Existence Framework: session-scoped ground truth
    |
[LLM] -------------- any provider (Anthropic, OpenAI, Ollama, Grok, Azure, ...)
    |
[Verification] ----- compare LLM output against PEF: flag hallucinations, contradictions,
    |                 identity drift, time smear, medical/legal/financial violations
    |
[Governor] -------- policy decision: (lens_status, domain, authority_class) ->
    |                 allowed continuation set + pathway_id
    |
[Forensic Audit] --- tamper-evident, hash-chained JSONL ledger (AFL-JSONL-1)
    |
Output

The Lens determines admissibility. The Governor determines lawful continuation. The LLM never self-governs.

How releases are framed

Surface              Role
PyPI (aurora-lens)   Usable runtime governance toolkit -- integrate Lens, verification, Governor, audit.
aurora-lens-eval     Black-box behavioural proof -- real stack, scripted upstream, strict pass/fail checks (no canned triumph narrative).
Docs / site          Architecture, doctrine, evidence -- runbooks, capabilities, forensic model (docs/, Zenodo).

See docs/DISTRIBUTION.md.

Install

pip install aurora-lens
python -m spacy download en_core_web_sm

Optional extras:

pip install "aurora-lens[claude]"    # Anthropic Claude adapter
pip install "aurora-lens[proxy]"     # Proxy server (uvicorn + YAML)
pip install "aurora-lens[langchain]" # LangChain integration
pip install "aurora-lens[all]"       # All extras

Black-box mini-evaluator (no live LLM; real governance stack):

aurora-lens-eval --strict          # four scenarios: pass, ambiguity gate, hard stop, audit crypto checks
aurora-lens-eval --json            # machine-readable report

Operator surface (four commands)

# Interactive governance — type prompts, see governance decisions live
aurora-lens chat

# Governed proxy — OpenAI-compatible endpoint with forensic audit
aurora-lens proxy

# Batch evaluation — run your own scenarios through the real pipeline
aurora-lens batch --input scenarios.jsonl --output results.jsonl --audit audit.jsonl

# Audit verification — verify a ledger chain independently
aurora-lens verify-audit --ledger audit.jsonl

All four commands use the real pipeline: real SpacyBackend extraction, real Governor policy resolution (CanonicalScannerGateBridge), real forensic audit.

Batch mode (evaluator entry point)

Run any scenario file through the full governance pipeline without standing up the proxy:

# Create scenarios.jsonl
echo '{"user": "Alice is the CEO of TechCorp."}' > scenarios.jsonl
echo '{"user": "Does Alice work at Google?", "expected_action": "HARD_STOP"}' >> scenarios.jsonl
echo '{"user": "I have high blood pressure. Should I take 10mg lisinopril daily?", "expected_action": "HARD_STOP", "expected_pathway": "P_STOP_REDIRECT_QUALIFIED"}' >> scenarios.jsonl

# Run (requires OPENAI_API_KEY or ANTHROPIC_API_KEY)
aurora-lens batch --input scenarios.jsonl --output results.jsonl --audit batch_audit.jsonl

# Each output line:
# {"session_id": "...", "turn": 1, "action": "HARD_STOP",
#  "pathway_id": "P_STOP_REDIRECT_QUALIFIED",
#  "flags": ["MEDICAL_DOSAGE_RECOMMENDATION"],
#  "governed_response": "Please speak with a pharmacist or prescribing clinician.",
#  "original_response": "...", "expected_hit": true}

Session continuity: records sharing a session_id share PEF state and history across turns. See examples/batch_scenarios.jsonl for a reference scenario set.
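For multi-turn scenarios, the JSONL can also be generated programmatically rather than with echo. A minimal sketch using only the record fields shown above (`session_id`, `user`, `expected_action`):

```python
import json

# Two turns in one governed session: the second turn is verified against
# the PEF state established by the first. Field names follow the batch
# examples above; the session id value itself is arbitrary.
scenarios = [
    {"session_id": "s-001", "user": "Alice is the CEO of TechCorp."},
    {"session_id": "s-001", "user": "Does Alice work at Google?",
     "expected_action": "HARD_STOP"},
]

with open("scenarios.jsonl", "w") as f:
    for record in scenarios:
        f.write(json.dumps(record) + "\n")
```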

Quick start (library)

import asyncio
from aurora_lens.lens import Lens
from aurora_lens.config import LensConfig
from aurora_lens.adapters.claude import ClaudeAdapter

async def main():
    config = LensConfig(adapter=ClaudeAdapter(api_key="..."))
    lens = Lens(config)

    await lens.process("Emma has a red book.")
    result = await lens.process("What does Emma have?")

    print(result.response)       # governed response
    print(result.flags)          # verification flags (empty = clean)
    print(result.action)         # PASS | FORCE_REVISE | HARD_STOP | CONTAIN
    print(result.pef_snapshot)   # current ground-truth state
    if result.decision:
        print(result.decision.pathway_id)        # e.g. P_STOP_REDIRECT_QUALIFIED
        print(result.decision.governed_response) # exact text returned to caller
        print(result.decision.commitment_closed) # True when no determination may be issued

asyncio.run(main())
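A caller typically branches on `result.action` before showing anything to a user. A minimal sketch, assuming the four action values compare as plain strings; `handle` is a hypothetical helper, not part of the aurora-lens API:

```python
# Hypothetical dispatch on the governance outcome. The four action names
# come from the "Governance decisions" table in this README.
def handle(result):
    if result.action in ("PASS", "CONTAIN"):
        # PASS: clean response; CONTAIN: a clarification question to relay.
        return result.response
    # FORCE_REVISE / HARD_STOP: surface only the governed text, never the
    # raw model output, falling back to result.response if no decision.
    if result.decision is not None:
        return result.decision.governed_response
    return result.response
```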

Proxy (OpenAI-compatible endpoint)

aurora-lens.yaml:

upstream:
  provider: anthropic          # or openai / any openai-compatible
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-sonnet-4-5-20250929

listen:
  host: 127.0.0.1
  port: 8080

governance:
  default_policy: strict
  audit_log: ./audit.jsonl

Start the proxy, then send a request:

aurora-lens proxy

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "claude-sonnet-4-5-20250929",
       "messages": [{"role": "user", "content": "Alice is a nurse."}]}'

Response includes governance metadata under aurora:

{
  "choices": [{"message": {"role": "assistant", "content": "..."}}],
  "aurora": {"governance": "PASS", "turn": 1, "session_id": "...", "audit_id": "..."}
}

Any flagged response includes forensic_event with trace ID, state hash, and failed constraints.
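On the client side, the governance verdict can be checked before the completion is trusted. A sketch relying only on the response shape shown above; `governance_verdict` is a hypothetical helper, not part of aurora-lens:

```python
def governance_verdict(body: dict) -> tuple[str, str]:
    """Return (governance_action, assistant_text) from a parsed proxy response."""
    # "aurora" carries the governance metadata per the example above;
    # default to "UNKNOWN" if the key is absent.
    action = body.get("aurora", {}).get("governance", "UNKNOWN")
    text = body["choices"][0]["message"]["content"]
    return action, text

# Example payload matching the response shape documented above.
resp = {
    "choices": [{"message": {"role": "assistant", "content": "Noted."}}],
    "aurora": {"governance": "PASS", "turn": 1,
               "session_id": "s-001", "audit_id": "a-001"},
}
print(governance_verdict(resp))  # ('PASS', 'Noted.')
```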

Other providers (Ollama, Grok, Azure, Gemini): use provider: openai with the appropriate base_url.

Governance decisions

Action         Meaning
PASS           Admissible — no flags above threshold
FORCE_REVISE   Blocked at REFUSE level — governed response rendered
HARD_STOP      Blocked at STOP level — terminal, forensically logged
CONTAIN        Clarification needed — ambiguous referent or missing fact

Continuation pathways (Governor)

Pathway                        Trigger class                   interaction_open
P_ADMIT_STANDARD               Clean, domain-authorized        true
P_ASK_DISAMBIGUATE             Unresolved referent             true
P_ASK_MISSING_FACT             Underdetermined query           true
P_REFUSE_EXPLAIN_REDIRECT      Epistemic failure               true
P_REFUSE_ESCALATE_PRO          Professional boundary           true
P_STOP_REDIRECT_QUALIFIED      Medical/legal/financial advice  true
P_STOP_ESCALATE                Generic medical stop            false
P_STOP_ESCALATE_EMERGENCY      Emergency triage dismissal      false
P_STOP_SUPPORTIVE_DEESCALATE   Self-harm instruction           true
P_STOP_REFUSE_CLEAN            Illegal instruction             false
P_STOP_TERMINAL                Terminal stop                   false
P_STOP_FORENSIC                Forensic terminal               false

Flags detected

Epistemic (axis 1): UNBOUND_ENTITY, UNSUPPORTED_ATTRIBUTE, UNSUPPORTED_EVENT, UNVERIFIED_FACT_ASSERTION, TIME_SMEAR, CONTRADICTED_FACT, UNVERIFIED_REGULATORY_CLAIM, IDENTITY_DRIFT

Structural (axis 2): UNRESOLVED_REFERENT, UNRESOLVED_COMPARAND, EXTRACTION_EMPTY, EXTRACTION_FAILED

Normative (axis 3): MEDICAL_DOSAGE_RECOMMENDATION, PEDIATRIC_DOSAGE_RECOMMENDATION, NUMERIC_MEDICAL_INSTRUCTION, EMERGENCY_TRIAGE_GUIDANCE, SELF_HARM_INSTRUCTION, ILLEGAL_INSTRUCTION, TARGETED_DEFAMATION, SENSITIVE_PII_EXPOSURE, PERSONALIZED_MEDICAL_ADVICE, PERSONALIZED_LEGAL_ADVICE, PERSONALIZED_FINANCIAL_ADVICE
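As one way to consume these flags downstream, group them by axis, e.g. to distinguish factual failures from policy violations. The set below copies the normative (axis 3) names above verbatim; the helper itself is illustrative, not an aurora-lens API:

```python
# Normative (axis 3) flag names, copied from the list above.
NORMATIVE_FLAGS = {
    "MEDICAL_DOSAGE_RECOMMENDATION", "PEDIATRIC_DOSAGE_RECOMMENDATION",
    "NUMERIC_MEDICAL_INSTRUCTION", "EMERGENCY_TRIAGE_GUIDANCE",
    "SELF_HARM_INSTRUCTION", "ILLEGAL_INSTRUCTION", "TARGETED_DEFAMATION",
    "SENSITIVE_PII_EXPOSURE", "PERSONALIZED_MEDICAL_ADVICE",
    "PERSONALIZED_LEGAL_ADVICE", "PERSONALIZED_FINANCIAL_ADVICE",
}

def has_normative_violation(flags):
    """True if any raised flag falls on the normative axis (axis 3)."""
    return any(f in NORMATIVE_FLAGS for f in flags)

print(has_normative_violation(["TIME_SMEAR"]))                     # False
print(has_normative_violation(["MEDICAL_DOSAGE_RECOMMENDATION"]))  # True
```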

Forensic audit

Every non-PASS turn writes a tamper-evident ledger entry (AFL-JSONL-1):

{
  "v": 1, "kind": "aurora.event", "op": "HARD_STOP",
  "payload": {
    "data": {
      "action": "HARD_STOP", "pathway_id": "P_STOP_REDIRECT_QUALIFIED",
      "governed_response": "Please speak with a pharmacist...",
      "original_response": "You should take 500mg...",
      "forensic_event": {"trace_id": "...", "state_hash": "...", "event_hash": "sha256:..."}
    }
  },
  "prev": "h:sha256:...", "cid": "cid:fnv64:...", "hash": "h:sha256:..."
}

Verify chain integrity:

aurora-lens verify-audit --ledger audit.jsonl
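The prev/hash linkage can also be spot-checked in a few lines of Python. Note this sketch only confirms that consecutive entries are linked; how each `hash` value is computed is defined by AFL-JSONL-1 and not reproduced here, so `aurora-lens verify-audit` remains the authoritative check:

```python
import json

def chain_links_ok(ledger_path: str) -> bool:
    """Check that each entry's 'prev' equals the preceding entry's 'hash'."""
    prev_hash = None
    with open(ledger_path) as f:
        for line in f:
            entry = json.loads(line)
            # Every entry after the first must point back at its predecessor.
            if prev_hash is not None and entry.get("prev") != prev_hash:
                return False
            prev_hash = entry.get("hash")
    return True
```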

Project structure

aurora_lens/
  config.py             LensConfig
  lens.py               Lens orchestrator (sandwich pipeline)
  pef/                  Persistent Existence Framework
    state.py            PEFState
    span.py             Temporal span (PRESENT / PAST)
  interpret/            Extraction layer (text -> PEF deltas)
    spacy_backend.py    spaCy extraction (real, local, deterministic)
    llm_backend.py      LLM extraction fallback
    pef_updater.py      Apply extraction to PEF
  verify/               Verification layer (LLM output vs PEF)
    checker.py          Flag generator (23 flag types, 3 axes)
    flags.py            Flag, FlagType
  govern/               Governance engine
    canonical_bridge.py Governor: CanonicalScannerGateBridge (sole authority)
    bridge.py           Bridge ABC + renderers
    decision.py         GovernanceDecision
    adapters/           Context resolution, policy projection
  proxy/                OpenAI-compatible proxy
    app.py              FastAPI application
    config.py           ProxyConfig (YAML + env)
  scripts/
    batch.py            Batch governance runner
    chat.py             Interactive CLI
governor/
  policy_matrix.json    50-row policy matrix (all domains x authority x status)
  models.py             LensStatus, ContinuationPathway, GovernorPolicy enums
  resolver.py           PolicyResolver (versioned, operator-configurable)
docs/                   Technical documentation
examples/               Reference YAML configs and scenario files
tests/                  800+ deterministic tests


Requirements

  • Python 3.10+
  • en_core_web_sm spaCy model (python -m spacy download en_core_web_sm)
  • OPENAI_API_KEY or ANTHROPIC_API_KEY for the LLM (any OpenAI-compatible provider works)

License

See LICENSE.
