
Inference-time governance layer for LLMs — PEF world-state, claim verification, forensic audit

This project has been archived.

The maintainers have marked this project as archived. No new releases are expected.

Project description

aurora-lens

Governance substrate that sits between your application and any LLM. It maintains ground-truth state (PEF) independently of the model and catches hallucinations, contradictions, identity drift, and time-smear before they reach your users.

The LLM stays fluent. aurora-lens keeps it honest.

How it works

User Input
    |
[Interpretation] ── extract entities, relationships, temporal signals
    |                  (pluggable: spaCy, LLM, or custom backend)
    |
[PEF State] ─────── persistent world model (survives across turns)
    |
[LLM] ──────────── any provider (Anthropic, OpenAI, etc.)
    |
[Verification] ──── compare LLM output against PEF ground truth
    |
[Governance] ────── policy decision: PASS / SOFT_CORRECT / FORCE_REVISE / HARD_STOP
    |
Output

The fluency LLM never verifies its own output. Interpretation and verification always run on a separate path.
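
A toy illustration of that separation (illustrative only, not the library's internal API):

# Toy sandwich: generation never checks itself; a separate path does.
def extract_entities(text: str) -> set[str]:
    return {w.strip(".,") for w in text.split() if w[:1].isupper()}  # stand-in interpretation

def verify(draft: str, world: set[str]) -> list[str]:
    unknown = extract_entities(draft) - world          # entity absent from PEF state?
    return [f"HALLUCINATED_ENTITY:{e}" for e in unknown]

world: set[str] = set()
world |= extract_entities("Emma has a red book.")      # PEF state persists across turns
print(verify("Emma gave the book to Noah.", world))    # ['HALLUCINATED_ENTITY:Noah']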

Install

pip install aurora-lens

Optional backends:

pip install "aurora-lens[spacy]"     # spaCy extraction backend
pip install "aurora-lens[claude]"    # Anthropic Claude adapter
pip install "aurora-lens[proxy]"     # Proxy server (uvicorn + YAML config)
pip install "aurora-lens[all]"       # Everything

Demo (one command)

See governance in action in under 2 minutes:

pip install "aurora-lens[spacy]"
aurora-lens demo                   # Run demo (requires OPENAI_API_KEY or ANTHROPIC_API_KEY)
aurora-lens chat                   # Interactive chat — type, model replies, inspect governance
aurora-lens proxy                  # Run governed proxy server

Windows (PowerShell): run scripts/demo.ps1. It writes a fresh audit file per run, drives a 2-turn flow that forces an intervention, and prints the governance outcomes, the audit verify result, and pef_snapshot.relationships (provenance + extractor_backend). Requires pip install "aurora-lens[proxy,spacy]" and an API key.

Quick start (library)

import asyncio

from aurora_lens.lens import Lens
from aurora_lens.config import LensConfig
from aurora_lens.adapters.claude import ClaudeAdapter

async def main():
    config = LensConfig(adapter=ClaudeAdapter(api_key="..."))
    lens = Lens(config)

    result = await lens.process("Emma has a red book.")
    result = await lens.process("What does Emma have?")

    print(result.response)       # LLM's answer
    print(result.flags)          # Verification flags (empty = clean)
    print(result.action)         # PASS, SOFT_CORRECT, FORCE_REVISE, or HARD_STOP
    print(result.pef_snapshot)   # Current ground-truth state

asyncio.run(main())

Quick start (proxy server)

The proxy exposes an OpenAI-compatible /v1/chat/completions endpoint with governance applied transparently. Point any OpenAI-compatible client at it.

1. Create aurora-lens.yaml

For Anthropic:

upstream:
  provider: anthropic
  api_key: ${ANTHROPIC_API_KEY}
  model: claude-sonnet-4-5-20250929

listen:
  host: 127.0.0.1
  port: 8080

governance:
  default_policy: strict
  audit_log: ./audit.jsonl

Demo tip: with audit_log: null, the Aurora-Audit-Sink header reports none. For demos, set audit_log: ./audit.jsonl (or any path) so the sink shows as active.

For OpenAI:

upstream:
  provider: openai
  base_url: https://api.openai.com/v1
  api_key: ${OPENAI_API_KEY}
  model: gpt-4

listen:
  host: 127.0.0.1
  port: 8080

governance:
  default_policy: strict
  audit_log: ./audit.jsonl

Other providers (Grok, Gemini, Ollama, Azure, etc.): Use provider: openai with base_url pointing at any OpenAI-compatible API. No extra code or dependencies.

upstream:
  provider: openai
  base_url: https://api.x.ai/v1          # Grok (xAI)
  api_key: ${XAI_API_KEY}
  model: grok-2

upstream:
  provider: openai
  base_url: http://localhost:11434/v1    # Ollama (local)
  api_key: ""                            # not required for local
  model: llama3

2. Start the proxy

aurora-lens proxy

Output:

Upstream provider: anthropic | model: claude-sonnet-4-5-20250929 | listen: 127.0.0.1:8080

3. Send requests

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "messages": [{"role": "user", "content": "Emma has a red book."}]
  }'
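
The same request via the official OpenAI Python SDK (a sketch; the dummy api_key only satisfies the client, since the proxy holds the real upstream key):

from openai import OpenAI

# Any OpenAI-compatible client works; only the base URL changes.
client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="unused")

resp = client.chat.completions.create(
    model="claude-sonnet-4-5-20250929",
    messages=[{"role": "user", "content": "Emma has a red book."}],
)
print(resp.choices[0].message.content)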

Governance metadata is returned in the response body under aurora:

{
  "choices": [{"message": {"role": "assistant", "content": "..."}}],
  "aurora": {"governance": "PASS", "turn": 1, "session_id": "...", "audit_id": "..."}
}
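
Since httpx is already a dependency, reading the verdict from the raw JSON is straightforward (a sketch):

import httpx

# Inspect the governance verdict alongside the normal completion fields.
body = httpx.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "claude-sonnet-4-5-20250929",
        "messages": [{"role": "user", "content": "Emma has a red book."}],
    },
    timeout=60.0,
).json()

print(body["choices"][0]["message"]["content"])
print(body["aurora"]["governance"])   # e.g. "PASS"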

Streaming (stream: true)

When the request includes "stream": true, the proxy returns a Server-Sent Events (SSE) stream. The exact message order is:

  1. Content chunks — one or more data: events in OpenAI streaming format, each terminated by \n\n:

    data: {"choices":[{"delta":{"content":"..."},"index":0}]}

    Each event is a JSON object with choices[].delta.content containing the incremental text.

  2. Aurora metadata — the final event before [DONE]:

    data: {"aurora":{"governance":"PASS","turn":1,"session_id":"session-abc123","audit_id":"..."}}

    Fields: governance, turn, session_id, audit_id; optionally unverified (when SOFT_CORRECT), forensic_event (when FORCE_REVISE or HARD_STOP), and stream_truncated / stream_dropped_chars (when the accumulator truncated the stream).

  3. Terminator:

    data: [DONE]

If the client disconnects mid-stream, no metadata is emitted and the upstream stream is closed. The response is not logged or recorded in the ledger as completed.
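
A minimal consumer of this stream using httpx, following the event order above (a sketch):

import json
import httpx

# Read content deltas, then the aurora metadata event, then [DONE].
with httpx.stream(
    "POST",
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "claude-sonnet-4-5-20250929",
        "stream": True,
        "messages": [{"role": "user", "content": "Emma has a red book."}],
    },
    timeout=None,
) as r:
    for line in r.iter_lines():
        if not line.startswith("data: "):
            continue                          # skip blank separator lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break                             # terminator
        event = json.loads(payload)
        if "aurora" in event:
            print("\n[governance]", event["aurora"]["governance"])
        else:
            delta = event["choices"][0]["delta"]
            print(delta.get("content", ""), end="", flush=True)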

Config resolution

The proxy resolves configuration in this order:

Priority  Source
1         YAML file (with ${VAR} expansion)
2         AURORA_LENS_UPSTREAM_* env vars
3         Provider-specific env var fallback (OPENAI_API_KEY, ANTHROPIC_API_KEY)
4         CLI flags (--host, --port)

If the API key is missing after all resolution, the proxy tells you exactly which env var to set:

Error: ANTHROPIC_API_KEY missing — set it in your environment or in upstream.api_key
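
The key lookup can be pictured roughly like this (illustrative only; AURORA_LENS_UPSTREAM_API_KEY is an assumed name following the AURORA_LENS_UPSTREAM_* pattern):

import os

# Rough sketch of API-key resolution, mirroring the priority table above.
def resolve_api_key(yaml_value: str | None, provider: str) -> str | None:
    fallback = {"anthropic": "ANTHROPIC_API_KEY", "openai": "OPENAI_API_KEY"}[provider]
    if yaml_value:                                            # 1. YAML, ${VAR}-expanded
        return os.path.expandvars(yaml_value)
    return (os.environ.get("AURORA_LENS_UPSTREAM_API_KEY")    # 2. generic env var
            or os.environ.get(fallback))                      # 3. provider fallback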

Documentation

  • Operator guide — Installation, configuration, Docker, session backends, audit setup, scaling
  • Integration guide — Wire aurora-lens in front of your LLM, OpenAI SDK example, session ID, external flags
  • Policy reference — Flag types, axes, strict vs moderate, public vs enterprise
  • API reference — Endpoints, request/response schemas, Aurora-* headers

Project structure

aurora_lens/
  config.py          # LensConfig
  lens.py            # Lens orchestrator (sandwich pipeline)
  pef/               # Persistent Existence Framework (world state)
    state.py         # PEFState — entity/relationship store
    span.py          # Temporal span (PRESENT / PAST)
  interpret/         # Interpretation layer (text -> PEF deltas)
    base.py          # ExtractionBackend ABC
    schema.py        # ExtractedClaim, ExtractionResult
    spacy_backend.py # spaCy-based extraction (optional)
    llm_backend.py   # LLM-based extraction
    pef_updater.py   # Apply extraction results to PEF state
  verify/            # Verification layer (LLM output vs PEF)
    checker.py       # Flag generator
    flags.py         # Flag, FlagType
  govern/            # Governance engine
    decision.py      # InterventionAction, GovernanceDecision
    policy.py        # Strict / moderate policy rules
    bridge.py        # GovernanceBridge ABC + BuiltinBridge
  log_slice.py       # Per-trace log buffering, SHA-256 digest, ledger fingerprint
  adapters/          # LLM adapters (transport layer)
    base.py          # LLMAdapter ABC, AdapterResponse
    claude.py        # Anthropic Claude
  proxy/             # HTTP proxy server
    config.py        # ProxyConfig (YAML + env vars)
    app.py           # FastAPI application
    __main__.py      # CLI entry point
    openai_compat.py # OpenAI response formatting
    session.py       # Session manager
    logging.py       # Structured logging
tests/               # 429 tests
examples/            # Example YAML configs

Governance policies

Action        What happens
PASS          Clean response, no flags
SOFT_CORRECT  Response delivered unchanged; correction note in metadata
FORCE_REVISE  LLM re-prompted with flag context (max 1 attempt, then escalates)
HARD_STOP     Response blocked entirely

Flags detected: CONTRADICTED_FACT, HALLUCINATED_ENTITY, TIME_SMEAR, IDENTITY_DRIFT, UNRESOLVED_PRONOUN.
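
In library use, an application typically branches on the action (a sketch, assuming the string-valued actions shown in the quick start):

# Sketch: react to the governance decision from lens.process().
async def reply_to(lens, user_input: str) -> str:
    result = await lens.process(user_input)
    if result.action == "HARD_STOP":
        return "I can't verify that response."   # blocked text never reaches the user
    if result.action == "SOFT_CORRECT":
        print("correction note:", result.flags)  # delivered unchanged, but surfaced
    return result.response                       # PASS, SOFT_CORRECT, or revised output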

Log-slice anchoring (Phase 4.5)

Per-request log buffering produces a tamper-evident fingerprint for audit correlation. No log content is stored in the ledger — only a SHA-256 digest and metadata.

  • Buffer caps: MAX_ENTRIES=1000, MAX_BYTES=256 KiB, MAX_MSG_LEN=1024 per message
  • Truncation: When limits are hit, log_slice_truncated and log_slice_dropped_count are set; digest includes truncation metadata
  • Ledger fields: log_digest, log_entry_count, first_timestamp, last_timestamp, log_slice_present (true/false)
  • Digest consistency: CI test verifies recomputed digest matches stored digest (including when truncated)

The proxy middleware initializes the buffer per request; governance bridges consume it at decision time and add the fingerprint to forensic envelopes and decision entries.
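
The fingerprint can be illustrated roughly as follows (a sketch; the real serialization and field layout live in log_slice.py):

import hashlib
import json

MAX_ENTRIES = 1000   # buffer cap from above

# Illustrative digest over a buffered log slice; truncation metadata is
# folded into the hash so a recomputed digest still matches when truncated.
def slice_digest(entries: list[dict], truncated: bool = False, dropped: int = 0) -> str:
    h = hashlib.sha256()
    for entry in entries[:MAX_ENTRIES]:
        h.update(json.dumps(entry, sort_keys=True).encode())
    if truncated:
        h.update(f"truncated:{dropped}".encode())
    return h.hexdigest()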

What PEF is

PEF = Persistent Existence Framework.

The rule: the world does not reset between sentences.

Entities, relationships, and commitments continue to exist even when the system is not processing text. New inputs are interpreted as proposed changes to an already-existing world, not as standalone meaning. Pronouns are resolved by checking what entities already exist. If resolution is ambiguous, the system refuses to guess.
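
A toy illustration of the refusal rule (hypothetical code; the real store is PEFState in aurora_lens/pef/state.py):

# Toy world model: entities persist across turns, and an ambiguous
# pronoun raises instead of guessing (-> UNRESOLVED_PRONOUN).
WORLD = {"Emma": {"gender": "f"}}

def resolve(pronoun: str) -> str:
    wanted = {"she": "f", "he": "m"}[pronoun]
    matches = [name for name, e in WORLD.items() if e["gender"] == wanted]
    if len(matches) != 1:
        raise LookupError(f"ambiguous {pronoun!r}: candidates {matches}")
    return matches[0]

print(resolve("she"))              # Emma: exactly one candidate exists
WORLD["Olivia"] = {"gender": "f"}
resolve("she")                     # raises: two candidates, the system refuses to guess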

Part of the Aurora ecosystem

  • aurora-lens (this repo) — Governance substrate. Interpretation + verification + audit.
  • aurora-governor — Canonical governance kernel. 28 verifier invariants, hash-chained forensic audit ledger.
  • unified_rns_system — Mathematical substrate. RNS addressing, lattice memory, attestation.

Requirements

  • Python 3.10+
  • httpx >= 0.24

License

Proprietary. Copyright (c) 2025 Margaret Stokes. All rights reserved. See LICENSE.

Author

Margaret Stokes

Download files

Download the file for your platform.

Source Distribution

aurora_lens-0.1.2.tar.gz (174.5 kB)


Built Distribution


aurora_lens-0.1.2-py3-none-any.whl (123.5 kB)


File details

Details for the file aurora_lens-0.1.2.tar.gz.

File metadata

  • Download URL: aurora_lens-0.1.2.tar.gz
  • Upload date:
  • Size: 174.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for aurora_lens-0.1.2.tar.gz

Algorithm    Hash digest
SHA256       26cb053e426dc4e9ee5bfe7476097f7f07ccc89cdc52b88a309c096064c19f3c
MD5          e5e8724f93a9cbb60237ff617e0238e6
BLAKE2b-256  90120a8e0a65608c4356a38f8966ac50ea24bf1c3434ed897c89ef88a7104b06


File details

Details for the file aurora_lens-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: aurora_lens-0.1.2-py3-none-any.whl
  • Upload date:
  • Size: 123.5 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.12.7

File hashes

Hashes for aurora_lens-0.1.2-py3-none-any.whl

Algorithm    Hash digest
SHA256       0b04db24c7afc6c44de9f2ffc50ddb669593deb95012f7603626b5c9a101aaba
MD5          bc78940fee0c5eb4f47f46bffde85940
BLAKE2b-256  4302230cd0c6b33389ce8f6b64fe139b17847cbbd01b1ffbbab0d01f8d09df94

