Inference-time governance layer for LLMs — PEF world-state, claim verification, forensic audit
This project has been archived. The maintainers have marked it as archived and no new releases are expected.
aurora-lens
Governance substrate that sits between your application and any LLM. It maintains ground-truth state (PEF) independently of the model and catches hallucinations, contradictions, identity drift, and time-smear before they reach your users.
The LLM stays fluent. aurora-lens keeps it honest.
How it works
User Input
|
[Interpretation] ── extract entities, relationships, temporal signals
| (pluggable: spaCy, LLM, or custom backend)
|
[PEF State] ─────── persistent world model (survives across turns)
|
[LLM] ──────────── any provider (Anthropic, OpenAI, etc.)
|
[Verification] ──── compare LLM output against PEF ground truth
|
[Governance] ────── policy decision: PASS / SOFT_CORRECT / FORCE_REVISE / HARD_STOP
|
Output
The fluency LLM never verifies its own output. Interpretation and verification always run on a separate path.
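The sandwich above can be sketched in a few plain functions. This is an illustrative toy, not the aurora-lens API: the function names, the toy extraction rule, and the stubbed LLM are all invented here, and the real interpretation and verification layers are pluggable backends.

```python
def interpret(text: str, state: dict) -> dict:
    # Toy extraction: "<Name> has a <item>." -> record item under Name
    words = text.rstrip(".").split()
    if len(words) >= 4 and words[1] == "has":
        state.setdefault(words[0], set()).add(words[-1])
    return state

def fake_llm(prompt: str) -> str:
    # Stand-in for any provider; this one hallucinates a blue pen
    return "Emma has a red book and a blue pen."

def verify(output: str, state: dict) -> list[str]:
    # Flag items mentioned alongside a known owner that were never stated
    flags = []
    vocabulary = ["book", "pen", "hat"]  # toy closed vocabulary
    for owner, items in state.items():
        if owner in output:
            for item in vocabulary:
                if item in output and item not in items:
                    flags.append(f"HALLUCINATED_ENTITY:{item}")
    return flags

def govern(flags: list[str]) -> str:
    # Toy policy: any flag forces a revision
    return "PASS" if not flags else "FORCE_REVISE"

state = interpret("Emma has a red book.", {})   # turn 1 populates the world
answer = fake_llm("What does Emma have?")        # turn 2 asks the model
flags = verify(answer, state)                    # separate verification path
print(govern(flags), flags)  # FORCE_REVISE ['HALLUCINATED_ENTITY:pen']
```

The point of the shape, not the toy rules: the model never sees or edits the world state, and verification reads only the state and the model's output.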
Install
pip install aurora-lens
Optional backends:
pip install "aurora-lens[spacy]" # spaCy extraction backend
pip install "aurora-lens[claude]" # Anthropic Claude adapter
pip install "aurora-lens[proxy]" # Proxy server (uvicorn + YAML config)
pip install "aurora-lens[langchain]" # LangChain integration
pip install "aurora-lens[all]" # Everything
Demo (one command)
See governance in action in under 2 minutes:
pip install "aurora-lens[spacy]"
aurora-lens demo # Run demo (requires OPENAI_API_KEY or ANTHROPIC_API_KEY)
aurora-lens chat # Interactive chat — type, model replies, inspect governance
aurora-lens proxy # Run governed proxy server
Windows (PowerShell): scripts/demo.ps1 — creates a fresh audit file per run, runs a 2-turn flow that forces an intervention, and prints governance outcomes, audit verify output, and pef_snapshot.relationships (provenance + extractor_backend). Requires pip install "aurora-lens[proxy,spacy]" and an API key.
Quick start (library)
from aurora_lens.lens import Lens
from aurora_lens.config import LensConfig
from aurora_lens.adapters.claude import ClaudeAdapter
config = LensConfig(adapter=ClaudeAdapter(api_key="..."))
lens = Lens(config)
result = await lens.process("Emma has a red book.")   # turn 1: populates PEF state
result = await lens.process("What does Emma have?")   # turn 2: verified against PEF
print(result.response) # LLM's answer
print(result.flags) # Verification flags (empty = clean)
print(result.action) # PASS, SOFT_CORRECT, FORCE_REVISE, or HARD_STOP
print(result.pef_snapshot) # Current ground-truth state
Quick start (proxy server)
The proxy exposes an OpenAI-compatible /v1/chat/completions endpoint with governance applied transparently. Point any OpenAI-compatible client at it.
1. Create aurora-lens.yaml
For Anthropic:
upstream:
provider: anthropic
api_key: ${ANTHROPIC_API_KEY}
model: claude-sonnet-4-5-20250929
listen:
host: 127.0.0.1
port: 8080
governance:
default_policy: strict
audit_log: ./audit.jsonl
Demo tip: With `audit_log: null`, the `Aurora-Audit-Sink` header shows `none`. For demos, set `audit_log: ./audit.jsonl` (or another path) so the sink appears active.
For OpenAI:
upstream:
provider: openai
base_url: https://api.openai.com/v1
api_key: ${OPENAI_API_KEY}
model: gpt-4
listen:
host: 127.0.0.1
port: 8080
governance:
default_policy: strict
audit_log: ./audit.jsonl
Other providers (Grok, Gemini, Ollama, Azure, etc.): Use provider: openai with base_url pointing at any OpenAI-compatible API. No extra code or dependencies.
upstream:
provider: openai
base_url: https://api.x.ai/v1 # Grok (xAI)
api_key: ${XAI_API_KEY}
model: grok-2
upstream:
provider: openai
base_url: http://localhost:11434/v1 # Ollama (local)
api_key: "" # not required for local
model: llama3
2. Start the proxy
aurora-lens proxy
Output:
Upstream provider: anthropic | model: claude-sonnet-4-5-20250929 | listen: 127.0.0.1:8080
3. Send requests
curl http://127.0.0.1:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "claude-sonnet-4-5-20250929",
"messages": [{"role": "user", "content": "Emma has a red book."}]
}'
Governance metadata is returned in the response body under aurora:
{
"choices": [{"message": {"role": "assistant", "content": "..."}}],
"aurora": {"governance": "PASS", "turn": 1, "session_id": "...", "audit_id": "..."}
}
Streaming (stream: true)
When the request includes "stream": true, the proxy returns a Server-Sent Events (SSE) stream. The exact message order is:
1. Content chunks — one or more `data:` events in OpenAI streaming format. Each event ends with `\n\n`:

   data: {"choices":[{"delta":{"content":"..."},"index":0}]}

   Each event is a JSON object with `choices[].delta.content` containing the incremental text.

2. Aurora metadata — the final event before `[DONE]`:

   data: {"aurora":{"governance":"PASS","turn":1,"session_id":"session-abc123","audit_id":"..."}}

   Fields: `governance`, `turn`, `session_id`, `audit_id`; optionally `unverified` (when SOFT_CORRECT), `forensic_event` (when FORCE_REVISE or HARD_STOP), and `stream_truncated` / `stream_dropped_chars` (when the accumulator truncated).

3. Terminator:

   data: [DONE]
If the client disconnects mid-stream, no metadata is emitted and the upstream stream is closed. The response is not logged/ledgered as completed.
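A client has to peel the aurora metadata event off the stream itself. Here is a minimal parser for the event order above, assuming the framing shown; the function name and sample payloads are illustrative, not part of aurora-lens:

```python
import json

def parse_governed_sse(raw: str) -> tuple[str, dict]:
    """Split a complete SSE body into (assembled content, aurora metadata)."""
    content, aurora = [], {}
    for event in raw.split("\n\n"):          # events are separated by blank lines
        event = event.strip()
        if not event.startswith("data:"):
            continue
        payload = event[len("data:"):].strip()
        if payload == "[DONE]":              # terminator: stop parsing
            break
        obj = json.loads(payload)
        if "aurora" in obj:
            aurora = obj["aurora"]           # final metadata event
        else:
            for choice in obj.get("choices", []):
                content.append(choice.get("delta", {}).get("content", ""))
    return "".join(content), aurora

raw = (
    'data: {"choices":[{"delta":{"content":"Emma has "},"index":0}]}\n\n'
    'data: {"choices":[{"delta":{"content":"a red book."},"index":0}]}\n\n'
    'data: {"aurora":{"governance":"PASS","turn":1,"session_id":"s1","audit_id":"a1"}}\n\n'
    'data: [DONE]\n\n'
)
text, meta = parse_governed_sse(raw)
print(text)                 # Emma has a red book.
print(meta["governance"])   # PASS
```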
Config resolution
The proxy resolves configuration in this order:
| Priority | Source |
|---|---|
| 1 | YAML file (with ${VAR} expansion) |
| 2 | AURORA_LENS_UPSTREAM_* env vars |
| 3 | Provider-specific env var fallback (OPENAI_API_KEY, ANTHROPIC_API_KEY) |
| 4 | CLI flags (--host, --port) |
If the API key is missing after all resolution, the proxy tells you exactly which env var to set:
Error: ANTHROPIC_API_KEY missing — set it in your environment or in upstream.api_key
LangChain integration
Aurora-lens is an OpenAI-compatible proxy. Drop it in front of any LangChain chain without changing your LangChain code.
pip install "aurora-lens[langchain,proxy]"
aurora-lens proxy # start governed proxy on :8080
from aurora_lens.integrations.langchain import get_governed_chat_openai
llm = get_governed_chat_openai(
base_url="http://localhost:8080/v1",
model="gpt-4",
)
# Drop into any existing LangChain chain — governance applied transparently
response = llm.invoke("What dose of metformin should I take?")
print(response.content) # governed response (HARD_STOP if unsafe)
Every call is intercepted, verified against PEF ground truth, and written to a tamper-evident audit log. The chain sees a normal ChatOpenAI interface. Governance happens at the transport layer.
Documentation
- Operator guide — Installation, configuration, Docker, session backends, audit setup, scaling
- Integration guide — Wire aurora-lens in front of your LLM, OpenAI SDK example, session ID, external flags
- Policy reference — Flag types, axes, strict vs moderate, public vs enterprise
- API reference — Endpoints, request/response schemas, Aurora-* headers
Project structure
aurora_lens/
config.py # LensConfig
lens.py # Lens orchestrator (sandwich pipeline)
pef/ # Persistent Existence Framework (world state)
state.py # PEFState — entity/relationship store
span.py # Temporal span (PRESENT / PAST)
interpret/ # Interpretation layer (text -> PEF deltas)
base.py # ExtractionBackend ABC
schema.py # ExtractedClaim, ExtractionResult
spacy_backend.py # spaCy-based extraction (optional)
llm_backend.py # LLM-based extraction
pef_updater.py # Apply extraction results to PEF state
verify/ # Verification layer (LLM output vs PEF)
checker.py # Flag generator
flags.py # Flag, FlagType
govern/ # Governance engine
decision.py # InterventionAction, GovernanceDecision
policy.py # Strict / moderate policy rules
bridge.py # GovernanceBridge ABC + BuiltinBridge
log_slice.py # Per-trace log buffering, SHA-256 digest, ledger fingerprint
adapters/ # LLM adapters (transport layer)
base.py # LLMAdapter ABC, AdapterResponse
claude.py # Anthropic Claude
proxy/ # HTTP proxy server
config.py # ProxyConfig (YAML + env vars)
app.py # FastAPI application
__main__.py # CLI entry point
openai_compat.py # OpenAI response formatting
session.py # Session manager
logging.py # Structured logging
tests/ # 429 tests
examples/ # Example YAML configs
Governance policies
| Action | What happens |
|---|---|
| PASS | Clean response, no flags |
| SOFT_CORRECT | Response delivered unchanged; correction note in metadata |
| FORCE_REVISE | LLM re-prompted with flag context (max 1 attempt, then escalates) |
| HARD_STOP | Response blocked entirely |
Flags detected: CONTRADICTED_FACT, HALLUCINATED_ENTITY, TIME_SMEAR, IDENTITY_DRIFT, UNRESOLVED_PRONOUN.
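One way to picture a policy is a severity table mapped over the flags, taking the most severe action. The mapping below is an illustrative guess, not the shipped rules — the real strict and moderate policies live in govern/policy.py and may assign flags differently:

```python
# Hypothetical flag -> action assignment; escalation order is from the table above.
SEVERITY = {
    "UNRESOLVED_PRONOUN":  "SOFT_CORRECT",
    "TIME_SMEAR":          "FORCE_REVISE",
    "IDENTITY_DRIFT":      "FORCE_REVISE",
    "HALLUCINATED_ENTITY": "FORCE_REVISE",
    "CONTRADICTED_FACT":   "HARD_STOP",
}
ORDER = ["PASS", "SOFT_CORRECT", "FORCE_REVISE", "HARD_STOP"]

def decide(flags: list[str]) -> str:
    """Take the most severe action implied by any raised flag."""
    if not flags:
        return "PASS"
    return max((SEVERITY[f] for f in flags), key=ORDER.index)

print(decide([]))                                   # PASS
print(decide(["UNRESOLVED_PRONOUN"]))               # SOFT_CORRECT
print(decide(["TIME_SMEAR", "CONTRADICTED_FACT"]))  # HARD_STOP
```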
Log-slice anchoring (Phase 4.5)
Per-request log buffering produces a tamper-evident fingerprint for audit correlation. No log content is stored in the ledger — only a SHA-256 digest and metadata.
- Buffer caps: `MAX_ENTRIES=1000`, `MAX_BYTES=256 KiB`, `MAX_MSG_LEN=1024` per message
- Truncation: when limits are hit, `log_slice_truncated` and `log_slice_dropped_count` are set; the digest includes truncation metadata
- Ledger fields: `log_digest`, `log_entry_count`, `first_timestamp`, `last_timestamp`, `log_slice_present` (true/false)
- Digest consistency: a CI test verifies the recomputed digest matches the stored digest (including when truncated)
The proxy middleware initializes the buffer per request; governance bridges consume it at decision time and add the fingerprint to forensic envelopes and decision entries.
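Conceptually, the fingerprint is a hash over the buffered entries plus their truncation metadata, so recomputing it over the same slice detects tampering. The sketch below invents its own field layout; the real scheme is in govern/log_slice.py:

```python
import hashlib
import json

MAX_ENTRIES = 1000   # cap from the bullet list above
MAX_MSG_LEN = 1024   # per-message cap

def slice_digest(entries: list[str]) -> dict:
    """Buffer with caps, then fingerprint the slice (illustrative layout)."""
    truncated = len(entries) > MAX_ENTRIES
    kept = [e[:MAX_MSG_LEN] for e in entries[:MAX_ENTRIES]]
    # Deterministic serialization: same entries -> same digest
    payload = json.dumps(
        {"entries": kept,
         "truncated": truncated,
         "dropped": max(0, len(entries) - MAX_ENTRIES)},
        sort_keys=True,
    ).encode()
    return {
        "log_digest": hashlib.sha256(payload).hexdigest(),
        "log_entry_count": len(kept),
        "log_slice_truncated": truncated,
    }

fp = slice_digest(["request received", "verification ran: 0 flags"])
print(fp["log_entry_count"], fp["log_slice_truncated"])  # 2 False
# Recomputing over the same entries yields the same digest (tamper evidence)
assert fp["log_digest"] == slice_digest(
    ["request received", "verification ran: 0 flags"])["log_digest"]
```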
What PEF is
PEF = Persistent Existence Framework.
The rule: the world does not reset between sentences.
Entities, relationships, and commitments continue to exist even when the system is not processing text. New inputs are interpreted as proposed changes to an already-existing world, not as standalone meaning. Pronouns are resolved by checking what entities already exist. If resolution is ambiguous, the system refuses to guess.
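The rule is easy to state in code. The toy below is not the PEFState implementation — the class, the gender-tag resolution scheme, and the error type are all invented for illustration — but it shows both halves of the contract: the world persists across turns, and ambiguous pronouns are refused rather than guessed:

```python
class ToyWorld:
    """Entities persist across turns; ambiguous pronouns are refused."""

    def __init__(self) -> None:
        self.entities: dict[str, str] = {}   # name -> gender tag

    def add(self, name: str, gender: str) -> None:
        self.entities[name] = gender         # the world only accumulates

    def resolve(self, pronoun: str) -> str:
        tag = {"she": "f", "he": "m"}[pronoun]
        matches = [n for n, g in self.entities.items() if g == tag]
        if len(matches) != 1:
            raise LookupError(
                f"UNRESOLVED_PRONOUN: {pronoun!r} has "
                f"{len(matches)} candidates — refusing to guess")
        return matches[0]

world = ToyWorld()
world.add("Emma", "f")        # turn 1
print(world.resolve("she"))   # Emma — exactly one candidate, resolved
world.add("Alice", "f")       # turn 2: world persisted, now two candidates
try:
    world.resolve("she")
except LookupError as e:
    print(e)                  # refuses to guess
```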
Part of the Aurora ecosystem
- aurora-lens (this repo) — Governance substrate. Interpretation + verification + audit.
- aurora-governor — Canonical governance kernel. 28 verifier invariants, hash-chained forensic audit ledger.
- unified_rns_system — Mathematical substrate. RNS addressing, lattice memory, attestation.
Research
Stokes, Margaret, Epistemic Legitimacy as a Governance Layer for Large Language Models: Architecture and Implementation (February 16, 2026). Available at SSRN: https://ssrn.com/abstract=6244239 or http://dx.doi.org/10.2139/ssrn.6244239
Requirements
- Python 3.10+
- httpx >= 0.24
License
Proprietary. Copyright (c) 2025 Margaret Stokes. All rights reserved. See LICENSE.
Author
Margaret Stokes