Director-AI
Real-time LLM hallucination guardrail — NLI + RAG fact-checking with token-level streaming halt
What It Does
Director-AI sits between your LLM and the user. It scores every output for hallucination before it reaches anyone — and can halt generation mid-stream if coherence drops below threshold.
graph LR
LLM["LLM<br/>(any provider)"] --> D["Director-AI"]
D --> S["Scorer<br/>NLI + RAG"]
D --> K["StreamingKernel<br/>token-level halt"]
S --> V{Approved?}
K --> V
V -->|Yes| U["User"]
V -->|No| H["HALT + evidence"]
Ten things make it different:
- Token-level streaming halt — not post-hoc review. Severs output the moment coherence degrades.
- Dual-entropy scoring — NLI contradiction detection (DeBERTa) + RAG fact-checking against your knowledge base.
- Meta-confidence — the guardrail tells you how confident it is in its own verdict. Route low-confidence results to human review.
- Structured output verification — JSON schema validation, tool call fabrication detection, code hallucinated API detection. Zero dependencies (stdlib only).
- Online calibration — collects human feedback, automatically adjusts thresholds for your deployment. The longer you use it, the better it gets.
- Contradiction tracking — detects when an AI contradicts itself across conversation turns.
- EU AI Act compliance — automated Article 15 documentation. Accuracy metrics, drift detection, feedback loop detection, audit trails, per-model breakdown with confidence intervals. Ready for August 2026 enforcement.
- Verification gems — numeric consistency checks, reasoning chain verification, temporal freshness scoring, cross-model consensus, conformal prediction intervals. All stdlib-only, zero dependencies.
- Agentic loop monitor — detects circular tool calls, goal drift, and budget exhaustion in AI agent loops. The first guardrail that monitors agent execution, not just individual calls.
- Adversarial self-test — 25-pattern robustness suite tests your guardrail against zero-width chars, homoglyphs, encoding tricks, and prompt injection.
Scope
Pure Python core — no compiled extensions required. Optional Rust kernel (pip install director-ai[rust]) for SIMD-accelerated scoring. Works on any platform with Python 3.11+.
| Layer | Packages | Install |
|---|---|---|
| Core (zero heavy deps) | CoherenceScorer, StreamingKernel, GroundTruthStore, HaltMonitor | pip install director-ai |
| NLI models | DeBERTa, FactCG, MiniCheck, ONNX Runtime | pip install director-ai[nli] |
| Vector DBs | ChromaDB ([vector]), Pinecone ([pinecone]), Weaviate ([weaviate]), Qdrant ([qdrant]) | pip install director-ai[vector] |
| LLM judge | OpenAI, Anthropic escalation | pip install director-ai[openai] |
| Observability | OpenTelemetry spans | pip install director-ai[otel] |
| Server | FastAPI + Uvicorn | pip install director-ai[server] |
Four Ways to Add Guardrails
A: Wrap your SDK (6 lines)
Duck-type detection for five SDK shapes: OpenAI-compatible (OpenAI, vLLM, Groq, LiteLLM, Ollama), Anthropic, AWS Bedrock, Google Gemini, and Cohere.
from director_ai import guard
from openai import OpenAI
client = guard(
    OpenAI(),
    facts={"refund_policy": "Refunds within 30 days only"},
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the refund policy?"}],
)
B: One-shot check (4 lines)
Score a single prompt/response pair without an SDK client:
from director_ai import score
cs = score("What is the refund policy?", response_text,
           facts={"refund": "Refunds within 30 days only"},
           threshold=0.3)
print(f"Coherence: {cs.score:.3f} Approved: {cs.approved}")
C: Zero code changes (2 lines)
Point any OpenAI-compatible client at the proxy:
pip install director-ai[server]
director-ai proxy --port 8080 --facts kb.txt --threshold 0.3
Then set OPENAI_BASE_URL=http://localhost:8080/v1 in your app. Every response
gets scored; hallucinations are rejected (or flagged with --on-fail warn).
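With the proxy in place, the stock OpenAI client needs no code changes beyond the base URL. A minimal sketch using the client's base_url override instead of the environment variable (the API key is still read from OPENAI_API_KEY as usual):

from openai import OpenAI

# Route every call through the Director-AI proxy instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8080/v1")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the refund policy?"}],
)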
D: FastAPI middleware (3 lines)
Guard your own API endpoints:
from fastapi import FastAPI
from director_ai.integrations.fastapi_guard import DirectorGuard

app = FastAPI()
app.add_middleware(
    DirectorGuard,
    facts={"policy": "Refunds within 30 days only"},
    on_fail="reject",
)
Responses on POST endpoints get X-Director-Score and X-Director-Approved
headers. Set paths=["/api/chat"] to limit which endpoints are scored.
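Downstream callers can read the verdict straight from those headers. A minimal sketch with requests; the /api/chat path and JSON body shape are illustrative, not part of the middleware contract:

import requests

r = requests.post("http://localhost:8000/api/chat",
                  json={"message": "What is the refund policy?"})
# Headers added by DirectorGuard on scored POST responses.
print(r.headers.get("X-Director-Score"), r.headers.get("X-Director-Approved"))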
Installation
pip install "director-ai[nli]" # recommended — NLI model scoring
pip install "director-ai[nli,vector,server]" # production stack with RAG + REST API
pip install director-ai # heuristic-only (limited accuracy)
Privacy note: The optional LLM judge mode (llm_judge_enabled=True) sends truncated prompt+response fragments (500 chars) to an external provider (OpenAI or Anthropic). Do not enable it in privacy-sensitive deployments without user consent. The default NLI-only mode runs entirely locally with no external calls.
Extras: [vector] (ChromaDB), [finetune] (domain adaptation), [ingestion] (PDF/DOCX parsing), [colbert] (late-interaction retrieval).
Framework integrations: [langchain], [llamaindex], [langgraph], [haystack], [crewai], Semantic Kernel, DSPy/Instructor.
Kubernetes: Helm chart with GPU toggle, HPA, Sigstore-signed releases.
Voice AI: VoiceGuard — real-time token filter for TTS pipelines (guide).
Full installation guide: docs.
Docker
Dockerfile included for self-hosted builds. Pre-built images not yet published to a registry.
docker build -t director-ai . # build locally
docker run -p 8080:8080 director-ai # CPU
docker build -f Dockerfile.gpu -t director-ai:gpu . # GPU build
docker run --gpus all -p 8080:8080 director-ai:gpu # GPU
Benchmarks
Accuracy — LLM-AggreFact (29,320 samples)
Scoring model: yaxili96/FactCG-DeBERTa-v3-Large (0.4B params, MIT license).
| Model | Balanced Acc | Params | Latency | Streaming |
|---|---|---|---|---|
| Bespoke-MiniCheck-7B | 77.4% | 7B | ~100 ms | No |
| Director-AI (FactCG) | 75.8% | 0.4B | 14.6 ms | Yes |
| MiniCheck-Flan-T5-L | 75.0% | 0.8B | ~120 ms | No |
| MiniCheck-DeBERTa-L | 72.6% | 0.4B | ~120 ms | No |
75.8% balanced accuracy comes from the FactCG-DeBERTa-v3-Large model (77.2% in the NAACL 2025 paper; our eval yields 75.86% due to threshold tuning and data split version). Latency: 14.6 ms/pair measured on GTX 1060 6GB with ONNX GPU batching (16-pair batch, 30 iterations, 5 warmup). Director-AI's unique value is the system: NLI + KB + streaming halt.
Full results: benchmarks/comparison/COMPETITOR_COMPARISON.md.
Performance trade-offs and E2E pipeline metrics: docs.
Domain Presets
10 built-in profiles with preset thresholds (starting points — adjust for your data):
director-ai config --profile medical # threshold=0.30, NLI on, reranker on
director-ai config --profile finance # threshold=0.30, w_fact=0.6
director-ai config --profile legal # threshold=0.30, w_logic=0.6
director-ai config --profile creative # threshold=0.40, permissive
Domain-specific benchmark scripts exist but have not yet been validated with measured results. Run them yourself (requires GPU + HuggingFace datasets):
python -m benchmarks.medical_eval # MedNLI + PubMedQA
python -m benchmarks.legal_eval # ContractNLI + CUAD (RAGBench)
python -m benchmarks.finance_eval # FinanceBench + Financial PhraseBank
Known Limitations
- Heuristic fallback is weak: Without [nli], scoring uses word-overlap heuristics (~55% accuracy). Use strict_mode=True to reject (0.9) instead of guessing.
- Summarisation FPR at 10.5%: Reduced from 95% via bidirectional NLI + baseline calibration (v3.5). AggreFact-CNN: 68.8%, ExpertQA: 59.1% (structurally expected at 0.4B params).
- ONNX CPU is slow: 383 ms/pair without GPU. Use onnxruntime-gpu for production.
- Weights are domain-dependent: Default w_logic=0.6, w_fact=0.4 suits general QA. Adjust for your domain or use a built-in profile.
- LLM-as-judge sends data externally: When llm_judge_enabled=True, truncated prompt+response fragments (500 chars) are sent to the configured provider. Do not enable in privacy-sensitive deployments without user consent.
- Threshold defaults differ by API surface: guard()/score() default to threshold=0.3 (permissive); DirectorConfig defaults to coherence_threshold=0.6 (conservative). Always set the threshold explicitly (see the sketch after this list).
- NLI-only scoring needs KB grounding: Without a knowledge base, PubMedQA F1=62.1%, FinanceBench 80%+ FPR. Load your domain facts into the vector store — that's where Director-AI's scoring discriminates well.
- Long documents need ≥16GB VRAM: Legal contracts and SEC filings exceed 6GB during chunked NLI inference.
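Because the two API surfaces default differently, pin the threshold explicitly everywhere. A minimal sketch; the facts dict is illustrative and only the threshold= keyword shown earlier is used:

from openai import OpenAI
from director_ai import guard, score

facts = {"refund_policy": "Refunds within 30 days only"}

# One explicit threshold for both surfaces instead of their differing defaults
# (0.3 for guard()/score(), 0.6 for DirectorConfig's coherence_threshold).
client = guard(OpenAI(), facts=facts, threshold=0.3)
cs = score("What is the refund policy?",
           "Refunds are available within 30 days of purchase.",
           facts=facts, threshold=0.3)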
Citation
@software{sotek2026director,
author = {Sotek, Miroslav},
title = {Director-AI: Real-time LLM Hallucination Guardrail},
year = {2026},
url = {https://github.com/anulum/director-ai},
version = {3.11.0},
license = {AGPL-3.0-or-later}
}
License
Dual-licensed:
- Open-Source: GNU AGPL v3.0 — research, personal use, open-source projects.
- Commercial: Proprietary license — removes copyleft for closed-source and SaaS.
See Licensing for pricing tiers and FAQ.
Contact: anulum.li | director.class.ai@anulum.li
Community
Join the Director-AI Discord for CI notifications, release announcements, and support. The Discord bot also provides /version, /docs, /install, /status, and /quickstart slash commands.
Contributing
See CONTRIBUTING.md. By contributing, you agree to AGPL v3 terms.
Developed by ANULUM / Fortis Studio