# Director-AI

Real-time LLM hallucination guardrail — NLI + RAG fact-checking with token-level streaming halt.

## What It Does
Director-AI sits between your LLM and the user. It scores every output for hallucination before it reaches anyone — and can halt generation mid-stream if coherence drops below threshold.
```mermaid
graph LR
    LLM["LLM<br/>(any provider)"] --> D["Director-AI"]
    D --> S["Scorer<br/>NLI + RAG"]
    D --> K["StreamingKernel<br/>token-level halt"]
    S --> V{Approved?}
    K --> V
    V -->|Yes| U["User"]
    V -->|No| H["HALT + evidence"]
```
Three things make it different:
- Token-level streaming halt — not post-hoc review. Severs output the moment coherence degrades.
- Dual-entropy scoring — NLI contradiction detection (DeBERTa) + RAG fact-checking against your knowledge base.
- Your data, your rules — ingest your own documents. The scorer checks against your ground truth.
## Scope

100% Python — no compiled extensions required. Works on any platform with Python 3.10+.

| Layer | Packages | Install |
|---|---|---|
| Core (zero heavy deps) | CoherenceScorer, StreamingKernel, GroundTruthStore, SafetyKernel | `pip install director-ai` |
| NLI models | DeBERTa, FactCG, MiniCheck, ONNX Runtime | `pip install director-ai[nli]` |
| Vector DBs | ChromaDB, Pinecone, Weaviate, Qdrant | `pip install director-ai[vector]` |
| LLM judge | OpenAI, Anthropic escalation | `pip install director-ai[openai]` |
| Observability | OpenTelemetry spans | `pip install director-ai[otel]` |
| Server | FastAPI + Uvicorn | `pip install director-ai[server]` |
## Quickstart

| Method | Command |
|---|---|
| pip install | `pip install director-ai` |
| CLI scaffold | `director-ai quickstart --profile medical` |
| HF Spaces | Try it live |
| Docker | `docker run -p 8080:8080 ghcr.io/anulum/director-ai:latest` |
### 6-line guard

```python
from director_ai import guard
from openai import OpenAI

client = guard(
    OpenAI(),
    facts={"refund_policy": "Refunds within 30 days only"},
    threshold=0.6,
)
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is the refund policy?"}],
)
```
### Catch and inspect a halt

```python
from director_ai import guard, HallucinationError
from openai import OpenAI

client = guard(OpenAI(), facts={"policy": "Refunds within 30 days only"})
try:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "What is the refund policy?"}],
    )
except HallucinationError as exc:
    print(f"HALTED: coherence={exc.score.score:.3f}")
    print(f"Evidence: {exc.score.evidence}")
```
### Score a response

```python
from director_ai.core import CoherenceScorer, GroundTruthStore

store = GroundTruthStore()
store.add("sky color", "The sky is blue due to Rayleigh scattering.")

scorer = CoherenceScorer(threshold=0.6, ground_truth_store=store)
approved, score = scorer.review("What color is the sky?", "The sky is green.")
print(approved)     # False
print(score.score)  # 0.42
```
### Streaming halt

```python
from director_ai.core import StreamingKernel

kernel = StreamingKernel(hard_limit=0.4, window_size=5)
# token_generator yields tokens; my_scorer returns a per-token coherence score
session = kernel.stream_tokens(token_generator, lambda tok: my_scorer(tok))
if session.halted:
    print(f"Halted at token {session.halt_index}: {session.halt_reason}")
```
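Independently of the library, the halt mechanism can be sketched in plain Python. Everything below is illustrative (the window rule, toy scorer, and function name are assumptions, not Director-AI internals): the stream is severed as soon as the mean score over a sliding window drops below a hard limit.

```python
from collections import deque

def stream_with_halt(tokens, score_fn, hard_limit=0.4, window_size=5):
    """Emit tokens until the mean score over a sliding window drops below hard_limit.

    Returns (emitted_tokens, halt_index); halt_index is None if no halt occurred.
    """
    window = deque(maxlen=window_size)
    emitted = []
    for i, tok in enumerate(tokens):
        window.append(score_fn(tok))
        # Only judge once the window is full, so early tokens are not penalised.
        if len(window) == window_size and sum(window) / window_size < hard_limit:
            return emitted, i  # sever the stream: tok is never emitted
        emitted.append(tok)
    return emitted, None

# Toy scorer: tokens grounded in the fact set score high, everything else low.
facts = {"refunds", "within", "30", "days"}
score = lambda tok: 1.0 if tok.lower().strip(".") in facts else 0.1

tokens = ["Refunds", "within", "30", "days", "plus", "a", "lifetime", "warranty", "offer"]
emitted, halt_at = stream_with_halt(tokens, score, hard_limit=0.5, window_size=3)
# Halts at index 5, so the fabricated "lifetime warranty" never reaches the user.
```

The real `StreamingKernel` wraps this idea around a live token generator and records the halt reason and evidence on the session object.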
## Installation

```shell
pip install director-ai                       # heuristic scoring
pip install "director-ai[nli]"                # NLI model (DeBERTa)
pip install "director-ai[vector]"             # ChromaDB knowledge base
pip install "director-ai[nli,vector,server]"  # production stack
```
Framework integrations: [langchain], [llamaindex], [langgraph], [haystack], [crewai].
Full installation guide: docs.
### Docker

```shell
docker run -p 8080:8080 ghcr.io/anulum/director-ai:latest           # CPU
docker run --gpus all -p 8080:8080 ghcr.io/anulum/director-ai:gpu   # GPU
```
## Benchmarks

### Accuracy — LLM-AggreFact (29,320 samples)
| Model | Balanced Acc | Params | Latency | Streaming |
|---|---|---|---|---|
| Bespoke-MiniCheck-7B | 77.4% | 7B | ~100 ms | No |
| Director-AI (FactCG) | 75.8% | 0.4B | 14.6 ms | Yes |
| MiniCheck-Flan-T5-L | 75.0% | 0.8B | ~120 ms | No |
| MiniCheck-DeBERTa-L | 72.6% | 0.4B | ~120 ms | No |
75.8% balanced accuracy at 17x fewer params than the leader. 14.6 ms/pair with ONNX GPU batching — faster than every competitor at this accuracy tier. Director-AI's unique value is the system: NLI + KB + streaming halt.
Full results: benchmarks/comparison/COMPETITOR_COMPARISON.md.
## Domain Presets

8 built-in profiles ship with tuned thresholds; for example:

```shell
director-ai config --profile medical    # threshold=0.75, NLI on, reranker on
director-ai config --profile finance    # threshold=0.70, w_fact=0.6
director-ai config --profile legal     # threshold=0.68, w_logic=0.6
director-ai config --profile creative  # threshold=0.40, permissive
```
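The preset idea can be mirrored as a plain lookup table. The threshold values below come from the CLI output above, but the dict layout, field names, and `resolve_threshold` helper are illustrative assumptions, not the package's config schema:

```python
# Hypothetical preset table; the real package ships these as config profiles.
PRESETS = {
    "medical":  {"threshold": 0.75, "nli": True, "reranker": True},
    "finance":  {"threshold": 0.70, "w_fact": 0.6},
    "legal":    {"threshold": 0.68, "w_logic": 0.6},
    "creative": {"threshold": 0.40},  # permissive
}

def resolve_threshold(profile: str, default: float = 0.6) -> float:
    """Fall back to a neutral default when the profile is unknown."""
    return PRESETS.get(profile, {}).get("threshold", default)
```

Stricter domains (medical, finance, legal) raise the threshold so more borderline output is halted; creative use lowers it to stay permissive.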
## Known Limitations

- Heuristic fallback is weak: without `[nli]`, scoring uses word-overlap heuristics (~55% accuracy). Use `strict_mode=True` to reject (0.9) instead of guessing.
- Summarisation is a weak spot: NLI models under-perform on summarisation (AggreFact-CNN: 68.8%, ExpertQA: 59.1%).
- ONNX CPU is slow: 383 ms/pair without GPU. Use `onnxruntime-gpu` for production.
- Weights are domain-dependent: the default `w_logic=0.6, w_fact=0.4` suits general QA. Adjust for your domain.
- Chunked NLI: very short chunks (<3 sentences) may lose context.
- LLM-as-judge sends data externally: when `llm_judge_enabled=True`, the truncated prompt+response (500 chars) is sent to the configured provider (OpenAI/Anthropic). Do not enable in privacy-sensitive deployments without user consent.
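To see why a word-overlap fallback is weak, here is a toy version of such a heuristic (not the package's actual implementation): lexical overlap rewards shared vocabulary even when the claim contradicts the fact, and punishes faithful paraphrases that use different words.

```python
def overlap_score(fact: str, claim: str) -> float:
    """Jaccard word overlap — a stand-in for the heuristic fallback."""
    a, b = set(fact.lower().split()), set(claim.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

fact = "refunds are available within 30 days"
true_claim = "you can get your money back inside a month"
false_claim = "refunds are available within 365 days"

# The paraphrased true claim shares no words with the fact and scores 0,
# while the contradicting claim shares almost all of them and scores high.
low = overlap_score(fact, true_claim)
high = overlap_score(fact, false_claim)
```

This is exactly the failure mode `strict_mode=True` guards against: rather than report a misleading overlap score, the scorer rejects outright when no NLI model is available.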
## Migrating from 1.x

| 1.x name | 2.x name | Notes |
|---|---|---|
| DirectorModule | CoherenceScorer | Same API, new name |
| BackfireKernel | SafetyKernel | Same API, new name |
| StrangeLoopAgent | CoherenceAgent | Same API, new name |
| KnowledgeBase | GroundTruthStore | Same API, new name |
| MockActor | MockGenerator | Same API, new name |
| RealActor | LLMGenerator | Same API, new name |

Old names still work but emit `DeprecationWarning`. They will be removed in 3.0.
Breaking changes in 2.3.0:

- `strict_mode=True` now rejects (divergence=0.9) when NLI is unavailable, instead of returning neutral 0.5.
- `guard()` uses duck-type detection instead of module-name checks. Custom clients that expose `client.chat.completions.create` are now accepted.
- Enterprise modules are lazy-loaded since 2.2.0 — `import director_ai` no longer pulls heavy deps.
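The duck-type detection change can be illustrated with a plain-Python sketch (the helper below is hypothetical, not the package's code): any object that exposes a callable `client.chat.completions.create` passes, regardless of which module it comes from.

```python
def looks_like_chat_client(client) -> bool:
    """Duck-type check: accept any object exposing chat.completions.create."""
    chat = getattr(client, "chat", None)
    completions = getattr(chat, "completions", None)
    return callable(getattr(completions, "create", None))

class _Completions:
    def create(self, **kwargs):
        return {"choices": []}

class CustomClient:
    """Not from the openai module, but structurally compatible."""
    def __init__(self):
        self.chat = type("Chat", (), {"completions": _Completions()})()

looks_like_chat_client(CustomClient())  # True
looks_like_chat_client(object())        # False
```

Compared with module-name checks, this lets proxies, test doubles, and self-hosted gateways wrap the OpenAI client shape without pretending to be the `openai` package.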
## Citation

```bibtex
@software{sotek2026director,
  author  = {Sotek, Miroslav},
  title   = {Director-AI: Real-time LLM Hallucination Guardrail},
  year    = {2026},
  url     = {https://github.com/anulum/director-ai},
  version = {2.3.0},
  license = {AGPL-3.0-or-later}
}
```
## License
Dual-licensed:
- Open-Source: GNU AGPL v3.0 — research, personal use, open-source projects.
- Commercial: Proprietary license — removes copyleft for closed-source and SaaS.
See Licensing for pricing tiers and FAQ.
Contact: anulum.li/contact | invest@anulum.li
## Contributing
See CONTRIBUTING.md. By contributing, you agree to AGPL v3 terms.
## Project details

### Download files
### File details: director_ai-2.3.0.tar.gz (Source Distribution)

- Size: 176.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `13ce7703caac43eadaefda0931097b12e9c8cecac57094780c237e7fc327a6b2` |
| MD5 | `5c76eb862f4e97c91f4a819d2b8f0475` |
| BLAKE2b-256 | `c00ab393c6c1ef9e6665f5df2f0bfb25d8073be814158c9ae0cd59934682bee1` |
#### Provenance

The following attestation bundle was made for director_ai-2.3.0.tar.gz:

Publisher: publish.yml on anulum/director-ai

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: director_ai-2.3.0.tar.gz
- Subject digest: 13ce7703caac43eadaefda0931097b12e9c8cecac57094780c237e7fc327a6b2
- Sigstore transparency entry: 1011049954
- Permalink: anulum/director-ai@e03d356ba92e5dca7be1873a5ffa121ed6135740
- Branch / Tag: refs/tags/v2.3.0
- Owner: https://github.com/anulum
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@e03d356ba92e5dca7be1873a5ffa121ed6135740
- Trigger Event: release
### File details: director_ai-2.3.0-py3-none-any.whl (Built Distribution)

- Size: 111.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

| Algorithm | Hash digest |
|---|---|
| SHA256 | `dfe0a433f885c48a5555a52d0aa1c12aa8a074253b5d43aef6bf87d1b5fccb6b` |
| MD5 | `6b6459559c15f990b5dcee872caa6544` |
| BLAKE2b-256 | `04c5683d36d2959372849124c2b704baca404e3fe0ee858b4b630ef06efa21e4` |
#### Provenance

The following attestation bundle was made for director_ai-2.3.0-py3-none-any.whl:

Publisher: publish.yml on anulum/director-ai

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: director_ai-2.3.0-py3-none-any.whl
- Subject digest: dfe0a433f885c48a5555a52d0aa1c12aa8a074253b5d43aef6bf87d1b5fccb6b
- Sigstore transparency entry: 1011049991
- Permalink: anulum/director-ai@e03d356ba92e5dca7be1873a5ffa121ed6135740
- Branch / Tag: refs/tags/v2.3.0
- Owner: https://github.com/anulum
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@e03d356ba92e5dca7be1873a5ffa121ed6135740
- Trigger Event: release