Little Canary
Sacrificial LLM instances as behavioral probes for prompt injection detection
What it does
- Runs a fast structural filter (regex + decode/recheck for base64, hex, ROT13, reverse encodings)
- Probes raw input with a small sacrificial "canary" model and checks for behavioral compromise
- Returns either block, flag + advisory, or pass depending on mode
When to use
- You run an LLM app or agent and want a lightweight pre-check for prompt injection
- You can tolerate ~250ms additional latency per input
- You want a model-agnostic layer that works with your existing stack
When not to use
- You need formal security guarantees or audited benchmark comparability
- You cannot accept pass-through behavior when the canary is unavailable (see Fail-open design)
Results snapshot
- 99.0% detection on TensorTrust (400 real attacks, Claude Opus), 94.8% with 3B local model
- 0% false positives on 40 realistic customer chatbot prompts
- ~250ms latency per check on consumer hardware
TensorTrust benchmark: see Benchmark Results and Limitations for methodology and caveats.
Table of Contents
- Quick Start
- Agent Systems Quick Start
- How It Works
- Deployment Modes
- Fail-open Design
- Benchmark Results
- Integration Examples
- API Quick Reference
- Running the Benchmarks
- Project Structure
- Troubleshooting
- Limitations
- Roadmap
- Contributing
- Citation
- License
Quick Start
```bash
# 1. Install Ollama and pull a canary model
ollama pull qwen2.5:1.5b

# 2. Install Little Canary
pip install little-canary
```

```python
from little_canary import SecurityPipeline

pipeline = SecurityPipeline(canary_model="qwen2.5:1.5b", mode="full")

def handle(user_input):
    verdict = pipeline.check(user_input)
    if not verdict.safe:
        return "Sorry, I couldn't process that request."
    # Prepend advisory to your existing system prompt
    system = verdict.advisory.to_system_prefix() + "\n" + your_system_prompt
    return your_llm(system=system, messages=[{"role": "user", "content": user_input}])
```
That's it. Your LLM, your app, your logic. The canary adds a security layer in front.
Agent Systems Quick Start
For modern agent stacks, treat Little Canary as inbound risk sensing, not your only control plane.
Recommended deployment pattern:
- Ingress-scan all untrusted text (chat, email, web content, tool output) with `pipeline.check()`.
- Block or flag using `mode="full"` or `mode="block"`, depending on risk tolerance.
- Attach the advisory (`verdict.advisory.to_system_prefix()`) before planner/tool decisions.
- Pair with outbound/runtime controls (e.g., a command/domain policy monitor) for containment.
Minimal agent wrapper:
```python
def guarded_agent(untrusted_input):
    verdict = pipeline.check(untrusted_input)
    if not verdict.safe:
        return {"status": "blocked", "reason": verdict.summary}
    guarded_system = verdict.advisory.to_system_prefix() + "\n" + base_system_prompt
    return run_agent(system=guarded_system, user_input=untrusted_input)
```
Little Canary is strongest when paired with runtime enforcement (outbound policy + incident logs), especially for autonomous tool-using agents.
How It Works
```
User Input --> Structural Filter (1ms) --> Canary Probe (250ms) --> Your LLM
                      |                           |
               Known patterns             Behavioral analysis
              (regex + encoding)     (did the canary get owned?)
```
**Layer 1: Structural Filter (~1ms).** Regex-based detection of known attack patterns, plus decode-then-recheck for base64, hex, ROT13, and reverse-encoded payloads.
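A minimal sketch of the decode-then-recheck idea (the pattern and helpers below are illustrative stand-ins, not the library's internal code):

```python
import base64
import binascii
import codecs
import re

# Illustrative pattern only; the real filter uses a much broader rule set.
SUSPICIOUS = re.compile(r"ignore (all )?previous instructions", re.IGNORECASE)

def decode_candidates(text: str):
    """Yield plausible decodings of the input for re-checking."""
    yield text[::-1]                    # reverse encoding
    yield codecs.decode(text, "rot13")  # ROT13
    try:
        yield base64.b64decode(text, validate=True).decode("utf-8", "ignore")
    except (binascii.Error, ValueError):
        pass
    try:
        yield bytes.fromhex(text.strip()).decode("utf-8", "ignore")
    except ValueError:
        pass

def structural_check(text: str) -> bool:
    """True if the raw text or any decoding matches a known attack pattern."""
    return bool(SUSPICIOUS.search(text)) or any(
        SUSPICIOUS.search(decoded) for decoded in decode_candidates(text)
    )
```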
**Layer 2: Canary Probe (~250ms).** Feeds the raw input to a small sacrificial LLM (qwen2.5:1.5b by default) at temperature 0 for deterministic output. The canary's response is then analyzed for signs of compromise: persona adoption, instruction compliance, system prompt leakage, and refusal collapse.
Analysis Layer (pluggable)
- Default: regex-based `BehavioralAnalyzer` — fast, zero dependencies
- Experimental: `LLMJudge` — a second model classifies the canary's output as SAFE/UNSAFE
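For intuition, a toy version of the regex approach might look like this (the signal patterns below are made up for illustration and are not the shipped `BehavioralAnalyzer` rules):

```python
import re

# Hypothetical compromise signals, for illustration only.
COMPROMISE_SIGNALS = {
    "persona_adoption": re.compile(r"\bI am now\b|\bas DAN\b", re.IGNORECASE),
    "prompt_leakage":   re.compile(r"my (system )?(prompt|instructions)", re.IGNORECASE),
    "refusal_collapse": re.compile(r"\bsure, here('s| is)\b", re.IGNORECASE),
}

def analyze(canary_output: str) -> list[str]:
    """Return the names of compromise signals found in the canary's reply."""
    return [name for name, pattern in COMPROMISE_SIGNALS.items()
            if pattern.search(canary_output)]
```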
Advisory System
Suspicious inputs that aren't hard-blocked generate a `SecurityAdvisory` prepended to your production LLM's system prompt, warning it about detected signals.
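A sketch using the documented fields (see API Quick Reference); `pipeline`, `user_input`, and `your_system_prompt` are assumed from the Quick Start:

```python
verdict = pipeline.check(user_input)

system = your_system_prompt
if verdict.advisory.flagged:
    # Prepend the warning only when suspicious signals were detected
    system = verdict.advisory.to_system_prefix() + "\n" + system
```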
Why a sacrificial model?
Every existing defense classifies inputs. Little Canary observes what attacks do to a model and reads the aftermath:
- Llama Guard evaluates content against safety categories. Little Canary detects behavioral compromise, not content safety violations.
- Prompt Guard detects injection patterns in input text. Little Canary uses actual LLM behavioral response rather than input-side classification.
- NeMo Guardrails uses rules and LLM calls to control dialogue flow. Little Canary works with any LLM stack, no framework required.
The canary is deliberately small and weak. It gets compromised by attacks that your production LLM might resist. That's the point — a compromised canary is a strong signal.
Deployment Modes
| Mode | Behavior | Best For |
|---|---|---|
| `block` | Hard-blocks detected attacks | Customer chatbots, zero-tolerance systems |
| `advisory` | Never blocks, flags for production LLM | Zero-downtime systems, monitoring |
| `full` | Blocks obvious attacks, flags ambiguous ones | Agents, email processors, hybrid workflows |
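For example, `advisory` mode (the one mode not shown in Integration Examples below) suits zero-downtime monitoring. A sketch, where `log_flag` and `call_your_llm` are placeholders for your own hooks:

```python
from little_canary import SecurityPipeline

pipeline = SecurityPipeline(canary_model="qwen2.5:1.5b", mode="advisory")

def handle(user_input, base_prompt):
    verdict = pipeline.check(user_input)        # advisory mode never blocks
    if verdict.advisory.flagged:
        log_flag(user_input, verdict.advisory)  # placeholder logging hook
    system = verdict.advisory.to_system_prefix() + "\n" + base_prompt
    return call_your_llm(system=system, user_input=user_input)
```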
Fail-open Design
> [!NOTE]
> If Ollama is unavailable, the pipeline passes all inputs through unscreened. This is a deliberate availability-over-security tradeoff.
How to operate safely:
- Call `pipeline.health_check()` at startup to verify the canary model is reachable
- Monitor the `canary_available` field in health check output
- Alert if the canary becomes unavailable in production (a minimal guard is sketched below)
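A minimal startup guard, assuming the documented `health_check()` API; `alert_oncall` is a placeholder for your own alerting hook:

```python
# Run once at startup (and periodically in production).
health = pipeline.health_check()
if not health["canary_available"]:
    # Placeholder hook: without the canary, inputs pass through unscreened
    alert_oncall("Little Canary: canary model unreachable")
```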
Benchmark Results
v0.2.0 is benchmarked against 400 real-world TensorTrust attacks; the category breakdown below comes from an internal red-team suite of 160 adversarial prompts across 9 attack categories, plus a separate false-positive test of 40 realistic chatbot prompts.
| Metric | Value |
|---|---|
| TensorTrust detection rate | 99.0% (400 attacks, Claude Opus as production LLM) |
| 3B local model detection rate | 94.8% (TensorTrust, 400 attacks) |
| Canary standalone block rate | 37% (canary + structural filter alone) |
| False positive rate | 0/40 on realistic chatbot traffic |
| Latency | ~250ms per check |
Detection by category:
| Category | Effective Rate | Attacks |
|---|---|---|
| Role escalation | 90% | 20 |
| Benign wrapper | 70% | 20 |
| Multi-step trap | 70% | 20 |
| Classic injection | 65% | 20 |
| Tool trigger | 65% | 20 |
| Context stuffing | 50% | 20 |
| Encoding/obfuscation | 40% | 20 |
| Paired obvious | — | 10 |
| Paired stealthy | — | 10 |
> [!NOTE]
> TensorTrust validated. v0.2.0 results are benchmarked against 400 real-world TensorTrust attacks. The internal red-team category breakdown above is from the original v0.1.0 test suite. See littlecanary.ai for multi-model comparison.
Integration Examples
Customer Chatbot (Block Mode)
```python
from little_canary import SecurityPipeline

pipeline = SecurityPipeline(canary_model="qwen2.5:1.5b", mode="block")

def handle_message(user_input):
    verdict = pipeline.check(user_input)
    if not verdict.safe:
        return "I'm sorry, I couldn't process that. Could you rephrase?"
    return call_your_llm(user_input)
```
Email Agent (Full Mode)
```python
from little_canary import SecurityPipeline

pipeline = SecurityPipeline(canary_model="qwen2.5:1.5b", mode="full")

def process_email(email_body, sender):
    verdict = pipeline.check(email_body)
    if not verdict.safe:
        quarantine(email_body, sender, verdict.summary)
        return
    system = verdict.advisory.to_system_prefix() + "\n" + agent_prompt
    agent.process(system=system, content=email_body)
```
See examples/ for complete integration code.
API Quick Reference
```python
from little_canary import SecurityPipeline

# Initialize
pipeline = SecurityPipeline(
    canary_model="qwen2.5:1.5b",          # any Ollama model
    mode="full",                          # "block", "advisory", or "full"
    ollama_url="http://localhost:11434",
    canary_timeout=10.0,
)

# Check input
verdict = pipeline.check(user_input)
verdict.safe                         # bool — is input safe to forward?
verdict.blocked_by                   # str or None — "structural_filter" or "canary_probe"
verdict.advisory                     # SecurityAdvisory — flagged signals
verdict.advisory.flagged             # bool — were suspicious signals detected?
verdict.advisory.to_system_prefix()  # str — prepend to your system prompt
verdict.total_latency                # float — seconds

# Health check
health = pipeline.health_check()
health["canary_available"]           # bool
```
Running the Benchmarks
```bash
# Red team suite (160 adversarial + 20 safe prompts, live dashboard)
cd benchmarks
python3 red_team_runner.py --canary qwen2.5:1.5b
# Dashboard at http://localhost:8899

# False positive test (40 realistic prompts)
python3 run_fp_test.py

# Full pipeline test (canary + production LLM)
python3 full_pipeline_test.py --canary qwen2.5:1.5b --production gemma3:27b --attacks-only
```
Project Structure
```
little-canary/
├── little_canary/            # Core package (pip install .)
│   ├── __init__.py
│   ├── py.typed               # PEP 561 type marker
│   ├── structural_filter.py   # Layer 1: regex + encoding detection
│   ├── canary.py              # Layer 2: sacrificial LLM probe
│   ├── analyzer.py            # Behavioral analysis (regex-based)
│   ├── judge.py               # LLM judge (experimental, replaces regex)
│   └── pipeline.py            # Orchestration + three deployment modes
├── tests/                     # Unit tests (pytest, 98%+ coverage)
├── examples/                  # Integration examples
├── benchmarks/                # Test suites and dashboard
├── .github/                   # CI, issue templates, dependabot
├── pyproject.toml
└── requirements.txt
```
Troubleshooting
"Cannot connect to Ollama"
- Ensure Ollama is running: `ollama serve` (or check with `pgrep ollama`)
- Verify the URL: default is `http://localhost:11434`
- Test connectivity: `curl http://localhost:11434/api/tags`

"Model not found"

- Pull the model first: `ollama pull qwen2.5:1.5b`
- The model name must match exactly (e.g., `qwen2.5:1.5b`, not `qwen2.5`)
High false positive rate
- Use `mode="full"` instead of `mode="block"` to flag ambiguous inputs as advisories rather than hard-blocking
- Check `benchmarks/run_fp_test.py` against your traffic patterns
Slow response times
- The default qwen2.5:1.5b targets ~250ms. Set a lower `canary_timeout` to fail fast.
- Use `enable_structural_filter=True, enable_canary=False` for structural-only mode (~1ms, no LLM required); see the sketch below.
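A structural-only configuration sketch using the parameters named above (no Ollama required):

```python
from little_canary import SecurityPipeline

# Skip the canary probe entirely; keep only the ~1ms regex + decode/recheck layer
pipeline = SecurityPipeline(enable_structural_filter=True, enable_canary=False)
verdict = pipeline.check("Hello, can you help me reset my password?")
```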
Limitations
- TensorTrust validated (v0.2.0). 99.0% on 400 attacks with Claude Opus; 94.8% with a 3B model. Garak and HarmBench still pending.
- Multi-model tested (v0.2.0). Performance varies by model — see littlecanary.ai for comparison.
- Regex-based behavioral analysis. The experimental `LLMJudge` is included for higher accuracy.
- No production deployment data. All results are from controlled testing.
- Ollama-only. No abstraction layer for other backends yet.
Roadmap
- Benchmark against TensorTrust (99.0% detection, 400 attacks) — Garak and HarmBench still TODO
- LLM judge to replace regex analyzer (higher accuracy)
- Backend abstraction layer (vLLM, llama.cpp, OpenAI-compatible APIs)
- Fine-tuned canary model (increased susceptibility = stronger signal)
- Multi-canary ensemble for higher detection rates
- Agent integration SDK (MCP, LangChain, CrewAI)
Contributing
See CONTRIBUTING.md for development setup and contribution guidelines.
Citation
```bibtex
@software{little_canary,
  author  = {Bosch, Rolando},
  title   = {Little Canary: Sacrificial LLM Instances as Behavioral Probes for Prompt Injection Detection},
  year    = {2026},
  url     = {https://github.com/roli-lpci/little-canary},
  license = {Apache-2.0}
}
```
License
Apache 2.0 — see LICENSE for details.