Open-source prompt injection attack console - Test AI systems for prompt injection vulnerabilities
Project description
Judgement OSS
Prompt Injection Attack Console
Test your AI's defenses before someone else does.
Why Judgement?
Your AI chatbot, API, or agent is probably vulnerable to prompt injection. Most are. The problem is that most teams don't have the tools or expertise to test for it.
Judgement gives you a structured way to fire categorized attack patterns at any AI endpoint and see exactly what breaks. No security background required -- the built-in education tab teaches you as you go.
Built by Fallen Angel Systems, the team behind Guardian -- an AI-native prompt injection firewall protecting production LLM deployments.
What's New in v2.0.0
- Attack Presets -- Smoke Test, Full Sweep, Deep Dive, and Critical Only modes for structured testing
- Severity Filter & Search -- Filter patterns by severity level with a real-time search bar
- Per-Category Limits -- Control exactly how many patterns fire per category with pool count indicators
- Custom Patterns -- Build, edit, import, and export your own private pattern library (stored locally)
- Professional Reports -- Generate HTML, Markdown, JSON, and SARIF reports with CWE/OWASP references
- Scan Target -- Auto-detect API format, method, headers, and payload field with one click
- License System -- Activate Elite tier for 34,838+ patterns directly from the CLI
- Full Documentation -- Built-in Docs page with Volt's Red Team Playbook and complete feature reference
- Pattern Submissions -- Submit novel attack patterns to the community library
Quick Start
Install from PyPI (recommended)
pip install fas-judgement
judgement
That's it. Open http://localhost:8668 and start testing.
Or run from source
git clone https://github.com/fallen-angel-systems/fas-judgement-oss.git
cd fas-judgement-oss
pip install -r requirements.txt
python -m judgement.server
Options
judgement --port 9000 # Custom port
judgement --host 127.0.0.1 # Localhost only
judgement --host 0.0.0.0 # Expose to network
Elite License Activation
judgement activate FAS-XXXX-XXXX-XXXX-XXXX # Activate Elite license
judgement status # Check tier and pattern count
judgement deactivate # Revert to free tier
Features
Attack Console
Configure your target (URL, headers, body template), import directly from cURL commands, and fire pattern-based attacks with live streaming results. Use quick presets to structure your approach:
| Preset | What It Does |
|---|---|
| Smoke Test | ~15 patterns, critical+high severity, 1 per category |
| Full Sweep | ~50 patterns, proportional spread across all categories |
| Deep Dive | ~100 patterns, heavy coverage, min 2 per category |
| Critical Only | All critical+high severity patterns, no limits |
Severity Filter & Search
Filter the pattern library by severity (Critical, High, Medium, Low) or use the Critical+High combo for focused testing. Search patterns in real-time to find exactly what you need.
Custom Patterns
Build your own private attack library in the My Patterns tab:
- Add, edit, and delete patterns with category and notes
- Import/export as JSON for backup and sharing
- Include custom patterns in attacks alongside the curated library
- Stored locally in your browser -- never touches any server
- Up to 500 patterns, 10,000 characters each
Professional Reports (Elite)
Generate security assessment reports from any attack session:
| Format | Use Case |
|---|---|
| HTML | Print-ready professional report with executive summary, CWE/OWASP references, and remediation advice |
| Markdown | Bug bounty submissions for HackerOne, Bugcrowd, GitHub Issues, or Jira |
| JSON | Structured data for custom tooling, dashboards, or API consumers |
| SARIF | Upload to GitHub Code Scanning, Azure DevOps, or any SARIF-compatible security dashboard |
Reports include risk ratings, detailed findings with evidence, and prioritized remediation recommendations.
Pattern Browser
Browse, search, and explore attack patterns organized by category in a sortable table view. Each pattern shows ID, category, payload text, and severity level.
Education Tab
New to prompt injection? The built-in education tab covers:
- What prompt injection is and why it matters
- How to find testable AI endpoints
- How to interpret scan results
- Common vulnerability categories explained
No prior security experience needed. The onboarding walkthrough guides you from zero to your first scan.
Documentation
Built-in Docs page with expandable reference sections:
- Red Team Playbook by Volt -- structured methodology for professional AI red teaming
- Getting Started guides for API endpoints and web chatbots
- Attack Console reference with preset explanations
- Pattern categories and tier breakdown
- Verdict classification guide
- Credit protection and MCP integration docs
- Legal and ethics guidelines
- FAQ
Scan Target
Point Judgement at any URL and click Scan. It auto-detects:
- HTTP method (POST, GET, PUT, PATCH)
- Payload field name (message, prompt, input, query, etc.)
- Required headers and auth format
- Response format and streaming support
LLM Verdict (Optional)
Connect a local Ollama instance to get AI-powered classification of responses. More accurate than keyword matching for detecting subtle bypasses where the target complies but wraps it in disclaimers.
Pattern Submissions
Found a novel attack technique? Submit it directly from the Submit Pattern tab. Guardian AI auto-verifies your submission -- if it scores 70%+ confidence and isn't a duplicate, it gets added to the community library.
Session History
All scan sessions and results are stored locally in SQLite. Review past scans, compare results across targets, and track your testing progress.
Built-in Safety
- SSRF Protection -- Target URL validation prevents scanning internal/private networks
- Local-only by default -- Binds to localhost, no accidental exposure
- Zero telemetry -- Nothing phones home, ever
- Auth confirmation -- Warns before firing at authenticated endpoints
- Credit protection -- Configurable pattern limits and auto-stop on consecutive errors
How It Works
+--------------+ +---------------+ +--------------+
| You pick |---->| Judgement |---->| Your AI |
| patterns | | fires them | | endpoint |
+--------------+ +-------+-------+ +-------+------+
| |
+------v-------+ +-------v------+
| Results |<----| Response |
| + Verdict | | captured |
+--------------+ +--------------+
- Configure -- Point Judgement at your AI endpoint (URL + headers + body template)
- Select -- Choose attack presets or pick categories manually with severity filters
- Fire -- Watch results stream in real-time via SSE
- Analyze -- Review responses, optional LLM verdict classifies each result
- Report -- Export findings as HTML, Markdown, JSON, or SARIF
- Fix -- Use the findings to harden your AI's defenses
Configuration
| Variable | Default | Description |
|---|---|---|
--port |
8668 |
Server port |
--host |
127.0.0.1 |
Bind address |
OLLAMA_URL |
http://localhost:11434 |
Ollama API endpoint |
OLLAMA_MODEL |
qwen2.5:14b |
Model for LLM verdict |
Free vs Elite
| Feature | Free | Elite |
|---|---|---|
| Attack console with presets | Yes | Yes |
| Severity filter and search | Yes | Yes |
| Education tab | Yes | Yes |
| Pattern browser | Yes | Yes |
| LLM verdict (Ollama) | Yes | Yes |
| Scan Target auto-detect | Yes | Yes |
| MCP server integration | Yes | Yes |
| Built-in documentation | Yes | Yes |
| Pattern submissions | Yes | Yes |
| Starter patterns | 100 | 34,838+ |
| Custom patterns library | -- | Yes |
| Professional reports (HTML/MD/JSON/SARIF) | Basic MD | Full suite |
| Per-category attack limits | -- | Yes |
| Campaigns | -- | Coming Soon |
| Multi-turn attack chains | -- | Coming Soon |
| Credit protection controls | Yes | Yes |
Contributing
Contributions are welcome! Here's how to help:
- Bug reports -- Open an issue
- Feature requests -- Open an issue with the
enhancementlabel - Pull requests -- Fork, branch, PR. Keep changes focused and include a description.
- Pattern submissions -- Use the Submit Pattern tab in the app to contribute directly
Related Projects
- Guardian -- AI-native prompt injection firewall (defense)
- Judgement Pro -- Full-featured hosted version with all Elite features
License
MIT -- see LICENSE for details.
Built by Fallen Angel Systems
If Judgement found a vulnerability in your AI, imagine what an attacker would find.
DISCLAIMER: This tool is intended for authorized security testing and educational purposes only. Only test systems you own or have explicit written permission to test. Unauthorized access to computer systems is illegal under the Computer Fraud and Abuse Act (CFAA) and equivalent laws worldwide. The authors assume no liability for misuse of this tool.
Project details
Release history Release notifications | RSS feed
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Filter files by name, interpreter, ABI, and platform.
If you're not sure about the file name format, learn more about wheel file names.
Copy a direct link to the current filters
File details
Details for the file fas_judgement-2.1.0.tar.gz.
File metadata
- Download URL: fas_judgement-2.1.0.tar.gz
- Upload date:
- Size: 150.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a08727b3af6251e82b06fbe0aaa6856a110c24e9a4b2012d3c896f6fcd8fd0e4
|
|
| MD5 |
69a59c668e78bd91047bffd7c11379fd
|
|
| BLAKE2b-256 |
6628e708455e24b7804a52bed6445125e83e321d9b6a3d9c83c55a8a21e491f9
|
File details
Details for the file fas_judgement-2.1.0-py3-none-any.whl.
File metadata
- Download URL: fas_judgement-2.1.0-py3-none-any.whl
- Upload date:
- Size: 168.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.10.12
File hashes
| Algorithm | Hash digest | |
|---|---|---|
| SHA256 |
a7b1ff9b6f4de448aaad2d4e7ef4f10a713b06b31da3640163a7eabd64e62d2c
|
|
| MD5 |
d966feca06d39e7533e396de1586c3ac
|
|
| BLAKE2b-256 |
2559d2a0f73fa5271d3ba93c46c5a127f4843a9f2d31246b6bc25b025402f467
|