Real-time AI threat monitoring. Protect your apps from prompt injection, leaks, and attacks in just a few lines of code.
Project description
SecureVector
AI Firewall for Agents — Block prompt injection, tool abuse, and data leaks before and after the LLM.
Protect your AI agents, track costs, and set budget limits — no coding required. Download the app or install with pip.
▶ Watch the Demo
Threat detection, tool permissions, and cost tracking — running locally in real time.
🆕 New in v3.2.0:
- Skill Scanner — static analysis for AI agent skills with optional AI-powered review
- Skill Scan Policy Engine — risk scoring, trusted publishers, and per-category allow/block rules
- Tool Permissions — allow/block agent tool calls
- Cost Tracking & Budget Limits — per-agent spend tracking and global daily budget
- 28 new threat detection rules
How It Works
SecureVector sits between your AI agent and the LLM provider, scanning every request and response for security threats, controlling tool permissions, and tracking spend in real time. Runs entirely on your machine — nothing leaves your infrastructure.
| The Problem | The Fix |
|---|---|
| AI agents are powerful — and completely unprotected. Every prompt your AI agent sends, every secret it handles, and every piece of user data goes straight to the LLM provider with nothing in between. No spend limit. No injection protection. No audit trail. You're flying blind. | SecureVector runs on your machine, between your AI agents and LLM providers. Its multi-provider proxy routes across OpenAI, Anthropic, Ollama, and more — all through a single endpoint. It blocks threats, enforces tool permissions, and hard-stops agents that blow their budget. 100% local. No accounts. |
Quick Start
Step 1 — Install or download
```shell
pip install securevector-ai-monitor[app]
securevector-app --web
```
Or download the app: Windows · macOS · Linux · DEB · RPM
Step 2 — Open the app
Open http://localhost:8741 in your browser, or double-click the installed binary.
Step 3 — Connect your agent
Go to the Integrations tab for step-by-step setup guides — OpenClaw, LangChain, CrewAI, LangGraph, n8n, Ollama, OpenAI, Anthropic, and more.
If the app fails to launch because ports 8741/8742 are already in use, pass `--port <port>` with a free port of your choice — the proxy starts automatically on port+1.
See Configuration for proxy or web/api port settings.
Open-source. 100% local by default. No API keys required.
Screenshots
All screenshots are from a local app instance.
- Tool Call History — 305 calls, 158 blocked: bash rm -rf, gmail_send to attacker, use_aws_cli stopped
- Tool Permissions — allow or block tools by name or category
- Tool Call Detail — decision, tool, args, and timestamp for every call
- Dashboard — threat counts, cost metrics, and tool permission status
- LLM Cost Tracker — per-agent spend, budgets, and token breakdown
- Custom Rules — create and manage detection rules by category and severity
- Skill Scanner — static security analysis for AI agent skills with scan history and risk levels
- Skill Policy — network permissions, trusted publishers, and policy thresholds
What You Get
| Threat Protection | Cost Control |
|---|---|
| Scans every prompt and response for prompt injection, jailbreaks, PII leaks, and tool abuse. 50+ detection rules covering the OWASP LLM Top 10. Detects and logs threats by default — enable block mode when you're ready to hard-stop them. | Tracks every token and dollar per agent in real time. Set daily budget limits — requests auto-stop when the cap is hit. Never wake up to a surprise bill. |

| Skill Scanner | Full Visibility |
|---|---|
| Scan agent skills and tool packages before installing. Static analysis across 10 categories detects shell access, network calls, env var reads, and more. Optional AI review filters false positives automatically. | Live dashboard showing every LLM request, tool call, token count, and threat event. See exactly what your agents are doing. |

| 100% Local | |
|---|---|
| Runs entirely on your machine. No accounts. No cloud. No data leaves your infrastructure. Open source under Apache 2.0. | |
Features
| Section | Feature | Description |
|---|---|---|
| Monitor | Threat Monitor | Live feed of every detected threat — prompt injection, jailbreaks, data leaks, tool abuse |
| | Tool Activity | Full audit log of every tool call your agents make, with args, decision, and timestamp |
| | Cost Tracking | Per-agent, per-model token spend and USD cost in real time, with request history |
| Scan | Skill Scanner | Static analysis of AI agent skills — detects shell exec, network access, env var reads, code injection, and 6 more categories |
| | AI Review | Optional LLM-powered false-positive filtering — works with OpenAI, Anthropic, Ollama, Azure, Bedrock |
| | Scan Policy | Risk scoring with per-category allow/block rules, trusted publishers, and severity thresholds |
| Configure | Tool Permissions | Allow or block specific tools by name or category — per agent, per rule |
| | Cost Settings | Set daily budget limits and choose whether to warn or hard-block at the cap |
| | Rules | Custom detection rules — auto-block or alert on threats matching your criteria |
Performance: Rule-based analysis (default) adds ~10–50ms per request. Enabling optional AI analysis adds 1–3s per request depending on the model and provider — this is shown on the dashboard so you can measure it against your actual traffic.
Why SecureVector?
| ❌ Without SecureVector | ✅ With SecureVector |
|---|---|
| Prompt injections pass straight through | Detected and logged by default; blocked when you enable block mode |
| API keys and PII leak in prompts | Automatically redacted |
| No control over what tools agents can use | Fine-grained allow/block rules per tool |
| No audit trail of tool calls | Full tool call history with decisions and reasons |
| No idea what agents are spending | Real-time cost tracking per agent |
| Runaway agents burn through your API budget overnight | Hard budget limits with auto-stop |
| Zero visibility into agent traffic | Live dashboard showing everything |
Works With Everything
Your AI Stack
LangChain · LlamaIndex · CrewAI · AutoGen · LangGraph · n8n · Dify · OpenClaw/ClawdBot (LLM gateway agent framework) — or any framework that makes HTTP calls to an LLM provider.
LLM Providers
OpenAI · Anthropic · Ollama · Groq · and any OpenAI-compatible API.
Run Anywhere
| Environment | Details |
|---|---|
| Local | macOS, Linux, Windows |
| Cloud | AWS, GCP, Azure |
| Containers | Docker & Kubernetes |
| Virtual Machines | EC2, Droplets, VMs |
| Edge / Serverless | Lambda, Workers, Vercel |
Agent Integrations
| Agent/Framework | Integration |
|---|---|
| LangChain | LLM Proxy or SDK Callback |
| LangGraph | LLM Proxy or Security Node |
| CrewAI | LLM Proxy or SDK Callback |
| Any OpenAI-compatible | LLM Proxy — see Integrations in UI |
| OpenClaw / ClawdBot (LLM gateway agent) | LLM Proxy — see Integrations in UI |
| n8n | Community Node |
| Claude Desktop | MCP Server Guide |
| Any OpenAI-compatible app | LLM Proxy — set OPENAI_BASE_URL to proxy |
| Any HTTP Client | POST http://localhost:8741/analyze with {"text": "..."} |
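For clients outside these frameworks, the `/analyze` endpoint in the last row can be called directly. A minimal sketch using only the Python standard library — the request shape (`{"text": "..."}`) comes from the table above, but the structure of the JSON response is not documented here, so treat the return value as opaque until you inspect it against your own instance:

```python
import json
import urllib.request

# Local SecureVector API endpoint, as documented in the table above.
ANALYZE_URL = "http://localhost:8741/analyze"

def build_request(text: str) -> urllib.request.Request:
    """Build the documented POST request: {"text": "..."} as JSON."""
    return urllib.request.Request(
        ANALYZE_URL,
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def analyze(text: str) -> dict:
    """Send a prompt to the local analyzer and return the parsed JSON verdict."""
    with urllib.request.urlopen(build_request(text)) as resp:
        return json.load(resp)
```

Call `analyze("...")` with the app running locally and inspect the returned JSON to see the actual verdict fields before wiring this into a pipeline.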
What It Detects
| Input Threats (User to LLM) | Output Threats (LLM to User) |
|---|---|
| Prompt injection | Credential leakage (API keys, tokens) |
| Jailbreak attempts | System prompt exposure |
| Data exfiltration requests | PII disclosure (SSN, credit cards) |
| Social engineering | Jailbreak success indicators |
| SQL injection patterns | Encoded malicious content |
| Tool result injection (MCP) | — |
| Multi-agent authority spoofing | — |
| Permission scope escalation | — |
Full coverage: OWASP LLM Top 10
AI Agent Attack Protection (28 new rules · 72 total)
Built from real attack chains observed against production agent frameworks:
- Tool Result Injection — injected instructions hidden inside MCP tool responses
- Multi-Agent Authority Spoofing — impersonating trusted agents in multi-agent pipelines
- Permission Scope Escalation — agents requesting more permissions than granted
- MCP Tool Call Injection — malicious payloads delivered through MCP tool calls
- Evasion techniques (22 rules) — zero-width characters, encoding tricks, roleplay framing, leetspeak, semantic inversion, emotional manipulation, and more
Skill Scanner
Scan AI agent skills and tool packages before you install them. SecureVector performs static analysis across 10 detection categories, assigns a risk score, and optionally runs an AI review to filter false positives.
```
┌──────────────────────────────────────────────────────────────────────┐
│                          Skill Scanner Flow                          │
│                                                                      │
│  ┌─────────────┐     ┌──────────────────┐     ┌────────────────┐     │
│  │  Skill Dir  │────>│  Static Analysis │────>│  Risk Scoring  │     │
│  │   or URL    │     │  (10 categories) │     │  LOW/MED/HIGH  │     │
│  └─────────────┘     └──────────────────┘     └───────┬────────┘     │
│                      ┌────────────────────────────────┤              │
│                      v                                v              │
│             ┌─────────────────┐            ┌────────────────────┐    │
│             │    AI Review    │            │   Policy Engine    │    │
│             │  (optional LLM) │            │  allow/block rules │    │
│             │   FP filtering  │            │ trusted publishers │    │
│             └────────┬────────┘            └──────────┬─────────┘    │
│                      │                                │              │
│                      v                                v              │
│                ┌──────────────────────────────────────┐              │
│                │     Verdict: PASS / WARN / BLOCK     │              │
│                │   + detailed findings per category   │              │
│                └──────────────────────────────────────┘              │
└──────────────────────────────────────────────────────────────────────┘
```
Detection Categories
| Category | What It Finds |
|---|---|
| `shell_exec` | Subprocess calls, system commands |
| `network_domain` | HTTP requests, socket connections, DNS lookups |
| `env_var_read` | Access to environment variables (API keys, secrets) |
| `code_exec` | eval, dynamic code generation |
| `dynamic_import` | Runtime module loading |
| `file_write` | Writing to disk outside expected paths |
| `base64_literal` | Obfuscated payloads in base64 strings |
| `compiled_code` | .pyc, .so, .dll binaries embedded in the skill |
| `symlink_escape` | Symlinks pointing outside the skill directory |
| `missing_manifest` | No permissions.yml declaring required capabilities |
AI-Powered Review
Enable AI analysis (OpenAI, Anthropic, Ollama, Azure, or Bedrock) to automatically review findings and filter false positives. The AI examines each finding in context and adjusts the risk level — reducing noise without hiding real threats.
Open Source
SecureVector is fully open source. No cloud required. No accounts. No tracking. Run it, fork it, contribute to it.
Built for solo developers and small teams who ship AI agents without a security team or a FinOps budget. If you are building with LangChain, CrewAI, OpenClaw, or any agent framework — and you do not have someone watching your agent traffic and API spend — SecureVector is for you.
Open Source vs Cloud
| Open Source (100% Free) | Cloud (Optional) |
|---|---|
| Apache 2.0 license | Expert-curated rule library |
| Community detection rules | Multi-stage ML threat analysis |
| Custom YAML rules | Real-time cloud dashboard |
| 100% local by default, no data sharing | Team collaboration |
| Desktop app + local API | Priority support |
Cloud is optional. SecureVector runs entirely locally by default. Connect to app.securevector.io only if you want enterprise-grade threat intelligence with specialized algorithms designed to minimize false positives.
Install
Option 1: pip
Requires: Python 3.9+ (MCP requires 3.10+)
```shell
pip install securevector-ai-monitor[app]
securevector-app --web
```
Option 2: Binary installers
No Python required. Download and run.
| Platform | Download |
|---|---|
| Windows | SecureVector-v3.2.0-Windows-Setup.exe |
| macOS | SecureVector-3.2.0-macOS.dmg |
| Linux (AppImage) | SecureVector-3.2.0-x86_64.AppImage |
| Linux (DEB) | securevector_3.2.0_amd64.deb |
| Linux (RPM) | securevector-3.2.0-1.x86_64.rpm |
All Releases · SHA256 Checksums
Security: Only download installers from this official GitHub repository. Always verify SHA256 checksums before installation. SecureVector is not responsible for binaries obtained from third-party sources.
Other install options
| Install | Use Case | Size |
|---|---|---|
| `pip install securevector-ai-monitor` | SDK only — lightweight, for programmatic integration | ~18 MB |
| `pip install securevector-ai-monitor[app]` | Full app — web UI, LLM proxy, cost tracking, tool permissions | 453 KB wheel · ~16 MB total on disk (incl. dependencies) |
| `pip install securevector-ai-monitor[mcp]` | MCP server — Claude Desktop, Cursor | ~38 MB |
Configuration
SecureVector writes svconfig.yml to your app data directory on first run with sensible defaults.
```yaml
# SecureVector Configuration
# Changes take effect on next restart.
# The config path is printed to the console when you start the app.
#
# Linux:   ~/.local/share/securevector/threat-monitor/svconfig.yml
# macOS:   ~/Library/Application Support/SecureVector/ThreatMonitor/svconfig.yml
# Windows: %LOCALAPPDATA%/SecureVector/ThreatMonitor/svconfig.yml

server:
  # Web UI / API server listen host and port.
  # Change these if port 8741 is already in use on your machine.
  # If running on a remote server, set host to the server's hostname or IP address.
  host: 127.0.0.1
  port: 8741

security:
  # Block detected threats (true) or log/warn only (false).
  # Defaults to false — enable when you're confident in your rule tuning.
  block_mode: false
  # Scan LLM responses for data leakage and PII
  output_scan: true

budget:
  # Daily spend limit in USD (set to null to disable)
  daily_limit: 5.00
  # Warn in logs/headers when spend approaches the limit
  warn: true
  # Block requests when the daily budget is exceeded
  block: true

tools:
  # Enforce tool permission rules (allow/block based on your rules)
  enforcement: true  # default: true

proxy:
  # Proxy auto-starts with securevector-app --web when mode is set below.
  integration: openclaw   # or: langchain, langgraph, crewai, ollama
  mode: multi-provider    # or: single (add provider: below)
  provider: null          # required only when mode is "single"
  host: 127.0.0.1         # proxy listen host — set to the server's hostname or IP if running remotely
  port: 8742              # proxy listen port (default: server.port + 1)
```
The UI keeps this file in sync — changes in the dashboard are written back to svconfig.yml automatically.
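The per-platform config locations listed in the file header above can also be resolved programmatically. A small sketch — the paths are copied from those comments, and `default_config_path` is a helper name of our own, not part of the SecureVector SDK:

```python
import os
import platform
from pathlib import Path

def default_config_path() -> Path:
    """Return the default svconfig.yml location for the current OS,
    mirroring the paths documented in the config file header."""
    system = platform.system()
    if system == "Linux":
        return Path.home() / ".local/share/securevector/threat-monitor/svconfig.yml"
    if system == "Darwin":  # macOS
        return Path.home() / "Library/Application Support/SecureVector/ThreatMonitor/svconfig.yml"
    # Windows: %LOCALAPPDATA%\SecureVector\ThreatMonitor\svconfig.yml
    return Path(os.environ.get("LOCALAPPDATA", "")) / "SecureVector/ThreatMonitor/svconfig.yml"
```

Since the console prints the actual path on startup, prefer that output if it disagrees with this sketch.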
Pointing Your Agent at the Proxy
Point any application to SecureVector's proxy instead of the provider's API.
**🪟 Windows**

Command Prompt (current session):

```bat
set OPENAI_BASE_URL=http://localhost:8742/openai/v1
set ANTHROPIC_BASE_URL=http://localhost:8742/anthropic
```

PowerShell (current session):

```powershell
$env:OPENAI_BASE_URL="http://localhost:8742/openai/v1"
$env:ANTHROPIC_BASE_URL="http://localhost:8742/anthropic"
```

PowerShell (persistent, per user):

```powershell
[Environment]::SetEnvironmentVariable("OPENAI_BASE_URL", "http://localhost:8742/openai/v1", "User")
```

**🐧 Linux / macOS**

Terminal (current session):

```shell
export OPENAI_BASE_URL=http://localhost:8742/openai/v1
export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic
```

Persistent (add to ~/.bashrc):

```shell
echo 'export OPENAI_BASE_URL=http://localhost:8742/openai/v1' >> ~/.bashrc
echo 'export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic' >> ~/.bashrc
source ~/.bashrc
```
Every request is scanned for prompt injection. Every response is scanned for data leaks. Every dollar is tracked.
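Python applications can set the same variables in-process before the LLM SDK is imported or instantiated; most OpenAI- and Anthropic-compatible clients read these environment variables at construction time. A minimal sketch using the variable names documented above:

```python
import os

# Route SDK traffic through the local SecureVector proxy (default port 8742).
# Set these before the client library is instantiated, or it will read the
# provider's default base URL instead.
os.environ["OPENAI_BASE_URL"] = "http://localhost:8742/openai/v1"
os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:8742/anthropic"
```

Whether a given SDK honors these variables depends on its version — if yours does not, pass the proxy URL explicitly via the client's base-URL parameter instead.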
Supported providers (13): openai · anthropic · gemini · ollama · groq · deepseek · mistral · xai · together · cohere · cerebras · moonshot · minimax
Update
| Method | Command |
|---|---|
| PyPI | pip install --upgrade securevector-ai-monitor[app] |
| Source | git pull && pip install -e ".[app]" |
| Windows | Download latest .exe installer and run it (overwrites previous version) |
| macOS | Download latest .dmg, drag to Applications (replace existing) |
| Linux AppImage | Download latest .AppImage and replace the old file |
| Linux DEB | sudo dpkg -i securevector_<version>_amd64.deb |
| Linux RPM | sudo rpm -U securevector-<version>.x86_64.rpm |
After updating, restart SecureVector.
Documentation
- Installation Guide — Binary installers, pip, service setup
- Use Cases & Examples — LangChain, LangGraph, CrewAI, n8n, FastAPI
- MCP Server Guide — Claude Desktop, Cursor integration
- API Reference — REST API endpoints
- Security Policy — Vulnerability disclosure
Contributing
```shell
git clone https://github.com/Secure-Vector/securevector-ai-threat-monitor.git
cd securevector-ai-threat-monitor
pip install -e ".[dev]"
pytest tests/ -v
```
Contributing Guidelines · Code of Conduct
License
Apache License 2.0 — see LICENSE.
SecureVector is a trademark of SecureVector. See NOTICE.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file securevector_ai_monitor-3.2.0.tar.gz.
File metadata
- Download URL: securevector_ai_monitor-3.2.0.tar.gz
- Upload date:
- Size: 597.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `edd712c26177e1eaf14b0e7562006a54bd095f1cc839f587b39051c39908d2e3` |
| MD5 | `cf78c18dbe38d65929c41aa7f38f827b` |
| BLAKE2b-256 | `ab61f4c07d5c2619472826ae96a8c90ad74cf132574bf0fcc830e6e7d56d064b` |
File details
Details for the file securevector_ai_monitor-3.2.0-py3-none-any.whl.
File metadata
- Download URL: securevector_ai_monitor-3.2.0-py3-none-any.whl
- Upload date:
- Size: 670.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.1.0 CPython/3.13.7
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `23673448d92828704c747fefba0a9fef827d9929389d829fd1b53630c0b9d903` |
| MD5 | `ee869f0d4aa14bd21ae89ef1e0877699` |
| BLAKE2b-256 | `d94ac85d8b60210b26db54bbd4153a9411abe86eb28ab222c29a399b69a62bac` |