Skip to main content

Real-time AI threat monitoring. Protect your apps from prompt injection, leaks, and attacks in just a few lines of code.

Project description

SecureVector SecureVector

AI Firewall for Agents — Block prompt injection, tool abuse, and data leaks before and after the LLM.

Protect your AI agents, track costs, and set budget limits — no coding required. Download the app or install with pip.


License PyPI Python Downloads Discord

Website · Getting Started · Discord · Dashboard Screenshots

🆕 New in v3.0.1:

  • Tool Permissions — allow/block agent tool calls
  • Cost Tracking & Budget Limits — per-agent spend tracking and global daily budget
  • 28 new threat detection rules

How It Works

SecureVector Architecture

SecureVector sits between your AI agent and the LLM provider, scanning every request and response for security threats, controlling tool permissions, and tracking spend in real time. Runs entirely on your machine — nothing leaves your infrastructure.


The Problem The Fix

AI agents are powerful — and completely unprotected.

Every prompt your AI agent sends, every secret it handles, every piece of user data — goes straight to the LLM provider with nothing in between. No spend limit. No injection protection. No audit trail. You're flying blind.

SecureVector runs on your machine, between your AI agents and LLM providers. It starts with a multi-provider proxy mode for routing across OpenAI, Anthropic, Ollama, and more — all through a single endpoint. It blocks threats, enforces tool permissions, and hard-stops agents that blow their budget. 100% local. No accounts.

Quick Start

Step 1 — Install or download

pip install securevector-ai-monitor[app]
securevector-app --web

Or download the app: Windows · macOS · Linux · DEB · RPM

Step 2 — Open the app

Open http://localhost:8741 in your browser, or double-click the installed binary.

Step 3 — Connect your agent

Go to the Integrations tab for step-by-step setup guides — OpenClaw, LangChain, CrewAI, LangGraph, n8n, Ollama, OpenAI, Anthropic, and more.

If the app fails to launch because ports 8741/8742 are already in use, use --port <port> of your choice — the proxy starts automatically on port+1. See Configuration for proxy or web/api port settings.

Open-source. 100% local by default. No API keys required.


Screenshots

All screenshots are from a local app instance.

Tool Call History
Tool Call History — 305 calls, 158 blocked: bash rm -rf, gmail_send to attacker, use_aws_cli stopped
Agent Tool Permissions
Tool Permissions — allow or block tools by name or category
Tool Call Detail
Tool Call Detail — decision, tool, args, and timestamp for every call
Dashboard
Dashboard — threat counts, cost metrics, and tool permission status
LLM Cost Tracker
LLM Cost Tracker — per-agent spend, budgets, and token breakdown
Custom Rules
Custom Rules — create and manage detection rules by category and severity

What You Get

Threat Protection Cost Control

Scans every prompt and response for prompt injection, jailbreaks, PII leaks, and tool abuse. 50+ detection rules covering the OWASP LLM Top 10. Detects and logs threats by default — enable block mode when you're ready to hard-stop them.

Tracks every token and dollar per agent in real time. Set daily budget limits — requests auto-stop when the cap is hit. Never wake up to a surprise bill.

Full Visibility 100% Local

Live dashboard showing every LLM request, tool call, token count, and threat event. See exactly what your agents are doing.

Runs entirely on your machine. No accounts. No cloud. No data leaves your infrastructure. Open source under Apache 2.0.


Features

Section Feature Description
Monitor Threat Monitor Live feed of every detected threat — prompt injection, jailbreaks, data leaks, tool abuse
Tool Activity Full audit log of every tool call your agents make, with args, decision, and timestamp
Cost Tracking Per-agent, per-model token spend and USD cost in real time, with request history
Configure Tool Permissions Allow or block specific tools by name or category — per agent, per rule
Cost Settings Set daily budget limits and choose whether to warn or hard-block at the cap
Rules Custom detection rules — auto-block or alert on threats matching your criteria

Performance: Rule-based analysis (default) adds ~10–50ms per request. Enabling optional AI analysis adds 1–3s per request depending on the model and provider — this is shown on the dashboard so you can measure it against your actual traffic.


Why SecureVector?

❌ Without SecureVector ✅ With SecureVector
Prompt injections pass straight through Detected and logged by default; blocked when you enable block mode
API keys and PII leak in prompts Automatically redacted
No control over what tools agents can use Fine-grained allow/block rules per tool
No audit trail of tool calls Full tool call history with decisions and reasons
No idea what agents are spending Real-time cost tracking per agent
Runaway agents burn through your API budget overnight Hard budget limits with auto-stop
Zero visibility into agent traffic Live dashboard showing everything

Works With Everything

Your AI Stack

LangChain · LlamaIndex · CrewAI · AutoGen · LangGraph · n8n · Dify · OpenClaw/ClawdBot (LLM gateway agent framework) — or any framework that makes HTTP calls to an LLM provider.

LLM Providers

OpenAI · Anthropic · Ollama · Groq · and any OpenAI-compatible API.

Run Anywhere

Environment Details
Local macOS, Linux, Windows
Cloud AWS, GCP, Azure
Containers Docker & Kubernetes
Virtual Machines EC2, Droplets, VMs
Edge / Serverless Lambda, Workers, Vercel

Agent Integrations

Agent/Framework Integration
LangChain LLM Proxy or SDK Callback
LangGraph LLM Proxy or Security Node
CrewAI LLM Proxy or SDK Callback
Any OpenAI-compatible LLM Proxy — see Integrations in UI
OpenClaw / ClawdBot (LLM gateway agent) LLM Proxy — see Integrations in UI
n8n Community Node
Claude Desktop MCP Server Guide
Any OpenAI-compatible app LLM Proxy — set OPENAI_BASE_URL to proxy
Any HTTP Client POST http://localhost:8741/analyze with {"text": "..."}

What It Detects

Input Threats (User to LLM) Output Threats (LLM to User)
Prompt injection Credential leakage (API keys, tokens)
Jailbreak attempts System prompt exposure
Data exfiltration requests PII disclosure (SSN, credit cards)
Social engineering Jailbreak success indicators
SQL injection patterns Encoded malicious content
Tool result injection (MCP)
Multi-agent authority spoofing
Permission scope escalation

Full coverage: OWASP LLM Top 10

AI Agent Attack Protection (28 new rules · 72 total)

Built from real attack chains observed against production agent frameworks:

  • Tool Result Injection — injected instructions hidden inside MCP tool responses
  • Multi-Agent Authority Spoofing — impersonating trusted agents in multi-agent pipelines
  • Permission Scope Escalation — agents requesting more permissions than granted
  • MCP Tool Call Injection — malicious payloads delivered through MCP tool calls
  • Evasion techniques (22 rules) — zero-width characters, encoding tricks, roleplay framing, leetspeak, semantic inversion, emotional manipulation, and more

Open Source

SecureVector is fully open source. No cloud required. No accounts. No tracking. Run it, fork it, contribute to it.

Built for solo developers and small teams who ship AI agents without a security team or a FinOps budget. If you are building with LangChain, CrewAI, OpenClaw, or any agent framework — and you do not have someone watching your agent traffic and API spend — SecureVector is for you.

Open Source vs Cloud

Open Source (100% Free) Cloud (Optional)
Apache 2.0 license Expert-curated rule library
Community detection rules Multi-stage ML threat analysis
Custom YAML rules Real-time cloud dashboard
100% local by default, no data sharing Team collaboration
Desktop app + local API Priority support

Cloud is optional. SecureVector runs entirely locally by default. Connect to app.securevector.io only if you want enterprise-grade threat intelligence with specialized algorithms designed to minimize false positives.

Try Free


Install

Option 1: pip

Requires: Python 3.9+ (MCP requires 3.10+)

pip install securevector-ai-monitor[app]
securevector-app --web

Option 2: Binary installers

No Python required. Download and run.

Platform Download
Windows SecureVector-v3.0.1-Windows-Setup.exe
macOS SecureVector-3.0.1-macOS.dmg
Linux (AppImage) SecureVector-3.0.1-x86_64.AppImage
Linux (DEB) securevector_3.0.1_amd64.deb
Linux (RPM) securevector-3.0.1-1.x86_64.rpm

All Releases · SHA256 Checksums

Security: Only download installers from this official GitHub repository. Always verify SHA256 checksums before installation. SecureVector is not responsible for binaries obtained from third-party sources.

Other install options

Install Use Case Size
pip install securevector-ai-monitor SDK only — lightweight, for programmatic integration ~18MB
pip install securevector-ai-monitor[app] Full app — web UI, LLM proxy, cost tracking, tool permissions 453 KB wheel · ~16 MB total on disk (incl. dependencies)
pip install securevector-ai-monitor[mcp] MCP server — Claude Desktop, Cursor ~38MB

Configuration

SecureVector writes svconfig.yml to your app data directory on first run with sensible defaults.

# SecureVector Configuration
# Changes take effect on next restart.
# The config path is printed to the console when you start the app.
#
# Linux:   ~/.local/share/securevector/threat-monitor/svconfig.yml
# macOS:   ~/Library/Application Support/SecureVector/ThreatMonitor/svconfig.yml
# Windows: %LOCALAPPDATA%/SecureVector/ThreatMonitor/svconfig.yml

server:
  # Web UI / API server listen host and port.
  # Change these if port 8741 is already in use on your machine.
  # If running on a remote server, set host to the server's hostname or IP address.
  host: 127.0.0.1
  port: 8741

security:
  # Block detected threats (true) or log/warn only (false)
  # Defaults to false — enable when you're confident in your rule tuning
  block_mode: false
  # Scan LLM responses for data leakage and PII
  output_scan: true

budget:
  # Daily spend limit in USD (set to null to disable)
  daily_limit: 5.00
  # Warn in logs/headers when spend approaches the limit
  warn: true
  # Block requests when the daily budget is exceeded
  block: true

tools:
  # Enforce tool permission rules (allow/block based on your rules)
  enforcement: true           # default: true

proxy:
  # Proxy auto-starts with securevector-app --web when mode is set below.
  integration: openclaw       # or: langchain, langgraph, crewai, ollama
  mode: multi-provider        # or: single (add provider: below)
  provider: null              # required only when mode is "single"
  host: 127.0.0.1             # proxy listen host — set to the server's hostname or IP if running remotely
  port: 8742                  # proxy listen port (default: server.port + 1)

The UI keeps this file in sync — changes in the dashboard are written back to svconfig.yml automatically.

Pointing Your Agent at the Proxy

Point any application to SecureVector's proxy instead of the provider's API.

🪟 Windows 🐧 Linux / macOS

Command Prompt (current session)

set OPENAI_BASE_URL=http://localhost:8742/openai/v1
set ANTHROPIC_BASE_URL=http://localhost:8742/anthropic

PowerShell (current session)

$env:OPENAI_BASE_URL="http://localhost:8742/openai/v1"
$env:ANTHROPIC_BASE_URL="http://localhost:8742/anthropic"

PowerShell (persistent, per user)

[Environment]::SetEnvironmentVariable(
  "OPENAI_BASE_URL",
  "http://localhost:8742/openai/v1",
  "User"
)

Terminal (current session)

export OPENAI_BASE_URL=http://localhost:8742/openai/v1
export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic

Persistent (add to ~/.bashrc or ~/.zshrc)

echo 'export OPENAI_BASE_URL=http://localhost:8742/openai/v1' >> ~/.bashrc
echo 'export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic' >> ~/.bashrc
source ~/.bashrc

Every request is scanned for prompt injection. Every response is scanned for data leaks. Every dollar is tracked.

Supported providers (13): openai anthropic gemini ollama groq deepseek mistral xai together cohere cerebras moonshot minimax


Update

Method Command
PyPI pip install --upgrade securevector-ai-monitor[app]
Source git pull && pip install -e ".[app]"
Windows Download latest .exe installer and run it (overwrites previous version)
macOS Download latest .dmg, drag to Applications (replace existing)
Linux AppImage Download latest .AppImage and replace the old file
Linux DEB sudo dpkg -i securevector_<version>_amd64.deb
Linux RPM sudo rpm -U securevector-<version>.x86_64.rpm

After updating, restart SecureVector.


Documentation


Contributing

git clone https://github.com/Secure-Vector/securevector-ai-threat-monitor.git
cd securevector-ai-threat-monitor
pip install -e ".[dev]"
pytest tests/ -v

Contributing Guidelines · Code of Conduct

License

Apache License 2.0 — see LICENSE.

SecureVector is a trademark of SecureVector. See NOTICE.


Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

securevector_ai_monitor-3.0.1.tar.gz (518.2 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

securevector_ai_monitor-3.0.1-py3-none-any.whl (586.3 kB view details)

Uploaded Python 3

File details

Details for the file securevector_ai_monitor-3.0.1.tar.gz.

File metadata

  • Download URL: securevector_ai_monitor-3.0.1.tar.gz
  • Upload date:
  • Size: 518.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for securevector_ai_monitor-3.0.1.tar.gz
Algorithm Hash digest
SHA256 2a831cc5f3da83df1ece6dd3cbfd1fd6004481b45a6a6c76c8674e3db270ba4a
MD5 e23d12f79ffca26c5126d257e022c289
BLAKE2b-256 a9147d3f9044544dd4898443cae40d6b4b8b90eb15205cea983d302dd53fffd0

See more details on using hashes here.

File details

Details for the file securevector_ai_monitor-3.0.1-py3-none-any.whl.

File metadata

File hashes

Hashes for securevector_ai_monitor-3.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 ef187a5312474b0596e960f288a5883f8bf6b75cdb394bfe300c0206effc4c14
MD5 51e1d9b39a32c2d2625a2e187ea2bf94
BLAKE2b-256 6e2c3e6cb475ec6bda71f5b9871c3eaf4601071c07b73cab9baca9537dc7fa16

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page