Skip to main content

Real-time AI threat monitoring. Protect your apps from prompt injection, leaks, and attacks in just a few lines of code.

Project description

SecureVector SecureVector

AI Firewall for Agents — Block prompt injection, tool abuse, and data leaks before and after the LLM.

Protect your AI agents, track costs, and set budget limits — no coding required. Download the app or install with pip.


License PyPI Python Downloads Discord

Website · Getting Started · Discord · Dashboard Screenshots


▶ Watch the Demo

SecureVector Demo — AI firewall in action

Threat detection, tool permissions, and cost tracking — running locally in real time.


New in v4.0.0:

  • SIEM Forwarder — ship every threat scan and tool-call audit to Splunk, Datadog, Sentinel, Chronicle, QRadar, OTLP, any HTTPS webhook, or a local NDJSON file. OCSF 1.3.0 with MITRE ATT&CK tags, actor + device attribution, and a tool-audit hash chain your SIEM can re-verify. Metadata-only by default; raw data is opt-in per destination. Starter dashboards included for Sentinel, Splunk, Datadog, and Grafana/Loki.

v3.6.0 carries forward:

  • Tool-call audit hash chain — every row in the audit log is linked by SHA-256 (seq, prev_hash, row_hash). Tampering breaks the chain; verify locally via GET /api/tool-permissions/call-audit/integrity. Verification is a local-only operation.
  • Per-device identifier — every scan and audit row is stamped with a stable device_id. Operators running SecureVector across multiple laptops/agents can now attribute every blocked tool call, threat, and audit row to a specific machine. Derived from the OS machine UUID, SHA-256 hashed — the raw OS identifier never leaves the box.

v3.4.0 carries forward:

  • OpenClaw Plugin (ZERO latency) — native integration that runs inside the agent: input scanning, tool audit with arguments, output guard, cost tracking. No proxy needed for monitoring.
  • Block Mode for OpenClaw — optional proxy that actively blocks attacks and stops unauthorized tool calls before they reach the LLM. Only needed when you want to enforce blocking, not just monitoring.
  • Skill Scanner — static analysis for AI agent skills with optional AI-powered review
  • Tool Permissions — allow/block agent tool calls with full audit trail
  • Cost Tracking & Budget Limits — per-agent spend tracking and global daily budget

How It Works

SecureVector Architecture

SecureVector protects your AI agents at three layers:

  • Pre-install — the Skill Scanner analyzes agent skill packages for shell access, network calls, and hidden risks before you install them
  • Runtime — scans every prompt, response, and tool call for injection attacks, data leaks, and unauthorized access
  • Observe — the SIEM Forwarder ships every threat + tool-call audit to your SOC in OCSF 1.3.0 format (Splunk HEC, Datadog, Microsoft Sentinel, Google Chronicle, IBM QRadar, OTLP, generic webhook, or a local NDJSON file) so AI events correlate with your existing security signals. Metadata-only by default; raw data is opt-in per destination.

For OpenClaw, the native plugin runs inside the agent with zero latency. For other frameworks, the multi-provider proxy intercepts traffic. 100% local — events only leave the machine when you configure a SIEM destination you control.


The Problem The Fix

AI agents are powerful — and completely unprotected.

Every prompt your AI agent sends, every secret it handles, every piece of user data — goes straight to the LLM provider with nothing in between. No spend limit. No injection protection. No audit trail. You're flying blind.

SecureVector runs on your machine. For OpenClaw/ClawdBot, the native plugin handles everything — zero latency, no proxy overhead. For LangChain, CrewAI, and other frameworks, the multi-provider proxy routes traffic across OpenAI, Anthropic, Ollama, and more. It blocks threats, enforces tool permissions, and hard-stops agents that blow their budget. 100% local. No accounts.

Quick Start

Step 1 — Install or download

pip install securevector-ai-monitor[app]
securevector-app --web

Or download the app: Windows · Linux · DEB · RPM · macOS (signed binary coming soon)

Step 2 — Open the app

Open http://localhost:8741 in your browser, or double-click the installed binary.

Step 3 — Connect your agent

OpenClaw / ClawdBot (plugin, zero latency) LangChain, CrewAI, Ollama, n8n (proxy)

Observability & Monitoring — Go to Integrations → OpenClaw, click Install Plugin, restart OpenClaw. Done. No proxy, no env vars.

Observability & Monitoring — Go to Integrations, pick your framework, click Start Proxy, and set the env var shown on the page.

Block Mode (only if you want to enforce blocking) — Toggle Block Mode on the dashboard. The proxy starts automatically and blocks threats before they reach the LLM. Adds ~10–50ms latency per request. Applies to both plugin and proxy integrations.

If the app fails to launch because ports 8741/8742 are already in use, use --port <port> of your choice — the proxy starts automatically on port+1. See Configuration for proxy or web/api port settings.

Open-source. 100% local by default. No API keys required.


Screenshots

All screenshots are from a local app instance.

Tool Call History
Tool Call History — 305 calls, 158 blocked: bash rm -rf, gmail_send to attacker, use_aws_cli stopped
Agent Tool Permissions
Tool Permissions — allow or block tools by name or category
Tool Call Detail
Tool Call Detail — decision, tool, args, and timestamp for every call
Dashboard
Dashboard — threat counts, cost metrics, and tool permission status
LLM Cost Tracker
LLM Cost Tracker — per-agent spend, budgets, and token breakdown
Custom Rules
Custom Rules — create and manage detection rules by category and severity
Skill Scanner
Skill Scanner — static security analysis for AI agent skills with scan history and risk levels
Skill Policy
Skill Policy — network permissions, trusted publishers, and policy thresholds

What You Get

Threat Protection Cost Control

Scans every prompt and response for prompt injection, jailbreaks, PII leaks, and tool abuse. 50+ detection rules covering the OWASP LLM Top 10. Monitors and alerts by default with zero latency (plugin mode) — enable block mode when you're ready to hard-stop threats via proxy.

Tracks every token and dollar per agent in real time. Set daily budget limits — requests auto-stop when the cap is hit. Never wake up to a surprise bill.

Skill Scanner Full Visibility

Scan agent skills and tool packages before installing. Static analysis across 10 categories detects shell access, network calls, env var reads, and more. Optional AI review filters false positives automatically.

Live dashboard showing every LLM request, tool call, token count, and threat event. See exactly what your agents are doing.

100% Local

Runs entirely on your machine. No accounts. No cloud. No data leaves your infrastructure. Open source under Apache 2.0.


Features

Section Feature Description
Monitor Threat Monitor Live feed of every detected threat — prompt injection, jailbreaks, data leaks, tool abuse
Tool Activity Full audit log of every tool call your agents make, with args, decision, and timestamp
Cost Tracking Per-agent, per-model token spend and USD cost in real time, with request history
Scan Skill Scanner Static analysis of AI agent skills — detects shell exec, network access, env var reads, code injection, and 6 more categories
AI Review Optional LLM-powered false-positive filtering — works with OpenAI, Anthropic, Ollama, Azure, Bedrock
Scan Policy Risk scoring with per-category allow/block rules, trusted publishers, and severity thresholds
Configure Tool Permissions Allow or block specific tools by name or category — per agent, per rule. How allow / block / log_only are decided: see Tool Permissions guide
Cost Settings Set daily budget limits and choose whether to warn or hard-block at the cap
Rules Custom detection rules — auto-block or alert on threats matching your criteria

Performance: Rule-based analysis (default) adds ~10–50ms per request. Enabling optional AI analysis adds 1–3s per request depending on the model and provider — this is shown on the dashboard so you can measure it against your actual traffic.


Why SecureVector?

❌ Without SecureVector ✅ With SecureVector
Prompt injections pass straight through Detected and alerted by default (zero latency); blocked when you enable block mode
API keys and PII leak in prompts Automatically redacted
No control over what tools agents can use Fine-grained allow/block rules per tool
No audit trail of tool calls Full tool call history with decisions and reasons
No idea what agents are spending Real-time cost tracking per agent
Runaway agents burn through your API budget overnight Hard budget limits with auto-stop
Zero visibility into agent traffic Live dashboard showing everything

Works With Everything

Your AI Stack

LangChain · LlamaIndex · CrewAI · AutoGen · LangGraph · n8n · Dify · OpenClaw/ClawdBot (LLM gateway agent framework) — or any framework that makes HTTP calls to an LLM provider.

LLM Providers

OpenAI · Anthropic · Ollama · Groq · and any OpenAI-compatible API.

Run Anywhere

Environment Details
Local macOS, Linux, Windows
Cloud AWS, GCP, Azure
Containers Docker & Kubernetes
Virtual Machines EC2, Droplets, VMs
Edge / Serverless Lambda, Workers, Vercel

Agent Integrations

Agent/Framework Integration
LangChain LLM Proxy or SDK Callback
LangGraph LLM Proxy or Security Node
CrewAI LLM Proxy or SDK Callback
Any OpenAI-compatible LLM Proxy — see Integrations in UI
OpenClaw / ClawdBot (LLM gateway agent) Native plugin (zero latency) — proxy only for block mode
n8n Community Node
Claude Desktop MCP Server Guide
Any OpenAI-compatible app LLM Proxy — set OPENAI_BASE_URL to proxy
Any HTTP Client POST http://localhost:8741/analyze with {"text": "..."}

OpenClaw / ClawdBot

Native plugin with ZERO latency — runs inside the agent, no proxy needed. Install from the Integrations tab or curl -X POST http://localhost:8741/api/hooks/install. Enable block mode from the dashboard when you want to actively stop threats via proxy.

Full setup guide


What It Detects

Input Threats (User to LLM) Output Threats (LLM to User)
Prompt injection Credential leakage (API keys, tokens)
Jailbreak attempts System prompt exposure
Data exfiltration requests PII disclosure (SSN, credit cards)
Social engineering Jailbreak success indicators
SQL injection patterns Encoded malicious content
Tool result injection (MCP)
Multi-agent authority spoofing
Permission scope escalation

Full coverage: OWASP LLM Top 10

AI Agent Attack Protection (28 new rules · 72 total)

Built from real attack chains observed against production agent frameworks:

  • Tool Result Injection — injected instructions hidden inside MCP tool responses
  • Multi-Agent Authority Spoofing — impersonating trusted agents in multi-agent pipelines
  • Permission Scope Escalation — agents requesting more permissions than granted
  • MCP Tool Call Injection — malicious payloads delivered through MCP tool calls
  • Evasion techniques (22 rules) — zero-width characters, encoding tricks, roleplay framing, leetspeak, semantic inversion, emotional manipulation, and more

Device Identity

Every scan and audit row is stamped with a stable device_id so a customer running SecureVector across several laptops or agents can answer "which agent blocked this, which laptop is tampered, which machine spent what?" — not just "one of my installs did this".

Why we need it. A solo developer runs one install. A SOC team runs five to fifty. When an audit chain breaks, or a spike of blocked gmail-send calls shows up, the useful first question is which machine. Without a per-device tag, the answer is "some install" — which is useless in a fleet. device_id pins every row to a specific machine so dashboards, alerts, and compliance reviews can slice by device.

How it's generated (src/securevector/app/utils/device_id.py):

  1. Read the OS's existing stable machine identifier:
    • macOS → IOPlatformUUID via ioreg
    • Linux → /etc/machine-id (fallback /var/lib/dbus/machine-id)
    • Windows → HKLM\SOFTWARE\Microsoft\Cryptography\MachineGuid
  2. SHA-256 hash it with a namespace prefix (securevector-device-v1:<raw>) and truncate to 24 hex chars → sv-a1b2c3d4e5f6...
  3. Cache the result in ~/Library/Application Support/ThreatMonitor/.device_id (0o600) so the OS fetch happens once per install.
  4. If the OS refuses (rare: locked-down container, unusual Linux image), fall back to a random UUID cached to the same file.

Stability across reinstalls. The OS identifier outlives the app install — so uninstalling and reinstalling SecureVector on the same machine gives you the same device_id. Wiping the app data dir AND having no readable OS ID is the only combination that generates a new one. A new physical machine always gets a new ID.

Security / privacy posture — what the customer should know:

Concern Reality
Is the raw OS machine UUID transmitted? No. It's read locally, SHA-256 hashed with a namespace, and only the hash is stored. The raw value never reaches a log file or outbound event.
Can device_id be reversed to the OS UUID? SHA-256 is one-way. An attacker who already has the raw OS UUID can compute the device_id — but they already have the machine at that point, so there's no incremental leak.
Does it track users? No. It tracks machines. Multiple users on one laptop share one device_id. It's not tied to email, username, or any identity field.
Is it sent to SecureVector Cloud? Only if Cloud Connect is on AND you trigger an action that reaches the cloud (rule sync, cloud-routed /analyze). device_id goes in metadata alongside scan results. You can opt out by keeping Cloud Connect off — local-only operation never transmits it.
Is it in SIEM forwards? Yes, when the v4.0.0 SIEM forwarder is enabled — travels inside each OCSF event's unmapped block so your Splunk/Datadog can group by device.
Can the customer reset it? Yes — delete .device_id in the app data dir. Next write will regenerate from the OS identifier (so same ID reappears) OR a fresh random UUID if the OS ID is unavailable.
Does it collide across containers cloned from the same image? Potentially yes (they share /etc/machine-id). Not relevant for desktop use; mention it if you're deploying in Kubernetes.

In one sentence: device_id is a machine-identifier-per-install, derived locally, hashed before storage, never transmitted except with explicit user opt-in (Cloud Connect or SIEM Forwarder).


SIEM Forwarder

Stream every threat detection and tool-call audit into your own SIEM — Splunk HEC, Datadog, Microsoft Sentinel, Google Chronicle, IBM QRadar, an OpenTelemetry collector, a local NDJSON file, or any HTTPS endpoint that accepts JSON. Your data, your pipes.

Why this is safe to ship with zero monetization:

Feature What leaves your machine
Scan verdict scan_id, verdict, threat_score, risk_level, detected_types[], counts, durations
Tool-call audit seq, action, risk, prev_hash, row_hash (the chain witness — lets your SIEM verify integrity)
Never transmitted Prompt text, LLM output, matched patterns, reviewer reasoning, model reasoning

The allow-list is enforced at enqueue time by _assert_metadata_only(). Even if the forwarder code were tampered with, it can't add the forbidden fields back.

Supported destinations (one code path, OCSF 1.3.0 payload):

Kind Target Auth header
splunk_hec https://<host>/services/collector/event Authorization: Splunk <HEC-token>
datadog https://http-intake.logs.<site>/api/v2/logs DD-API-KEY: <key>
otlp_http https://<collector>/v1/logs optional Authorization: Bearer <token>
webhook anything that accepts JSON POST optional Authorization: Bearer <token>

Configure in Connect → SIEM Forwarder. Add SIEM destination → pick type → paste URL + token → Test → Save. Tokens are stored 0o600 in the app data dir, never in SQLite.

📊 Starter dashboards included:

Platform Template
Microsoft Sentinel docs/siem/sentinel/securevector-workbook.json
Splunk docs/siem/splunk/securevector-dashboard.xml
Datadog docs/siem/datadog/securevector-dashboard.json
Grafana (Loki) docs/siem/grafana/securevector-dashboard.json

Each carries severity counters, events-over-time by severity, actor and MITRE-ish breakdowns, and a recent-high-severity log feed. MIT-licensed, AS-IS. Full install steps + field reference in docs/siem/README.md; trademark + upstream licenses in docs/siem/NOTICE.

Starter templates — import-test in your own stack and adjust queries / facets / sourcetypes before relying on them for production detections.

Reliability:

  • Per-destination outbox with at-least-once delivery.
  • A failing Datadog destination never blocks a healthy Splunk one.
  • Per-destination circuit breaker backs off broken endpoints (1 min → 1 hour cap).
  • Rows that fail 10 times are dropped (the health view shows the consecutive-failure count).

SIEM-side integrity verification. Every forwarded tool-call audit row carries its prev_hash and row_hash. Run a nightly search in your SIEM that rebuilds the chain — if a historic row has been tampered with on the local host, the forwarded evidence still tells the true story. That's the actual tamper evidence; the local chain alone is only the low bar.


Skill Scanner

Scan AI agent skills and tool packages before you install them. SecureVector performs static analysis across 10 detection categories, assigns a risk score, and optionally runs an AI review to filter false positives.

┌──────────────────────────────────────────────────────────────────────┐
│                        Skill Scanner Flow                           │
│                                                                     │
│   ┌─────────────┐     ┌──────────────────┐     ┌────────────────┐  │
│   │  Skill Dir   │────>│  Static Analysis  │────>│  Risk Scoring  │  │
│   │  or URL      │     │  (10 categories)  │     │  LOW/MED/HIGH  │  │
│   └─────────────┘     └──────────────────┘     └───────┬────────┘  │
│                                                         │           │
│                              ┌───────────────────────── │           │
│                              v                          v           │
│                     ┌─────────────────┐     ┌────────────────────┐  │
│                     │  AI Review      │     │  Policy Engine     │  │
│                     │  (optional LLM) │     │  allow/block rules │  │
│                     │  FP filtering   │     │  trusted publishers│  │
│                     └────────┬────────┘     └─────────┬──────────┘  │
│                              │                        │             │
│                              v                        v             │
│                     ┌──────────────────────────────────────┐        │
│                     │  Verdict: PASS / WARN / BLOCK        │        │
│                     │  + detailed findings per category     │        │
│                     └──────────────────────────────────────┘        │
└──────────────────────────────────────────────────────────────────────┘

Detection Categories

Category What It Finds
shell_exec Subprocess calls, system commands
network_domain HTTP requests, socket connections, DNS lookups
env_var_read Access to environment variables (API keys, secrets)
code_exec eval, dynamic code generation
dynamic_import Runtime module loading
file_write Writing to disk outside expected paths
base64_literal Obfuscated payloads in base64 strings
compiled_code .pyc, .so, .dll binaries embedded in the skill
symlink_escape Symlinks pointing outside the skill directory
missing_manifest No permissions.yml declaring required capabilities

AI-Powered Review

Enable AI analysis (OpenAI, Anthropic, Ollama, Azure, or Bedrock) to automatically review findings and filter false positives. The AI examines each finding in context and adjusts the risk level — reducing noise without hiding real threats.


Open Source

SecureVector is fully open source. No cloud required. No accounts. No tracking. Run it, fork it, contribute to it.

Built for solo developers and small teams who ship AI agents without a security team or a FinOps budget. If you are building with LangChain, CrewAI, OpenClaw, or any agent framework — and you do not have someone watching your agent traffic and API spend — SecureVector is for you.

Install

Option 1: pip

Requires: Python 3.9+ (MCP requires 3.10+)

pip install securevector-ai-monitor[app]
securevector-app --web

Option 2: Binary installers

No Python required. Download and run.

Platform Download
Windows SecureVector-v4.0.0-Windows-Setup.exe
macOS SecureVector-4.0.0-macOS.dmg (signed binary coming soon)
Linux (AppImage) SecureVector-4.0.0-x86_64.AppImage
Linux (DEB) securevector_4.0.0_amd64.deb
Linux (RPM) securevector-4.0.0-1.x86_64.rpm

All Releases · SHA256 Checksums

Security: Only download installers from this official GitHub repository. Always verify SHA256 checksums before installation. SecureVector is not responsible for binaries obtained from third-party sources.

macOS binary note: If you downloaded a previous .dmg release and macOS blocks it, we recommend installing via pip instead: pip install securevector-ai-monitor[app]. A signed macOS binary is coming soon. If you must use the .dmg, only download from this official GitHub repository, verify the SHA256 checksum, then run xattr -cr /Applications/SecureVector.app in Terminal.

Other install options

Install Use Case Size
pip install securevector-ai-monitor SDK only — lightweight, for programmatic integration ~18MB
pip install securevector-ai-monitor[app] Full app — web UI, LLM proxy, cost tracking, tool permissions 453 KB wheel · ~16 MB total on disk (incl. dependencies)
pip install securevector-ai-monitor[mcp] MCP server — Claude Desktop, Cursor ~38MB

Configuration

SecureVector writes svconfig.yml to your app data directory on first run with sensible defaults.

# SecureVector Configuration
# Changes take effect on next restart.
# The config path is printed to the console when you start the app.
#
# Linux:   ~/.local/share/securevector/threat-monitor/svconfig.yml
# macOS:   ~/Library/Application Support/SecureVector/ThreatMonitor/svconfig.yml
# Windows: %LOCALAPPDATA%/SecureVector/ThreatMonitor/svconfig.yml

server:
  # Web UI / API server listen host and port.
  # Change these if port 8741 is already in use on your machine.
  # If running on a remote server, set host to the server's hostname or IP address.
  host: 127.0.0.1
  port: 8741

security:
  # Block detected threats (true) or log/warn only (false)
  # Defaults to false — enable when you're confident in your rule tuning
  block_mode: false
  # Scan LLM responses for data leakage and PII
  output_scan: true

budget:
  # Daily spend limit in USD (set to null to disable)
  daily_limit: 5.00
  # Warn in logs/headers when spend approaches the limit
  warn: true
  # Block requests when the daily budget is exceeded
  block: true

tools:
  # Enforce tool permission rules (allow/block based on your rules)
  enforcement: true           # default: true

proxy:
  # OpenClaw/ClawdBot: proxy only starts when block_mode is enabled (above).
  #   Plugin-only mode handles monitoring with zero latency — no proxy needed.
  # LangChain/CrewAI/Ollama/other: proxy auto-starts as the only integration path.
  integration: openclaw       # or: langchain, langgraph, crewai, ollama
  mode: multi-provider        # or: single (add provider: below)
  provider: null              # required only when mode is "single"
  host: 127.0.0.1             # proxy listen host — set to the server's hostname or IP if running remotely
  port: 8742                  # proxy listen port (default: server.port + 1)

The UI keeps this file in sync — changes in the dashboard are written back to svconfig.yml automatically.

Pointing Your Agent at the Proxy

For LangChain, CrewAI, Ollama, and other non-OpenClaw frameworks, point your application to SecureVector's proxy instead of the provider's API. OpenClaw/ClawdBot users only need this when block mode is enabled.

🪟 Windows 🐧 Linux / macOS

Command Prompt (current session)

set OPENAI_BASE_URL=http://localhost:8742/openai/v1
set ANTHROPIC_BASE_URL=http://localhost:8742/anthropic

PowerShell (current session)

$env:OPENAI_BASE_URL="http://localhost:8742/openai/v1"
$env:ANTHROPIC_BASE_URL="http://localhost:8742/anthropic"

PowerShell (persistent, per user)

[Environment]::SetEnvironmentVariable(
  "OPENAI_BASE_URL",
  "http://localhost:8742/openai/v1",
  "User"
)

Terminal (current session)

export OPENAI_BASE_URL=http://localhost:8742/openai/v1
export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic

Persistent (add to ~/.bashrc or ~/.zshrc)

echo 'export OPENAI_BASE_URL=http://localhost:8742/openai/v1' >> ~/.bashrc
echo 'export ANTHROPIC_BASE_URL=http://localhost:8742/anthropic' >> ~/.bashrc
source ~/.bashrc

Every request is scanned for prompt injection. Every response is scanned for data leaks. Every dollar is tracked — whether via native plugin (OpenClaw) or proxy (all other frameworks).

Supported providers (13): openai anthropic gemini ollama groq deepseek mistral xai together cohere cerebras moonshot minimax


Update

Method Command
PyPI pip install --upgrade securevector-ai-monitor[app]
Source git pull && pip install -e ".[app]"
Windows Download latest .exe installer and run it (overwrites previous version)
macOS Download latest .dmg, drag to Applications (signed binary coming soon)
Linux AppImage Download latest .AppImage and replace the old file
Linux DEB sudo dpkg -i securevector_<version>_amd64.deb
Linux RPM sudo rpm -U securevector-<version>.x86_64.rpm

After updating, restart SecureVector.


Documentation


Contributing

git clone https://github.com/Secure-Vector/securevector-ai-threat-monitor.git
cd securevector-ai-threat-monitor
pip install -e ".[dev]"
pytest tests/ -v

Contributing Guidelines · Code of Conduct

License

Apache License 2.0 — see LICENSE.

The starter SIEM dashboard templates under docs/siem/ (Splunk XML, Sentinel workbook, Datadog + Grafana JSON) are MIT-licensed — see docs/siem/LICENSE and docs/siem/NOTICE for trademark disclaimers.

SecureVector is a trademark of SecureVector. See NOTICE.


Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

securevector_ai_monitor-4.0.0.tar.gz (781.4 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

securevector_ai_monitor-4.0.0-py3-none-any.whl (852.1 kB view details)

Uploaded Python 3

File details

Details for the file securevector_ai_monitor-4.0.0.tar.gz.

File metadata

  • Download URL: securevector_ai_monitor-4.0.0.tar.gz
  • Upload date:
  • Size: 781.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for securevector_ai_monitor-4.0.0.tar.gz
Algorithm Hash digest
SHA256 c5dd17191964491fdfb0f8efabb50116abed2f071f51bfebdcca8c50432b5650
MD5 8c1767dc11a3833727f70f4ce7216ece
BLAKE2b-256 96606583440abdf9f9590c52ab7b73415b9c32f3b090b1b1837d50e20cbfafb3

See more details on using hashes here.

File details

Details for the file securevector_ai_monitor-4.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for securevector_ai_monitor-4.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 4ffe87fa2aaf3f743cbe8bcba14749f85426b6fb86bf6948f293ec0efed183b2
MD5 508ef5b5b355527db17fc7f3e4d51535
BLAKE2b-256 b42c41b3d005e2a121590ba7db9766dec043ee6db947c382e40aeb35f92dc4ab

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page