
Stop your AI agents from breaking things. Vectimus intercepts every action and blocks the dangerous ones before they execute.


Vectimus

Cedar policies for every AI agent action. Coding tools and agentic frameworks. Every evaluation under 5ms. Zero config.


Screenshot: a Claude Code session with Vectimus blocking rm -rf, terraform destroy and force push while allowing safe commands.

Install

pipx install vectimus
vectimus init

That's it. Cedar policies evaluate every tool call — whether from a coding agent in your terminal or a framework agent in production. Dangerous commands, secret access, infrastructure changes and supply chain attacks are blocked before execution.

Why this exists

AI coding agents and agentic frameworks run shell commands, write files, install packages and call APIs. Without a policy layer, nothing stands between a prompt injection and rm -rf /.

These are not hypothetical risks:

  • Clinejection (Feb 2026) — A prompt injection in a GitHub issue title caused an AI agent to publish backdoored npm packages. 4,000 developer machines compromised in 8 hours.
  • Terraform destroy incident (Feb 2026) — An AI agent unpacked old Terraform configs and ran terraform destroy, wiping a production VPC, RDS database and ECS cluster.
  • IDEsaster (Dec 2025) — Researchers found 30+ vulnerabilities across Cursor, Windsurf and GitHub Copilot. 24 CVEs assigned.

Vectimus is a defense-in-depth layer. Whatever permission setup your team uses, Vectimus adds deterministic policy evaluation underneath. Same input, same decision, every time.

What it catches

Every policy references the real-world incident that motivated it. No "best practice" filler.

| Pack | What it blocks | Example |
| --- | --- | --- |
| Destructive Ops | rm -rf, terraform destroy, docker system prune | Production wipe prevention |
| Secrets | Credential file access, env variable exposure | .env, AWS keys, SSH keys |
| Supply Chain | npm publish, pip install from URLs, registry tampering | Clinejection-class attacks |
| Infrastructure | terraform apply, kubectl delete, cloud CLI mutations | Unreviewed infra changes |
| Code Execution | eval(), exec(), unsafe interpreter invocations | Code injection via agents |
| Data Exfiltration | curl to external hosts, file upload, data piping | Credential theft, data leakage |
| File Integrity | Writes to .vectimus/, sensitive config paths | Governance tampering |
| Database | Direct database CLI access, credential harvesting | Unauthorized data access |
| Git Safety | git push --force, history rewriting, credential commits | Repository damage |
| MCP Safety | Unapproved MCP servers, dangerous tool parameters | MCP server supply chain |
| Agent Governance | Unchecked agent spawning, goal hijacking, rogue agents | Multi-agent control |

11 packs. Browse all policies →

Maps to OWASP Agentic Top 10 (all 10 categories), SOC 2, NIST AI RMF, NIST CSF 2.0, ISO 27001 and EU AI Act. Full compliance mappings →

Example policy

@id("vectimus-supchain-001")
@description("Block npm publish to prevent supply-chain attacks")
@incident("Clinejection: malicious npm packages published by compromised AI agent, Feb 2026")
@controls("SLSA-L2, SOC2-CC6.8, NIST-AI-MG-3.2, EU-AI-15")
forbid (
    principal,
    action == Vectimus::Action::"package_operation",
    resource
) when {
    context.command like "*npm publish*"
};

Every rule has an @incident annotation linking it to the attack it prevents and @controls mapping it to compliance frameworks. Governance rules backed by real attacks are compelling. Rules that exist "because best practice" are not.
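Cedar's like operator performs glob-style wildcard matching, where * matches any sequence of characters. A rough Python analogue using the standard library's fnmatchcase — purely to illustrate what the pattern above would and wouldn't match, not Vectimus's actual engine:

```python
from fnmatch import fnmatchcase

# Approximates the Cedar condition `context.command like "*npm publish*"`.
# Cedar's `like` is case-sensitive and only treats `*` as a wildcard,
# so fnmatchcase is a close (though not exact) stand-in.
PATTERN = "*npm publish*"

def matches(command: str) -> bool:
    return fnmatchcase(command, PATTERN)

print(matches("cd pkg && npm publish --access public"))  # True
print(matches("npm install left-pad"))                   # False
```

Note that the substring appears anywhere in the command, so chained invocations like `cd pkg && npm publish` are still caught.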

Policies that stay current

Vectimus checks for policy updates in the background every 24 hours. New rules ship when new threats appear.

vectimus policy update    # Pull latest now
vectimus policy status    # Check version and sync info

Behind the scenes, Sentinel runs a three-agent pipeline daily:

  • Threat Hunter scans the agentic AI security landscape for new incidents -- MCP vulnerabilities, tool poisoning, agent exploitation -- and classifies them against OWASP, NIST and CIS frameworks
  • Security Engineer drafts Cedar policies and replays the incident in a sandbox to prove the policy catches the attack before opening a PR
  • Threat Analyst writes the advisory and incident analysis for the public threat feed

A human reviews every PR. The policy ships. Your install picks it up automatically.

The entire pipeline is governed by Vectimus itself. The agents that write governance rules operate under the same governance system.

Live threat dashboard → | Incident blog posts →

Works with

Coding tools

Claude Code Cursor GitHub Copilot Gemini CLI

Agent frameworks

LangGraph Google ADK Claude Agent SDK

Same Cedar policies govern both. One install.

LangGraph / LangChain integration

Agent middleware

from vectimus.integrations.langgraph import VectimusMiddleware

middleware = VectimusMiddleware(
    policy_dir="./policies",   # Optional, defaults to bundled policies
    observe_mode=False,        # Optional, defaults to False
)

agent = create_agent(
    model="openai:gpt-4.1",
    tools=my_tools,
    middleware=[middleware],
)

MCP interceptor

from vectimus.integrations.langgraph import create_interceptor

interceptor = create_interceptor(
    policy_dir="./policies",
    observe_mode=False,
)

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[interceptor],
)

Both support observe mode for trialling without enforcement.

Google ADK integration

Runner plugin (recommended)

from vectimus.integrations.adk import VectimusADKPlugin

plugin = VectimusADKPlugin(
    policy_dir="./policies",   # Optional, defaults to bundled policies
    observe_mode=False,        # Optional, defaults to False
)

runner = Runner(
    agent=my_agent,
    app_name="my-app",
    session_service=session_service,
    plugins=[plugin],
)

Per-agent callback

from vectimus.integrations.adk import create_before_tool_callback

callback = create_before_tool_callback(
    policy_dir="./policies",
    observe_mode=False,
)

agent = LlmAgent(
    name="MyAgent",
    model="gemini-2.0-flash",
    before_tool_callback=callback,
)

How it works

┌─────────────┐     ┌───────────────┐     ┌──────────────┐     ┌──────────┐
│  AI Agent   │────▶│   Vectimus    │────▶│ Cedar Policy │────▶│ allow /  │
│ (tool call) │     │  Normaliser   │     │   Engine     │     │ deny /   │
│             │◀────│               │◀────│              │◀────│ escalate │
└─────────────┘     └───────────────┘     └──────────────┘     └──────────┘
                           │
                           ▼
                    ┌──────────────┐
                    │  Audit Log   │
                    │  (JSONL)     │
                    └──────────────┘
  • Normaliser translates tool-specific payloads (Claude Code, Cursor, Copilot, Gemini CLI) into a unified Cedar request format
  • Cedar Engine evaluates all loaded policies deterministically. No LLM in the loop. Same input, same decision.
  • Audit Log records every decision with full context for compliance evidence and incident investigation
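To make the normalisation step concrete, here is a minimal sketch of translating a tool-specific hook payload into a unified Cedar-style request. The field names and payload shape are illustrative assumptions, not Vectimus's documented schema:

```python
# Hypothetical sketch of the Normaliser stage. Every field name here
# ("principal", "action", "resource", "context", "cwd") is an assumption
# for illustration only.
def normalise(tool: str, payload: dict) -> dict:
    """Map a tool-specific hook payload onto a unified Cedar-style request."""
    if tool == "claude-code":
        return {
            "principal": 'Agent::"claude-code"',
            "action": 'Vectimus::Action::"shell_command"',
            "resource": payload.get("cwd", "."),
            "context": {"command": payload["command"]},
        }
    raise ValueError(f"unknown tool: {tool}")

req = normalise("claude-code", {"command": "rm -rf /tmp/x", "cwd": "/repo"})
```

Because the policy engine only ever sees this unified shape, the same Cedar rules apply regardless of which agent produced the call.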

Evaluation is entirely local. Zero telemetry. The only network call is a background policy update check every 24 hours (disable with vectimus policy auto-update off). Cedar is the same policy language used by AWS AgentCore Policy and Amazon Verified Permissions.

MCP server governance

Vectimus blocks all MCP tool calls by default. During vectimus init it reads your existing tool configs and offers to approve the MCP servers you already use:

MCP servers detected:
  Claude Code:  posthog, slack
  Cursor:       github

Allow all 3 servers? [y/N]:

Manage the allowlist at any time:

vectimus mcp allow github
vectimus mcp allow slack
vectimus mcp list

Approved servers still go through input inspection rules that check for credential paths, CI/CD tampering and dangerous commands in tool parameters.

Observe mode

Trial Vectimus without blocking anything. Observe mode logs all decisions but always allows actions.

vectimus observe on       # Log only, no enforcement
vectimus observe off      # Switch to enforcement
vectimus observe status   # Show current mode

Review the audit log at ~/.vectimus/logs/ to understand what your policies would block. Deploy in observe mode, review with your security team, then switch to enforcement.
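A review like that can be scripted. The sketch below tallies would-be denials per rule from JSONL log files; the field names ("decision", "rule") are assumptions about the log schema, for illustration only:

```python
import json
from collections import Counter
from pathlib import Path

# Assumed JSONL schema: one object per line with at least a "decision"
# field ("allow"/"deny") and a "rule" field naming the matched policy.
def would_block(log_dir: str) -> Counter:
    """Count observe-mode decisions that enforcement would have denied."""
    tally = Counter()
    for log_file in Path(log_dir).expanduser().glob("*.jsonl"):
        for line in log_file.read_text().splitlines():
            entry = json.loads(line)
            if entry.get("decision") == "deny":
                tally[entry.get("rule", "unknown")] += 1
    return tally
```

Running this over a week of observe-mode logs shows which rules would fire most often, which is useful input for the security-team review before switching to enforcement.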

Per-project overrides

vectimus rule disable vectimus-destruct-003              # This project only
vectimus rule disable vectimus-destruct-003 --global     # All projects
vectimus rule overrides                                  # View overrides

Overrides live in .vectimus/config.toml in the project root. The .vectimus/ directory is protected by policy -- agents cannot modify it.

Server mode

For team-wide enforcement, run Vectimus as a shared server:

pip install vectimus[server]
vectimus serve

All agent hooks forward to the server for centralised policy evaluation, audit logging and identity-aware decisions. Server documentation →

Uninstall

vectimus remove

Strips Vectimus hooks from all detected tools in the current project. Preserves non-Vectimus hooks. Config and audit logs at ~/.vectimus/ are not touched.

Documentation

Full docs at vectimus.com/docs.

Configuration reference

Create .vectimus/config.toml in your project root:

[policies]
dir = "./policies"

[server]
host = "0.0.0.0"
port = 8420

[logging]
dir = "~/.vectimus/logs"

[mcp]
allowed_servers = ["github", "slack"]

[identity]
resolver = "git"

Or use environment variables:

| Variable | Purpose |
| --- | --- |
| VECTIMUS_POLICY_DIR | Policy directory path |
| VECTIMUS_SERVER_URL | Server URL for hook forwarding |
| VECTIMUS_LOG_DIR | Audit log directory |
| VECTIMUS_OBSERVE | Set to true for observe mode |
| VECTIMUS_MCP_ALLOWED | Comma-separated approved MCP servers |
| VECTIMUS_API_KEY | API key for server authentication |
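For example, a CI job could trial Vectimus in observe mode purely through the environment. The variable names come from the table above; the values are illustrative:

```shell
# Trial run in CI: log decisions without enforcement, with a
# project-local policy directory and a scratch audit log location.
export VECTIMUS_OBSERVE=true
export VECTIMUS_POLICY_DIR=./policies
export VECTIMUS_LOG_DIR=/tmp/vectimus-logs
export VECTIMUS_MCP_ALLOWED=github,slack
```

Environment variables make the setup ephemeral, so nothing needs to be committed to the repository while evaluating.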

Contributing

Contributions welcome. Please open an issue before submitting large changes.

  1. Fork and clone the repository
  2. Install dev dependencies: uv pip install -e ".[dev]"
  3. Run tests: pytest
  4. Run linting: ruff check src/ tests/

License

Apache 2.0. See LICENSE.
