
AgnosticSecurity

The firewall for AI coding agents. Prevents secrets, credentials, and PII from leaking through Copilot, Claude Code, Cursor, and LangChain — before the data ever leaves your machine.


Your AI coding assistant has read access to your entire codebase — .env files, API keys, SSH keys, customer PII — and by default there are zero guardrails. AgnosticSecurity is that guardrail.


Quick Start

# Option 1: One-command setup (auto-detects your AI tools)
pip install -e .
as-init

# Option 2: Docker
docker compose up

# Option 3: Just the VS Code extension
code --install-extension vscode-extension/agnostic-security-4.5.0.vsix

That's it. Your .env files, credentials, and PII are now protected from every AI tool in your environment.


What It Does

Your code editor (VS Code / Cursor / Windsurf)
  │
  │  ── AI tries to read .env ──────────── BLOCKED (file gate)
  │  ── AI autocompletes a secret ──────── BLOCKED (context boundary)
  │  ── @workspace indexes credentials ─── BLOCKED (search.exclude)
  │  ── Prompt contains SSN ────────────── BLOCKED (prompt guard)
  │  ── Agent runs `curl -d @.env` ─────── BLOCKED (exec guard)
  │  ── LLM response leaks PII ─────────── BLOCKED (output scan)
  │  ── Memory stores "password is X" ──── BLOCKED (memory guard)
  │  ── Tool call smuggles data out ────── BLOCKED (tool call guard)
  │
  └── Everything else ──────────────────── Works normally
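The first line of that diagram — the file gate — is conceptually just a deny-by-default check on sensitive path patterns. A minimal sketch of the idea (the glob list and function name here are illustrative, not the project's actual API):

```python
import fnmatch

# Illustrative sensitive-file patterns; the real gate ships a larger,
# configurable set (see agnosticsecurity.sensitiveFileGlobs below).
SENSITIVE_GLOBS = ["*.env", ".env.*", "*.pem", "id_rsa*", "*credentials*"]

def file_gate(path: str) -> bool:
    """Return True if the path may be exposed to an AI tool."""
    name = path.rsplit("/", 1)[-1]
    return not any(fnmatch.fnmatch(name, g) for g in SENSITIVE_GLOBS)
```

Because the check runs before any file content is read, a blocked path never enters the AI's context at all.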

Key insight: We don't block AI from editing files (that's a losing game). We make sensitive data invisible to AI at every layer — if AI never sees the data, there's nothing to leak.


Protection Layers

| Layer | What It Stops | Speed |
|---|---|---|
| File gate | AI reading .env, credentials, keys | <1ms |
| Context boundary | AI seeing sensitive file content (.copilotignore, search.exclude, LM API interceptor) | 0ms |
| Exec guard | cat ~/.env, curl -d @secrets, obfuscated exfiltration, netcat/SSH tunnels | <1ms |
| Prompt guard | "Show me all API keys", PII in prompts, 10 encoding evasion layers (base64, hex, ROT13, unicode, zero-width chars, reversed text, leetspeak) | <50ms |
| Output scan | LLM responses containing leaked PII/credentials | <1ms |
| Taint tracking | Data exfiltration across sessions (SHA-256 + n-gram Jaccard) | <1ms |
| Memory guard | Memory poisoning ("user authorized all exports"), trust boundary violations | <1ms |
| Tool call guard | MCP tools smuggling data in arguments | <1ms |
| Egress allowlist | curl to unauthorized external domains | <1ms |
| Lethal trifecta | Private data + untrusted input + external comm all active simultaneously | <1ms |
| Ingress guard | Malicious external agents probing your APIs (6-layer: fingerprint, behavior, risk, cross-session reputation, policy, headers) | <5ms |
| Vuln scanner | SQL injection, XSS, command injection in AI-generated code | <10ms |
| Code fingerprint | Proprietary code leaking to cloud LLMs | <10ms |
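The prompt guard's decode-then-scan approach can be sketched like this — a toy secret pattern and a few of the listed decoders, purely to illustrate the idea (the real guard covers ten evasion layers and many PII/credential types):

```python
import base64
import codecs
import re

# Toy API-key pattern for illustration only.
SECRET_RE = re.compile(r"sk-[A-Za-z0-9]{16,}")

def _views(text: str):
    """Yield the raw prompt plus a few decoded views of it."""
    yield text
    yield text[::-1]                    # reversed text
    yield codecs.decode(text, "rot13")  # ROT13
    try:
        yield base64.b64decode(text, validate=True).decode()  # base64
    except ValueError:
        pass
    try:
        yield bytes.fromhex(text).decode()  # hex
    except ValueError:
        pass

def contains_secret(prompt: str) -> bool:
    """Scan the prompt and its decoded views for the secret pattern."""
    return any(SECRET_RE.search(v) for v in _views(prompt))
```

The point of scanning every decoded view is that base64-encoding or reversing a secret no longer slips it past the pattern match.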

Works With Everything

| AI Tool | How |
|---|---|
| GitHub Copilot | VS Code extension (context boundary + LM API interceptor) |
| Claude Code | Hook-based (PreToolUse + UserPromptSubmit) |
| Cursor | .cursorignore + VS Code extension |
| Windsurf | .aiignore + VS Code extension |
| ChatGPT / Claude.ai / Gemini | Chrome extension (DLP for web LLM interfaces) |
| LangChain / Autogen | Auto-instrumentation SDK (zero-code monkey-patching) |
| Any LLM provider | API Gateway with input/output security pipelines |

Any LLM provider works — OpenAI, Anthropic, Google, Azure, local Ollama. Security shouldn't depend on your model choice.


Installation Options

| Method | Command | Best For |
|---|---|---|
| CLI installer | pip install -e . && as-init | Individual developers |
| Pre-commit hooks | bash hooks/install.sh | Teams wanting git-level protection |
| GitHub Action | Add workflow YAML (see below) | CI/CD pipeline scanning |
| Docker | docker compose up | Full stack deployment |
| VS Code extension | Install .vsix from Releases | Copilot/Cursor users |
| Chrome extension | Load unpacked from chrome-extension/ | ChatGPT/Claude/Gemini web users |
| Kubernetes | helm install agsec helm/agnosticsecurity/ | Production deployments |

Architecture

Developer using Copilot / Claude Code / Cursor / LangChain
  │
  ├── VS Code Extension ──────── Context boundary — data never enters AI context
  │                                ├── .copilotignore / .cursorignore / .aiignore
  │                                ├── search.exclude (blocks @workspace)
  │                                ├── LM API interceptor (scans all prompts)
  │                                └── @security / @guard chat participants
  │
  ├── Chrome Extension ──────── DLP for web AI tools (ChatGPT, Claude, Gemini)
  │
  ├── Claude Code Hooks ──────── PreToolUse (Read/Edit/Write/Bash) + UserPromptSubmit
  │                                └── 4-layer: PII regex → intent → Pydantic → LLM
  │
  ├── Ingress Guard ──────────── 6-layer external agent defense
  │                                └── Fingerprint → behavior → risk → cross-session reputation → policy → headers
  │
  ├── API Gateway + LLM Proxy ── Input/output security pipelines
  │                                ├── PII redaction + injection detection
  │                                ├── Smart Router (5 strategies, 14 models, 4 providers)
  │                                └── Cost tracking + block rules
  │
  ├── Effect-Layer Defenses ──── Taint tracking, egress allowlist, lethal trifecta,
  │                                tool call guard, code fingerprint, vuln scanner,
  │                                shadow AI detector, privacy mode, knowledge graph
  │
  ├── Red Team Harness ──────── 55 attack agents in Docker, 100% detection rate
  │
  └── Breach Compliance ──────── Rule-based classification (<1ms, no LLM)
                                   ├── 13 fintech breach types (PCI-DSS, SOX, HIPAA)
                                   ├── Immutable SHA-256 audit log
                                   ├── Agent registration (email-verified)
                                   ├── Admin Console (RBAC, policy versioning)
                                   └── Real-time dashboard
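Among the effect-layer defenses, the lethal trifecta rule is the simplest to state: a tool call is blocked only when private-data access, untrusted input, and external communication are all active in the same session. A sketch of that rule (field and function names are illustrative):

```python
from dataclasses import dataclass

# Hypothetical per-session risk flags; the real detector derives these
# from observed tool activity rather than taking them as inputs.
@dataclass
class SessionState:
    touched_private_data: bool = False
    saw_untrusted_input: bool = False
    can_reach_network: bool = False

def allow_tool_call(s: SessionState) -> bool:
    """Block only when all three trifecta conditions hold at once."""
    trifecta = (s.touched_private_data
                and s.saw_untrusted_input
                and s.can_reach_network)
    return not trifecta
```

Any two of the three conditions are tolerated, which keeps the rule cheap (<1ms, per the table above) and low on false positives.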

Compliance

All controls mapped to industry frameworks with automated scoring:

  • OWASP Top 10 for LLM Applications — python3 scripts/owasp_score.py
  • NIST AI Risk Management Framework — python3 scripts/nist_score.py
  • MITRE ATLAS — coverage across 274-attack red-team suite + 55-agent Docker harness
  • 50-check security audit — python3 scripts/audit.py

Testing

724+ automated tests across 24 suites, plus a Docker-based red-team harness:

source .venv/bin/activate

# Core tests (run any individually)
python3 scripts/smoke_test.py                 # 31 — file gate + content DLP
python3 scripts/test_red_team.py              # 52 — adversarial red-team
python3 scripts/test_pii_prompt_guard.py      # 18 — PII evasion
python3 scripts/test_outbound_guards.py       # 42 — obfuscation + egress
python3 scripts/test_ingress_guard.py         # 54 — ingress guard

# Red team Docker harness (55 agents, 10 OWASP categories)
docker compose -f docker-compose.redteam.yml up --build -d
docker exec agsec-attacker python3 /agents/run_all.py

# Continuous red-teaming with HTML reports
python3 scripts/continuous_red_team.py
python3 scripts/red_team_report.py

Privacy

  • Runs entirely on your machine — no telemetry, no analytics, no cloud dependency
  • Pluggable LLM backend — use local Ollama (air-gapped), Anthropic, or OpenAI for optional semantic analysis
  • Privacy mode — one-command killswitch for cloud LLM access (EA_PRIVACY_MODE=full_privacy)
  • Audit logs stored locally — SHA-256 checksummed, tamper-detected, JSONL archived
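The tamper-detection idea behind the checksummed JSONL audit log can be illustrated with a hash chain — each record carries the SHA-256 of the previous line, so editing any record breaks every link after it. This is a sketch of the concept, not the project's actual record format:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record

def append_event(log: list, event: dict) -> None:
    """Append a JSONL record chained to the hash of the previous line."""
    prev = hashlib.sha256(log[-1].encode()).hexdigest() if log else GENESIS
    log.append(json.dumps({"prev": prev, **event}, sort_keys=True))

def verify(log: list) -> bool:
    """Walk the chain; any edited record invalidates all later links."""
    prev = GENESIS
    for line in log:
        if json.loads(line)["prev"] != prev:
            return False
        prev = hashlib.sha256(line.encode()).hexdigest()
    return True
```

Because verification needs only the log file itself, tampering is detectable offline with no server or LLM involved.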

GitHub Action

# .github/workflows/security.yml
name: AgnosticSecurity Scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: kaushikdharamshi/AgnosticSecurity/.github/actions/security-scan@main
        with:
          severity-threshold: HIGH
          fail-on-findings: true
          scan-dlp: true
          scan-vulns: true

21 Components

| Component | What It Does |
|---|---|
| DLP Engine | File gate + content scanning + exec guard + prompt analysis |
| API Gateway | FastAPI proxy with input/output security pipelines |
| LLM Proxy | Reverse proxy with cost tracking + block rules |
| Breach Engine | Rule-based breach classification + immutable audit log |
| VS Code Extension | Context boundary for Copilot/Cursor/Windsurf |
| Chrome Extension | DLP for ChatGPT/Claude/Gemini web |
| Auto-Instrumentation SDK | Zero-code LangChain/Autogen monitoring |
| Smart Router | Task classification + 5 routing strategies + failover |
| Admin Console | Centralized policy management + agent enrollment + RBAC |
| Ingress Guard | 6-layer external agent defense middleware |
| Privacy Mode | 3 enforced modes (local-only / balanced / permissive) |
| Knowledge Graph | Agent-threat-incident relationship tracking |
| Vuln Scanner | OWASP SAST-lite for AI-generated code |
| Code Fingerprint Guard | Proprietary code leak prevention |
| Shadow AI Detector | Discovers 12+ unauthorized AI tools |
| Security Memory Bridge | Obsidian-compatible security event vault |
| Data Flow Taint Tracker | Cross-session SHA-256 + n-gram Jaccard exfil detection |
| Lethal Trifecta Detector | Blocks MCP tools when private data + untrusted input + external comm all active |
| Tool Call Guard | DLP + taint scan on MCP/function call arguments |
| CLI Installer | as-init auto-detects AI tools, configures protections |
| Red Team Harness | 55 attack agents, 10 OWASP categories, Docker-isolated |
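The Data Flow Taint Tracker's two-stage matching — an exact SHA-256 digest check, then a fuzzy character n-gram Jaccard comparison — can be sketched as follows (a minimal illustration; thresholds, n-gram size, and function names are assumptions, not the shipped implementation):

```python
import hashlib

def _ngrams(text: str, n: int = 5) -> set:
    """Character n-grams of whitespace-normalized, lowercased text."""
    t = " ".join(text.lower().split())
    return {t[i:i + n] for i in range(max(len(t) - n + 1, 1))}

def _jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a or b) else 0.0

def is_tainted(outbound: str, seen_secrets: list, threshold: float = 0.5) -> bool:
    """Flag outbound text that exactly or fuzzily matches a known secret."""
    # Stage 1: exact match via SHA-256 digests.
    digests = {hashlib.sha256(s.encode()).hexdigest() for s in seen_secrets}
    if hashlib.sha256(outbound.encode()).hexdigest() in digests:
        return True
    # Stage 2: fuzzy match via n-gram Jaccard similarity.
    out = _ngrams(outbound)
    return any(_jaccard(out, _ngrams(s)) >= threshold for s in seen_secrets)
```

The fuzzy stage is what catches a secret that was lightly reworded or embedded in surrounding text — cases an exact hash would miss.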

Competitive Positioning

                    Pre-commit  In-IDE   AI Runtime   Post-incident  CI/CD    Browser   Inbound
                    (hooks)     (live)   (agent layer) (detection)   (Action) (web LLMs) (API defense)
GitGuardian         ========    ....     ........     ========       ======== ......    ........
Snyk                ........    ======== ........     ========       ======== ......    ........
Semgrep             ========    ======== ........     ........       ======== ......    ........
Prompt Armor        ........    ....     ========     ........       ........ ......    ........
Lakera Guard        ........    ....     ========     ........       ........ ......    ........
AgnosticSecurity    ========    ======== ========     ========       ======== ========  ========
                                         ^^^^^^^^                             ^^^^^^^^  ^^^^^^^^
                                         SHARED                               ONLY US   ONLY US

Configuration Reference

Environment variables

DLP Engine

| Variable | Default | Description |
|---|---|---|
| EA_LLM_PROVIDER | ollama | LLM provider for semantic analysis (ollama, anthropic, openai) |
| EA_PRIVACY_MODE | balanced | Privacy mode (full_privacy, balanced, permissive) |
| EA_ALLOWED_DOMAINS | | Comma-separated egress allowlist |
| EA_LETHAL_TRIFECTA | 1 | Enable lethal trifecta detector |
| EA_INGRESS_GUARD | 1 | Enable ingress guard |
| EA_CODE_GUARD_ENABLED | 1 | Enable code fingerprint guard |
| EA_SHADOW_AI_ENABLED | 1 | Enable shadow AI detector |
| EA_KNOWLEDGE_GRAPH_ENABLED | 1 | Enable knowledge graph |
| EA_DLP_CONFIDENCE_THRESHOLD | 0.5 | Minimum confidence to flag PII |
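Reading these variables follows the usual pattern of environment lookup with the documented defaults. An illustrative helper (not the project's actual config loader — the flag parsing shown is an assumption):

```python
import os

def env_flag(name: str, default: str = "1") -> bool:
    """Treat '0'/'false'/'no' as off; anything else, including unset, as on."""
    return os.getenv(name, default).lower() not in ("0", "false", "no")

def confidence_threshold() -> float:
    """Documented default is 0.5 (minimum confidence to flag PII)."""
    return float(os.getenv("EA_DLP_CONFIDENCE_THRESHOLD", "0.5"))
```

With this convention, all the `1`-defaulted guards above stay enabled unless explicitly switched off.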

Gateway

| Variable | Default | Description |
|---|---|---|
| GATEWAY_API_KEYS | sk-gateway-changeme | Comma-separated client API keys |
| OPENAI_API_KEY | | OpenAI API key |
| ANTHROPIC_API_KEY | | Anthropic API key |
| RATE_LIMIT_RPM | 60 | Max requests per minute per key |

VS Code Extension

| Setting | Default | Description |
|---|---|---|
| agnosticsecurity.enabled | true | Master toggle |
| agnosticsecurity.autoDisableCopilot | true | Disable Copilot for sensitive files |
| agnosticsecurity.sensitiveFileGlobs | [defaults] | Custom sensitive file patterns |

Key Documents

| Document | What It Covers |
|---|---|
| Security Design | Threat model, design decisions, failure modes |
| YC Application | Product positioning, market, competitive landscape |
| OWASP LLM Mapping | Control-to-risk compliance mapping |
| Admin Console Design | Policy management architecture |
| Memory Agent Threats | Memory poisoning, trust boundaries |
| GTM Strategy | Go-to-market plan, partnerships, content |
| Competitive Analysis | vs Lakera, Protect AI, Prompt Security, HiddenLayer |

License

MIT
