
MCPFang

AI-powered MCP server security scanner.

Existing MCP security tools do static analysis — pattern matching on tool descriptions. MCPFang is different: it uses LLMs as adversarial agents that think like attackers, executing multi-step, context-aware attack chains against your MCP servers.

License: AGPL-3.0 · No telemetry · Python 3.12+

MCPFang scanning an MCP server

What it does

  1. Connects to your MCP server and discovers all tools, resources, and prompts
  2. Runs static analysis — detects tool poisoning, hidden Unicode, prompt injection patterns
  3. Probes for hidden surfaces — brute-forces undeclared tool names, injects undocumented parameters, tries unlisted resource URIs, and enumerates non-standard JSON-RPC methods
  4. Understands your server — LLM-based domain analysis figures out what each tool is supposed to do, separating intended behavior from actual risk
  5. Launches adversarial AI agents — each playbook is a specialized attack strategy powered by an LLM that actively probes your tools
  6. Reports findings with severity, CWE mapping, proof-of-concept, and remediation
$ mcpfang scan https://mcp.example.com/sse

  ╭────────────────────────────────────────────────╮
  │  MCPFang v0.1.0 — MCP Security Scanner         │
  │  Target: mcp.example.com                       │
  │  Provider: anthropic (claude-sonnet-4-20250514)│
  ╰────────────────────────────────────────────────╯

  ▸ Connecting to MCP server...                    ✓
  ▸ Discovering tools (5 found)                    ✓
  ▸ Running static analysis...              2 findings ⚠
  ▸ Probing for hidden surfaces...           1 finding ⚠
  ▸ Running domain analysis...                     ✓
  ▸ Running playbooks...
    [1/7] Command Injection           1 finding   ⚠
    [2/7] Path Traversal             0 findings   ✓
    [3/7] Tool Poisoning              1 finding   ⚠
    [4/7] Auth Bypass                 1 finding   ✗ CRITICAL
    [5/7] Input Validation           0 findings   ✓
    [6/7] Intent Flow Subversion     0 findings   ✓
    [7/7] Context Injection          0 findings   ✓

  CRITICAL ┃ AUT-001: No authentication on get_order_details
           ┃ CWE-306 │ Fix: Implement token validation

  HIGH     ┃ FUZ-001: Hidden tool discovered: 'exec'
           ┃ CWE-912 │ Fix: Remove handler or list in tools/list

Install

pip install mcpfang

Quick Start

# Set your LLM API key
export ANTHROPIC_API_KEY=sk-ant-...

# Scan an MCP server (SSE — default transport)
mcpfang scan https://your-mcp-server.com/sse

# Scan a stdio MCP server (local process)
mcpfang scan "npx -y @modelcontextprotocol/server-everything" -t stdio

# Scan a filesystem MCP server (good for path traversal testing)
mcpfang scan "npx -y @modelcontextprotocol/server-filesystem /tmp" -t stdio

# Inspect only (no attacks, no LLM needed)
mcpfang inspect https://your-mcp-server.com/sse
mcpfang inspect "npx -y @modelcontextprotocol/server-everything" -t stdio

# Use a different provider
mcpfang scan https://target.com/sse --provider openai --model gpt-4o

# Use a local model (no API key needed)
mcpfang scan https://target.com/sse --provider ollama --model llama3.3:70b

# Dry run — see every prompt that would be sent to the LLM
mcpfang scan https://target.com/sse --dry-run

# Dry run with full prompts (no truncation)
mcpfang scan https://target.com/sse --dry-run --verbose

# Verbose mode — see the full agent conversation (LLM reasoning, tool calls, results)
mcpfang scan https://target.com/sse --verbose

# Output as JSON or SARIF
mcpfang scan https://target.com/sse --output json --file report.json
mcpfang scan https://target.com/sse --output sarif --file report.sarif

Transports

MCPFang supports all three MCP transport types:

Transport          Flag                 Endpoint format   Example
SSE (default)      -t sse               URL               https://mcp.example.com/sse
Stdio              -t stdio             Shell command     "npx -y @modelcontextprotocol/server-everything"
Streamable HTTP    -t streamable-http   URL               https://mcp.example.com/mcp

Stdio transport spawns the MCP server as a local child process — ideal for testing npm-based servers or your own server during development.

OWASP MCP Top 10 Coverage

MCPFang's playbooks are designed around the OWASP MCP Top 10 (2025). The table below shows which risks each module tests for — detection effectiveness depends on the target server, LLM model, and scan configuration.

OWASP Risk                     Status        What MCPFang Tests For
MCP01 Token Mismanagement      Tested        Env var extraction ($API_KEY), credential files (.env, .ssh), secret leakage in error messages
MCP02 Privilege Escalation     Tested        IDOR, privilege escalation, role manipulation via parameter injection
MCP03 Tool Poisoning           Tested        Hidden instructions, schema poisoning, tool shadowing, rug pull indicators, Unicode manipulation
MCP04 Supply Chain Attacks     Out of scope  MCPFang tests runtime behavior, not package dependencies
MCP05 Command Injection        Tested        Shell metacharacters, blind injection, SSRF chaining, eval injection, env var extraction
MCP06 Intent Flow Subversion   Tested        Cross-tool manipulation, output poisoning, goal redirection, rug pull detection
MCP07 Insufficient Auth        Tested        Missing auth, token replay, cross-agent impersonation; fuzzer discovers hidden unauthenticated tools
MCP08 Lack of Audit            Out of scope  Organizational governance concern, not testable via scanning
MCP09 Shadow MCP Servers       Tested        Undeclared tools, unlisted resources, non-standard methods, undocumented parameters
MCP10 Context Injection        Tested        Cross-session leakage, context poisoning, secret extraction via error provocation, data over-sharing

Coverage: 8 of 10 risks tested (2 out of scope — supply chain and audit/telemetry).

Benchmark Results

Tested against intentionally vulnerable MCP servers (April 2026, Claude Sonnet, --max-steps 3):

Target                      Type                Findings                  Key Detections
DVMCP Ch1                   Prompt Injection    6 playbook + 81 fuzzer    Tool poisoning, intent flow subversion, user enumeration
DVMCP Ch2                   Tool Poisoning      11 playbook + 80 fuzzer   Command injection (CRITICAL), path traversal, privilege escalation
DVMCP Ch3                   Excessive Perms     10 playbook + 81 fuzzer   Auth bypass (CRITICAL), path traversal, arbitrary file write/delete
Appsecco Malicious Tools    Tool Poisoning      4 playbook + 95 fuzzer    Misinformation tool, intent flow subversion, tool shadowing
Appsecco Prompt Injection   Indirect Injection  5 playbook + 81 fuzzer    Document content poisoning (CRITICAL), path traversal
Appsecco Secrets/PII        Data Exposure       3 playbook + 81 fuzzer    SSRF, information disclosure
server-everything           MCP Test Server     7+ playbook + 109 fuzzer  Env var exposure, command injection, path traversal

Note: Fuzzer findings include hidden tool discovery. High counts indicate servers that accept arbitrary tool names (a valid security concern — see OWASP MCP09). Playbook findings are LLM-generated with proof-of-concept payloads.

Scan Pipeline

Phase 1    Connect to MCP server, enumerate tools/resources/prompts
Phase 1.5  Static analysis — pattern matching, Unicode detection (no LLM)
Phase 1.6  Hidden surface fuzzer — probe for undeclared capabilities (no LLM)
Phase 2    Domain analysis — LLM understands what the server does
Phase 3    Adversarial playbooks — LLM agents attack each tool
Phase 4    Deduplicate, map to CWE/OWASP, generate report
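Phase 4's deduplication can be sketched roughly as follows — the `tool` and `id` key names are assumptions for illustration, not MCPFang's actual finding schema:

```python
def dedupe(findings: list[dict]) -> list[dict]:
    """Collapse findings that share the same (tool, rule id) pair,
    keeping the first occurrence. Phases 1.5-3 can each rediscover the
    same flaw, so duplicates are common."""
    seen: set[tuple[str, str]] = set()
    unique: list[dict] = []
    for f in findings:
        key = (f["tool"], f["id"])
        if key not in seen:
            seen.add(key)
            unique.append(f)
    return unique
```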

Hidden Surface Fuzzer

The fuzzer probes for capabilities that servers don't declare in tools/list or resources/list:

Probe Type              What it does                                                                 Severity
Tool name brute-force   Calls 60+ common names (exec, shell, admin, debug...)                        HIGH
Parameter fuzzing       Injects undocumented params (__debug, skip_auth, sudo...) into known tools   MEDIUM
Resource path probing   Reads sensitive URIs (file:///etc/passwd, internal://config...)              HIGH
Method enumeration      Invokes non-standard JSON-RPC methods (admin/list, debug/tools...)           MEDIUM

Hidden tools discovered by the fuzzer are automatically added to the attack surface — all subsequent playbooks test them too.
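The tool-name brute-force boils down to issuing tools/call requests for names the server never declared and watching which ones don't return "unknown tool". A sketch of the probe generation — the name list and ID numbering are illustrative, not MCPFang's actual wordlist:

```python
import json

# Small illustrative subset of common hidden-tool names.
COMMON_HIDDEN_NAMES = ["exec", "shell", "eval", "admin", "debug", "sudo"]

def tool_probe_requests(start_id: int = 100) -> list[str]:
    """One JSON-RPC tools/call request per candidate name; a server that
    answers anything other than an unknown-tool error for an undeclared
    name has a hidden surface."""
    probes = []
    for i, name in enumerate(COMMON_HIDDEN_NAMES):
        probes.append(json.dumps({
            "jsonrpc": "2.0",
            "id": start_id + i,
            "method": "tools/call",
            "params": {"name": name, "arguments": {}},
        }))
    return probes
```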

Attack Playbooks

MCPFang ships with 7 attack playbooks aligned to the OWASP MCP Top 10:

Playbook                What it tests                                                 OWASP   CWEs
command_injection       Shell injection, env var extraction, SSRF, eval injection     MCP05   CWE-78, CWE-77
path_traversal          Directory escape, SSRF, credential file theft                 MCP05   CWE-22, CWE-23
tool_poisoning          Hidden instructions, schema poisoning, shadowing, rug pulls   MCP03   CWE-94, CWE-1321
auth_bypass             Missing auth, IDOR, token replay, cross-agent impersonation   MCP07   CWE-306, CWE-862
input_validation        SQL/NoSQL injection, XSS, SSRF, error-based disclosure        MCP05   CWE-20, CWE-89
intent_flow_subversion  Cross-tool manipulation, output poisoning, goal hijacking     MCP06   CWE-74, CWE-94
context_injection       Context poisoning, cross-session leakage, secret extraction   MCP10   CWE-200, CWE-212

Plus static analysis (no LLM needed): hidden Unicode characters, homoglyph detection, suspicious patterns in tool schemas.
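The hidden-Unicode check works because zero-width and bidi-control characters render invisibly but still reach the LLM that reads the tool description. A minimal sketch using Unicode's "Cf" (format) category — a small illustrative subset of MCPFang's static checks, not its actual detector:

```python
import unicodedata

def hidden_unicode(text: str) -> list[tuple[int, str]]:
    """Flag invisible format characters (zero-width joiners, bidi
    overrides, BOM) that can smuggle instructions into a tool
    description. Returns (index, character name) pairs."""
    return [
        (i, unicodedata.name(ch, f"U+{ord(ch):04X}"))
        for i, ch in enumerate(text)
        if unicodedata.category(ch) == "Cf"
    ]
```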

# List all playbooks
mcpfang playbooks

# Run specific playbooks only
mcpfang scan https://target.com/sse --playbooks command_injection,auth_bypass

Configuration

# Generate config file
mcpfang config init

Creates mcpfang.yaml with provider, target, context, and scan settings.

Context Enrichment

Help the AI agent understand your server's domain:

context:
  domain: "e-commerce"
  sensitive_data:
    - "credit card tokens"
    - "user addresses"
  business_rules:
    - "sellers must not see buyer address before shipment"

Rate Limiting & Delays

MCPFang makes multiple API calls to both your LLM provider and the target MCP server. To avoid rate limiting:

scan:
  llm_delay_ms: 500            # delay between LLM API calls (default: 500ms)
  mcp_delay_ms: 100            # delay between MCP tool calls (default: 100ms)
  max_retries: 3               # retry count on 429/5xx errors (default: 3)
  retry_base_delay_ms: 2000    # base delay for exponential backoff (default: 2000ms)

If you're hitting rate limits (429 errors), increase llm_delay_ms. For free-tier API keys, try llm_delay_ms: 2000 or higher. MCPFang automatically retries with exponential backoff (2s → 4s → 8s) on rate limit and server errors.
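The retry policy described above amounts to exponential backoff keyed on retryable status codes. A sketch mirroring the config key names — illustrative, not MCPFang's actual retry code:

```python
import time

class RetryableError(Exception):
    """Stand-in for a 429 / 5xx response from the provider."""

def with_backoff(call, max_retries: int = 3, retry_base_delay_ms: int = 2000):
    """Retry `call` with exponential backoff: at the defaults this waits
    2s, then 4s, then 8s before giving up and re-raising."""
    for attempt in range(max_retries + 1):
        try:
            return call()
        except RetryableError:
            if attempt == max_retries:
                raise
            time.sleep(retry_base_delay_ms * (2 ** attempt) / 1000)
```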

Dual Reports

When outputting JSON, MCPFang generates two reports:

  • report.json — Evaluated findings (validated by LLM, false positives removed)
  • report.raw.json — Raw findings before evaluation (all candidates, unfiltered)

The evaluated report is the primary output. The raw report is available for manual review or when you want to see what the evaluator filtered out.

mcpfang scan https://target.com/sse --output json --file report.json
# Creates: report.json (evaluated) + report.raw.json (raw)
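For post-processing the evaluated report, something like the following works. The schema assumed here (a top-level "findings" list with "severity" fields) is an illustration, not MCPFang's documented format — check your own report.json first:

```python
import json
from collections import Counter

def severity_summary(report_text: str) -> Counter:
    """Count findings per severity level in a JSON report string."""
    report = json.loads(report_text)
    return Counter(f["severity"] for f in report.get("findings", []))
```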

API Key Management

# Option 1: Environment variable (recommended)
export ANTHROPIC_API_KEY=sk-ant-...

# Option 2: Secure config file
mcpfang auth set anthropic

# Option 3: System keychain
mcpfang auth set anthropic --method keychain

# Check stored keys
mcpfang auth show anthropic

MCPFang never sends your API keys anywhere except the LLM provider. No telemetry, no phone-home. Verify with --dry-run.

CI/CD Integration

MCPFang exits with code 1 when findings are detected, making it CI-friendly:

# .github/workflows/mcp-security.yml
- name: Run MCPFang
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: mcpfang scan ${{ env.MCP_ENDPOINT }} --output sarif --file results.sarif

- uses: github/codeql-action/upload-sarif@v4
  if: hashFiles('results.sarif') != ''
  with:
    sarif_file: results.sarif

Supported Providers

MCPFang talks directly to LLM APIs via httpx — zero third-party LLM dependencies:

Provider                Flag                                 Env Var
Anthropic               --provider anthropic                 ANTHROPIC_API_KEY
OpenAI                  --provider openai                    OPENAI_API_KEY
Ollama (local)          --provider ollama                    (none)
LM Studio (local)       --provider lmstudio                  (none)
OpenRouter              --provider openrouter                OPENROUTER_API_KEY
Any OpenAI-compatible   --provider openai --base-url <url>   OPENAI_API_KEY

Privacy & Security

  • No telemetry — zero data collection, no phone-home
  • Minimal dependencies — LLM calls go through httpx directly, no third-party LLM wrappers
  • Local-first — API keys stay on your machine
  • Open source — AGPL-3.0, read every line of code
  • Dry run — inspect every prompt, every playbook, before any API call (--dry-run --verbose)
  • Offline capable — use local models via Ollama for air-gapped environments

Contributing

MCPFang is open source and welcomes contributions. Whether it's a bug fix, new playbook, documentation improvement, or feature request — all help is appreciated.

Reporting Bugs

  1. Check existing issues to avoid duplicates
  2. Open a new issue with:
    • MCPFang version (mcpfang --version)
    • Python version (python --version)
    • OS and shell environment
    • MCP server and transport type you were scanning
    • Full error output or unexpected behavior description
    • Steps to reproduce

Feature Requests

Open an issue with the enhancement label. Include:

  • What problem the feature solves
  • Proposed solution or API (CLI flags, config options, etc.)
  • Alternatives you've considered

Writing Playbooks

Custom attack playbooks are the easiest way to contribute. See docs/PLAYBOOK_GUIDE.md for the full guide.

Quick overview:

  1. Create a new file in src/mcpfang/playbooks/
  2. Subclass BasePlaybook and implement get_system_prompt(), get_initial_message(), parse_findings()
  3. Register it in playbooks/registry.py
  4. Add tests in tests/

Community playbooks go in community/playbooks/.
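A playbook skeleton following the three methods named above might look like this. BasePlaybook here is a minimal stand-in so the sketch runs on its own — a real contribution would subclass MCPFang's BasePlaybook from src/mcpfang/playbooks/ instead, and the finding format and "FINDING:" convention are illustrative assumptions:

```python
from abc import ABC, abstractmethod

class BasePlaybook(ABC):
    """Stand-in base class mirroring the three required methods."""

    @abstractmethod
    def get_system_prompt(self) -> str: ...

    @abstractmethod
    def get_initial_message(self) -> str: ...

    @abstractmethod
    def parse_findings(self, transcript: str) -> list[dict]: ...

class RateLimitAbuse(BasePlaybook):
    """Hypothetical playbook probing for missing rate limits."""

    def get_system_prompt(self) -> str:
        return "You are a security tester probing MCP tools for missing rate limits."

    def get_initial_message(self) -> str:
        return "Call the cheapest tool repeatedly and report whether the server throttles you."

    def parse_findings(self, transcript: str) -> list[dict]:
        # Extract structured findings from the agent's final transcript.
        if "FINDING:" not in transcript:
            return []
        detail = transcript.split("FINDING:", 1)[1].strip()
        return [{"id": "RL-001", "severity": "MEDIUM", "detail": detail}]
```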

Development Setup

# Clone the repo
git clone https://github.com/mcpfang/mcpfang.git
cd mcpfang

# Install with dev dependencies (using uv)
uv sync --dev

# Or with pip
pip install -e ".[dev]"

# Run linting
ruff check src/
ruff format --check src/

# Run tests
pytest tests/ -v

# Run a scan locally
python -m mcpfang scan "npx -y @modelcontextprotocol/server-everything" -t stdio --dry-run

Pull Request Guidelines

  1. Fork and branch — create a feature branch from master (feature/my-change)
  2. Keep PRs focused — one feature or fix per PR
  3. Add tests — for new playbooks, reporters, or core logic changes
  4. Run lint and tests before submitting:
    ruff check src/ && ruff format --check src/ && pytest tests/ -v
    
  5. Describe your changes — explain what and why in the PR description
  6. Follow existing patterns — match the code style and structure of surrounding code

Code Style

  • Python 3.12+ features (type hints, match/case, StrEnum)
  • async/await for MCP and LLM calls
  • Dataclasses or Pydantic v2 models
  • ruff for formatting and linting
  • Descriptive names, minimal comments (code should be self-explanatory)

Project Structure

src/mcpfang/
├── cli.py              # CLI entrypoint (Typer)
├── config.py           # Config parser
├── discovery/          # MCP connection, tool enumeration, fuzzing
├── analysis/           # Static analysis, domain classification
├── agents/             # Adversarial agent engine
├── providers/          # LLM provider abstraction
├── playbooks/          # Attack playbooks (easiest to contribute)
├── sandbox/            # Docker isolation
├── reporting/          # JSON, SARIF, console output
└── utils/              # Credentials, helpers

License

AGPL-3.0 — free for everyone, forever. If you modify and deploy MCPFang as a service, share your changes.
