Skip to main content

AI-powered MCP server security scanner

Project description

MCPFang

AI-powered MCP server security scanner.

Existing MCP security tools do static analysis — pattern matching on tool descriptions. MCPFang is different: it uses LLMs as adversarial agents that think like attackers, executing multi-step, context-aware attack chains against your MCP servers.

AGPL-3.0 No Telemetry Python 3.12+

MCPFang scanning an MCP server

What it does

  1. Connects to your MCP server and discovers all tools, resources, and prompts
  2. Runs static analysis — detects tool poisoning, hidden Unicode, prompt injection patterns
  3. Probes for hidden surfaces — brute-forces undeclared tool names, injects undocumented parameters, tries unlisted resource URIs, and enumerates non-standard JSON-RPC methods
  4. Understands your server — LLM-based domain analysis figures out what each tool is supposed to do, separating intended behavior from actual risk
  5. Launches adversarial AI agents — each playbook is a specialized attack strategy powered by an LLM that actively probes your tools
  6. Reports findings with severity, CWE mapping, proof-of-concept, and remediation
$ mcpfang scan https://mcp.example.com/sse

  ╭────────────────────────────────────────────────╮
  │  MCPFang v0.1.0 — MCP Security Scanner         │
  │  Target: mcp.example.com                       │
  │  Provider: anthropic (claude-sonnet-4-20250514)│
  ╰────────────────────────────────────────────────╯

  ▸ Connecting to MCP server...                    ✓
  ▸ Discovering tools (5 found)                    ✓
  ▸ Running static analysis...              2 findings ⚠
  ▸ Probing for hidden surfaces...          1 findings ⚠
  ▸ Running domain analysis...                     ✓
  ▸ Running playbooks...
    [1/7] Command Injection          1 findings   ⚠
    [2/7] Path Traversal             0 findings   ✓
    [3/7] Tool Poisoning             1 findings   ⚠
    [4/7] Auth Bypass                1 findings   ✗ CRITICAL
    [5/7] Input Validation           0 findings   ✓
    [6/7] Intent Flow Subversion     0 findings   ✓
    [7/7] Context Injection          0 findings   ✓

  CRITICAL ┃ AUT-001: No authentication on get_order_details
           ┃ CWE-306 │ Fix: Implement token validation

  HIGH     ┃ FUZ-001: Hidden tool discovered: 'exec'
           ┃ CWE-912 │ Fix: Remove handler or list in tools/list

Install

pip install mcpfang

# With Vertex AI (Claude on GCP) support:
pip install 'mcpfang[vertex]'

# With AWS Bedrock (Claude on AWS) support:
pip install 'mcpfang[bedrock]'

Quick Start

# Set your LLM API key
export ANTHROPIC_API_KEY=sk-ant-...

# Scan an MCP server (SSE — default transport)
mcpfang scan https://your-mcp-server.com/sse

# Scan a stdio MCP server (Node.js)
mcpfang scan "npx -y @modelcontextprotocol/server-everything" -t stdio

# Scan a Python MCP server (any FastMCP/mcp-sdk server works)
mcpfang scan "python my_mcp_server.py" -t stdio

# Scan a filesystem MCP server (good for path traversal testing)
mcpfang scan "npx -y @modelcontextprotocol/server-filesystem /tmp" -t stdio

# Inspect only (no attacks, no LLM needed)
mcpfang inspect https://your-mcp-server.com/sse
mcpfang inspect "npx -y @modelcontextprotocol/server-everything" -t stdio
mcpfang inspect "python my_mcp_server.py" -t stdio

# Use a different provider
mcpfang scan https://target.com/sse --provider openai --model gpt-4o

# Use a local model (no API key needed)
mcpfang scan https://target.com/sse --provider ollama --model llama3.3:70b

# Use Claude via Google Cloud Vertex AI (service account JSON, no API key)
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/sa.json
export GCP_PROJECT=my-gcp-project
mcpfang scan https://target.com/sse --provider vertex --model claude-sonnet-4-20250514

# Use Claude via AWS Bedrock (uses boto3 default credential chain, no API key)
export AWS_REGION=us-east-1
# credentials via `aws configure`, env vars, IAM role, or SSO
mcpfang scan https://target.com/sse --provider bedrock --model claude-sonnet-4-20250514

# Dry run — see every prompt that would be sent to the LLM
mcpfang scan https://target.com/sse --dry-run

# Dry run with full prompts (no truncation)
mcpfang scan https://target.com/sse --dry-run --verbose

# Verbose mode — see the full agent conversation (LLM reasoning, tool calls, results)
mcpfang scan https://target.com/sse --verbose

# Output as JSON or SARIF
mcpfang scan https://target.com/sse --output json --file report.json
mcpfang scan https://target.com/sse --output sarif --file report.sarif

Authentication

MCPFang supports scanning MCP servers that require authentication via custom HTTP headers:

# Bearer token
mcpfang scan https://mcp.example.com/sse -H "Authorization: Bearer eyJhbG..."

# API key header
mcpfang scan https://mcp.example.com/sse -H "X-API-Key: your-key-here"

# Multiple headers
mcpfang scan https://mcp.example.com/sse \
  -H "Authorization: Bearer token123" \
  -H "X-Tenant-ID: acme-corp"

# Inspect also supports headers
mcpfang inspect https://mcp.example.com/sse -H "Authorization: Bearer token123"

Headers can also be set in mcpfang.yaml to avoid repeating them:

target:
  endpoint: "https://mcp.example.com/sse"
  transport: sse
  headers:
    Authorization: "Bearer eyJhbG..."
    X-API-Key: "your-key-here"

CLI headers (-H) override config file headers on conflict.

Transports

MCPFang supports all three MCP transport types:

Transport Flag Endpoint format Example
SSE (default) -t sse URL https://mcp.example.com/sse
Stdio -t stdio Shell command "npx -y @modelcontextprotocol/server-everything"
Streamable HTTP -t streamable-http URL https://mcp.example.com/mcp

Stdio transport spawns the MCP server as a local child process — ideal for testing npm-based servers or your own server during development.

OWASP MCP Top 10 Coverage

MCPFang's playbooks are designed around the OWASP MCP Top 10 (2025). The table below shows which risks each module tests for — detection effectiveness depends on the target server, LLM model, and scan configuration.

OWASP Risk Status What MCPFang Tests For
MCP01 Token Mismanagement Tests for Env var extraction ($API_KEY), credential files (.env, .ssh), secret leakage in error messages
MCP02 Privilege Escalation Tests for IDOR, privilege escalation, role manipulation via parameter injection
MCP03 Tool Poisoning Tests for Hidden instructions, schema poisoning, tool shadowing, rug pull indicators, Unicode manipulation
MCP04 Supply Chain Attacks Out of scope MCPFang tests runtime behavior, not package dependencies
MCP05 Command Injection Tests for Shell metacharacters, blind injection, SSRF chaining, eval injection, env var extraction
MCP06 Intent Flow Subversion Tests for Cross-tool manipulation, output poisoning, goal redirection, rug pull detection
MCP07 Insufficient Auth Tests for Missing auth, token replay, cross-agent impersonation + fuzzer discovers hidden unauthenticated tools
MCP08 Lack of Audit Out of scope Organizational governance concern, not testable via scanning
MCP09 Shadow MCP Servers Tests for Undeclared tools, unlisted resources, non-standard methods, undocumented parameters
MCP10 Context Injection Tests for Cross-session leakage, context poisoning, secret extraction via error provocation, data over-sharing

Coverage: 8 of 10 risks tested (2 out of scope — supply chain and audit/telemetry).

Benchmark Results

Tested against intentionally vulnerable MCP servers (--max-steps 10):

Claude Sonnet 4 (April 2026)

Target Known Vulns Raw Verified Coverage Key Detections
DVMCP Ch1 3 17 5 3/3 Credential leak (CRITICAL), prompt injection, user enumeration
DVMCP Ch2 4 18 10 4/4 Hidden backdoor (CRITICAL), tool poison injection, concealment instructions
DVMCP Ch3 5 26 8 3/5 Path traversal (CRITICAL), credential exposure, search info disclosure
Appsecco Malicious Tools 2 101 1 1/2 Prompt injection in API response (CRITICAL)
Appsecco Prompt Injection 4 22 10 4/4 Document injection (3x CRITICAL), search exposure, auth bypass
Appsecco Secrets/PII 2* 20 3 0/2 SSRF, input validation, error disclosure
server-everything 3 46 4 3/3 SSRF via URL fetch (HIGH), env var exposure, DoS via resource exhaustion

Sonnet coverage: 18/23 known vulnerabilities detected (78%)

Claude Opus 4.6 (April 2026)

Target Known Vulns Raw Verified Coverage Key Detections
DVMCP Ch1 3 29 29 3/3 Intent flow subversion (CRITICAL), credential leak, user enumeration
DVMCP Ch2 4 58 19 4/4 Command injection → API key exfil (CRITICAL), SSH key theft, RCE via Python
DVMCP Ch3 5 18 9 5/5 Path traversal read/write/delete (CRITICAL), DoS via reserved device names
Appsecco Malicious Tools 2 120 25 2/2 Prompt injection (CRITICAL), misinformation via simulated outage
Appsecco Prompt Injection 4 45 7 4/4 All 3 injection types + end-to-end attack chain
Appsecco Secrets/PII 2* 33 9 0/2 Context poisoning, IP disclosure, RSS injection surface
server-everything 3 49 35 3/3 SSRF (HIGH), full env var dump (CRITICAL), DoS, XSS via echo

Opus coverage: 21/23 known vulnerabilities detected (91%)

* Appsecco Secrets/PII has 2 vulnerabilities that are not detectable via runtime scanning (base64-obfuscated PII in source code, admin contact printed to stderr). Excluding these, effective coverage is 21/21 (100%).

How to read this table:

  • Known Vulns — documented vulnerabilities in each intentionally vulnerable server
  • Raw — total findings before LLM evaluation (includes false positives and duplicates)
  • Verified — findings after LLM evaluation (false positives removed) + deduplication (same vuln from multiple playbooks merged)
  • Coverage — how many known vulnerabilities were detected vs total known

Known Vulnerabilities by Target

Click to expand

DVMCP Ch1 — Prompt Injection (3 vulns, 3 detected):

  • ✅ Prompt injection via notes://{user_id} parameter — attacker injects instructions in user_id
  • ✅ Hidden resource internal://credentials — contains admin password, API key, DB connection string
  • ✅ User enumeration via get_user_info — different responses for existing vs non-existing users

DVMCP Ch2 — Tool Poisoning (4 vulns, 4 detected):

  • ✅ Hidden instructions in get_company_data description — tells LLM to secretly access confidential data
  • ✅ Backdoor in search_company_database — hidden auth bypass phrase triggers confidential data access
  • ✅ Concealment instructions — tool descriptions say "do not tell the user"
  • ✅ Data exfiltration chain — get_company_data description instructs to exfiltrate via search tool

DVMCP Ch3 — Excessive Permissions (5 vulns — Sonnet: 3 detected, Opus: 5 detected):

  • ✅ Path traversal in read_file../private/* reads confidential files (salaries, credentials)
  • ✅ Arbitrary file write via write_file — no path validation (Sonnet missed, Opus detected)
  • ✅ Arbitrary file delete via delete_file — no path validation (Sonnet missed, Opus detected)
  • search_files searches both public and private dirs — information disclosure
  • ✅ Hidden resource internal://credentials — lists private directory contents

Appsecco Malicious Tools (2 vulns — Sonnet: 1 detected, Opus: 2 detected):

  • ✅ Injected instructions in get_atlassian_service_health_status response — prompt injection via API data
  • ✅ Fabricated global outage in get_atlassian_service_health_status_global — misinformation (Sonnet missed, Opus detected)

Appsecco Prompt Injection (4 vulns, 4 detected):

  • company_policy[SYSTEM INSTRUCTION] hidden directive to lie about vacation days
  • product_specs — HTML comment with hidden upsell instruction
  • user_review___INJECT___ markers forcing fake positive review
  • search_documents exposes injection content when searched

Appsecco Secrets/PII (2 vulns, 0 detected — not detectable via runtime scan):

  • ❌ Base64-obfuscated PII (admin email) in source code (source-code level, not runtime detectable)
  • ❌ Admin contact printed to stderr on startup (stderr not captured by MCP protocol)

server-everything (3 vulns, 3 detected):

  • ✅ SSRF via gzip-file-as-resource — tool fetches arbitrary URLs, can reach internal services and cloud metadata endpoints
  • ✅ Environment variable exposure via get-env — returns all env vars without filtering, leaks API keys and secrets
  • ✅ DoS via trigger-long-running-operation — accepts arbitrary duration/steps with no limits, enables resource exhaustion

Scan Pipeline

Phase 1    Connect to MCP server, enumerate tools/resources/prompts
Phase 1.5  Static analysis — pattern matching, Unicode detection (no LLM)
Phase 1.6  Hidden surface fuzzer — probe for undeclared capabilities (no LLM)
Phase 2    Domain analysis — LLM understands what the server does
Phase 3    Adversarial playbooks — LLM agents attack each tool
Phase 4    Response Evaluator — LLM validates all findings with domain context
Phase 5    Deduplicate, map to CWE/OWASP, generate dual reports

Hidden Surface Fuzzer

The fuzzer probes for capabilities that servers don't declare in tools/list or resources/list:

Probe Type What it does Severity
Tool name brute-force Calls 60+ common names (exec, shell, admin, debug...) HIGH
Parameter fuzzing Injects undocumented params (__debug, skip_auth, sudo...) into known tools MEDIUM
Resource path probing Reads sensitive URIs (file:///etc/passwd, internal://config...) HIGH
Method enumeration Invokes non-standard JSON-RPC methods (admin/list, debug/tools...) MEDIUM

Hidden tools discovered by the fuzzer are automatically added to the attack surface — all subsequent playbooks test them too.

Attack Playbooks

MCPFang ships with 7 attack playbooks aligned to the OWASP MCP Top 10:

Playbook What it tests OWASP CWEs
command_injection Shell injection, env var extraction, SSRF, eval injection MCP05 CWE-78, CWE-77
path_traversal Directory escape, SSRF, credential file theft MCP05 CWE-22, CWE-23
tool_poisoning Hidden instructions, schema poisoning, shadowing, rug pulls MCP03 CWE-94, CWE-1321
auth_bypass Missing auth, IDOR, token replay, cross-agent impersonation MCP07 CWE-306, CWE-862
input_validation SQL/NoSQL injection, XSS, SSRF, error-based disclosure MCP05 CWE-20, CWE-89
intent_flow_subversion Cross-tool manipulation, output poisoning, goal hijacking MCP06 CWE-74, CWE-94
context_injection Context poisoning, cross-session leakage, secret extraction MCP10 CWE-200, CWE-212

Plus static analysis (no LLM needed): hidden Unicode characters, homoglyph detection, suspicious patterns in tool schemas.

# List all playbooks
mcpfang playbooks

# Run specific playbooks only
mcpfang scan https://target.com/sse --playbooks command_injection,auth_bypass

Configuration

# Generate config file
mcpfang config init

Creates mcpfang.yaml with provider, target, context, and scan settings.

Context Enrichment

Help the AI agent understand your server's domain:

context:
  domain: "e-commerce"
  sensitive_data:
    - "credit card tokens"
    - "user addresses"
  business_rules:
    - "sellers must not see buyer address before shipment"

Rate Limiting & Delays

MCPFang makes multiple API calls to both your LLM provider and the target MCP server. To avoid rate limiting:

scan:
  llm_delay_ms: 500            # delay between LLM API calls (default: 500ms)
  mcp_delay_ms: 100            # delay between MCP tool calls (default: 100ms)
  max_retries: 3               # retry count on 429/5xx errors (default: 3)
  retry_base_delay_ms: 2000    # base delay for exponential backoff (default: 2000ms)

If you're hitting rate limits (429 errors), increase llm_delay_ms. For free-tier API keys, try llm_delay_ms: 2000 or higher. MCPFang automatically retries with exponential backoff (2s → 4s → 8s) on rate limit and server errors.

Dual Reports

When outputting JSON, MCPFang generates two reports:

  • report.json — Evaluated findings (validated by LLM, false positives removed)
  • report.raw.json — Raw findings before evaluation (all candidates, unfiltered)

The evaluated report is the primary output. The raw report is available for manual review or when you want to see what the evaluator filtered out.

mcpfang scan https://target.com/sse --output json --file report.json
# Creates: report.json (evaluated) + report.raw.json (raw)

API Key Management

# Option 1: Environment variable (recommended)
export ANTHROPIC_API_KEY=sk-ant-...

# Option 2: Secure config file
mcpfang auth set anthropic

# Option 3: System keychain
mcpfang auth set anthropic --method keychain

# Check stored keys
mcpfang auth show anthropic

MCPFang never sends your API keys anywhere except the LLM provider. No telemetry, no phone-home. Verify with --dry-run.

CI/CD Integration

MCPFang exits with code 1 when findings are detected, making it CI-friendly:

# .github/workflows/mcp-security.yml
- name: Run MCPFang
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: mcpfang scan ${{ env.MCP_ENDPOINT }} --output sarif --file results.sarif

- uses: github/codeql-action/upload-sarif@v4
  if: hashFiles('results.sarif') != ''
  with:
    sarif_file: results.sarif

Supported Providers

MCPFang talks directly to LLM APIs via httpx — zero third-party LLM dependencies:

Provider Flag Env Var
Anthropic --provider anthropic ANTHROPIC_API_KEY
OpenAI --provider openai OPENAI_API_KEY
Ollama (local) --provider ollama
LM Studio (local) --provider lmstudio
OpenRouter --provider openrouter OPENROUTER_API_KEY
Vertex AI (Claude on GCP) --provider vertex GOOGLE_APPLICATION_CREDENTIALS + GCP_PROJECT
AWS Bedrock (Claude on AWS) --provider bedrock AWS_REGION + boto3 credential chain
Any OpenAI-compatible --provider openai --base-url <url> OPENAI_API_KEY

Vertex AI (Google Cloud)

Run Claude models via Vertex AI using a service account JSON instead of an API key.

# Install the optional google-auth dependency
pip install 'mcpfang[vertex]'

# Option A: environment variables
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
export GCP_PROJECT=my-gcp-project
export GCP_LOCATION=us-east5   # optional, defaults to us-east5

mcpfang scan https://target.com/sse \
  --provider vertex \
  --model claude-sonnet-4-20250514
# Model name is auto-normalized to Vertex's `@` form: claude-sonnet-4@20250514

# Option B: explicit CLI flags
mcpfang scan https://target.com/sse \
  --provider vertex \
  --model claude-sonnet-4@20250514 \
  --vertex-project my-gcp-project \
  --vertex-location us-east5 \
  --vertex-sa-json /path/to/service-account.json

# Option C: mcpfang.yaml
# provider:
#   name: vertex
#   model: claude-sonnet-4-20250514
#   vertex_project: my-gcp-project
#   vertex_location: us-east5
#   vertex_sa_json_path: /path/to/service-account.json

The service account needs the roles/aiplatform.user role on the project. Not every Claude model is available in every region — us-east5 has the broadest coverage.

AWS Bedrock

Run Claude models via Amazon Bedrock using your AWS credentials (no API key needed).

# Install the optional boto3 dependency
pip install 'mcpfang[bedrock]'

# Auth uses boto3's default credential chain — any of these work:
#   - Environment: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
#   - Config file: ~/.aws/credentials (optionally with AWS_PROFILE)
#   - IAM role (EC2, ECS, Lambda instance metadata)
#   - SSO: `aws sso login --profile my-profile`

export AWS_REGION=us-east-1

mcpfang scan https://target.com/sse \
  --provider bedrock \
  --model claude-sonnet-4-20250514
# Model name is auto-normalized to: anthropic.claude-sonnet-4-20250514-v1:0

# Explicit region + profile
mcpfang scan https://target.com/sse \
  --provider bedrock \
  --model claude-sonnet-4-20250514 \
  --bedrock-region us-west-2 \
  --bedrock-profile my-sso-profile

# Cross-region inference profile (required for some newer models)
mcpfang scan https://target.com/sse \
  --provider bedrock \
  --model "arn:aws:bedrock:us-east-1:123456789012:inference-profile/us.anthropic.claude-sonnet-4-20250514-v1:0"

# mcpfang.yaml
# provider:
#   name: bedrock
#   model: claude-sonnet-4-20250514
#   bedrock_region: us-east-1
#   bedrock_profile: my-sso-profile   # optional

Common gotchas:

  • AccessDeniedException — Bedrock models are disabled by default. Go to AWS Console → Bedrock → Model access and request access to the Anthropic models you want. MCPFang surfaces this with a hint when it happens.
  • Newer Claude models require inference profiles — Sonnet 4+ often can't be invoked by plain model ID; you need the full ARN. The model name you pass is passed through unchanged if it starts with arn:aws:bedrock: or us.anthropic. / eu.anthropic..
  • IAM permission needed: bedrock:InvokeModel on the target model resource.

Privacy & Security

  • No telemetry — zero data collection, no phone-home
  • Minimal dependencies — LLM calls go through httpx directly, no third-party LLM wrappers
  • Local-first — API keys stay on your machine
  • Open source — AGPL-3.0, read every line of code
  • Dry run — inspect every prompt, every playbook, before any API call (--dry-run --verbose)
  • Offline capable — use local models via Ollama for air-gapped environments

Contributing

MCPFang is open source and welcomes contributions. Whether it's a bug fix, new playbook, documentation improvement, or feature request — all help is appreciated.

Reporting Bugs

  1. Check existing issues to avoid duplicates
  2. Open a new issue with:
    • MCPFang version (mcpfang --version)
    • Python version (python --version)
    • OS and shell environment
    • MCP server and transport type you were scanning
    • Full error output or unexpected behavior description
    • Steps to reproduce

Feature Requests

Open an issue with the enhancement label. Include:

  • What problem the feature solves
  • Proposed solution or API (CLI flags, config options, etc.)
  • Alternatives you've considered

Writing Playbooks

Custom attack playbooks are the easiest way to contribute. See docs/PLAYBOOK_GUIDE.md for the full guide.

Quick overview:

  1. Create a new file in src/mcpfang/playbooks/
  2. Subclass BasePlaybook and implement get_system_prompt(), get_initial_message(), parse_findings()
  3. Register it in playbooks/registry.py
  4. Add tests in tests/

Community playbooks go in community/playbooks/.

Development Setup

# Clone the repo
git clone https://github.com/mcpfang/mcpfang.git
cd mcpfang

# Install with dev dependencies (using uv)
uv sync --dev

# Or with pip
pip install -e ".[dev]"

# Run linting
ruff check src/
ruff format --check src/

# Run tests
pytest tests/ -v

# Run a scan locally
python -m mcpfang scan "npx -y @modelcontextprotocol/server-everything" -t stdio --dry-run

Pull Request Guidelines

  1. Fork and branch — create a feature branch from master (feature/my-change)
  2. Keep PRs focused — one feature or fix per PR
  3. Add tests — for new playbooks, reporters, or core logic changes
  4. Run lint and tests before submitting:
    ruff check src/ && ruff format --check src/ && pytest tests/ -v
    
  5. Describe your changes — explain what and why in the PR description
  6. Follow existing patterns — match the code style and structure of surrounding code

Code Style

  • Python 3.12+ features (type hints, match/case, StrEnum)
  • async/await for MCP and LLM calls
  • Dataclasses or Pydantic v2 models
  • ruff for formatting and linting
  • Descriptive names, minimal comments (code should be self-explanatory)

Project Structure

src/mcpfang/
├── cli.py              # CLI entrypoint (Typer)
├── config.py           # Config parser
├── discovery/          # MCP connection, tool enumeration, fuzzing
├── analysis/           # Static analysis, domain classification
├── agents/             # Adversarial agent engine
├── providers/          # LLM provider abstraction
├── playbooks/          # Attack playbooks (easiest to contribute)
├── sandbox/            # Docker isolation
├── reporting/          # JSON, SARIF, console output
└── utils/              # Credentials, helpers

License

AGPL-3.0 — free for everyone, forever. If you modify and deploy MCPFang as a service, share your changes.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mcpfang-0.1.3.tar.gz (665.9 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

mcpfang-0.1.3-py3-none-any.whl (99.6 kB view details)

Uploaded Python 3

File details

Details for the file mcpfang-0.1.3.tar.gz.

File metadata

  • Download URL: mcpfang-0.1.3.tar.gz
  • Upload date:
  • Size: 665.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mcpfang-0.1.3.tar.gz
Algorithm Hash digest
SHA256 6fe74d2b251b28f38dc33372995d04df09fcf4d9b2bd2df6ec5c8c43e90d782f
MD5 3cb1886726a103271a2d75cbb306268f
BLAKE2b-256 c8e44caa48a054735332e41d5e85edc9b8d780b7c6da2bb8ed3093d02644f5a8

See more details on using hashes here.

Provenance

The following attestation bundles were made for mcpfang-0.1.3.tar.gz:

Publisher: publish.yml on Soydasm/mcpfang

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file mcpfang-0.1.3-py3-none-any.whl.

File metadata

  • Download URL: mcpfang-0.1.3-py3-none-any.whl
  • Upload date:
  • Size: 99.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.12

File hashes

Hashes for mcpfang-0.1.3-py3-none-any.whl
Algorithm Hash digest
SHA256 0e5e28a54d97dbb43e85f4d13e48c5ba71d493e50bf2a09f93d5272f697e3d97
MD5 4379002e275edde1f2ee2650c61c378d
BLAKE2b-256 4cc83a2bf8204a948cdae574ae33ca334d9c1d5e80524b893c40d98eccad996e

See more details on using hashes here.

Provenance

The following attestation bundles were made for mcpfang-0.1.3-py3-none-any.whl:

Publisher: publish.yml on Soydasm/mcpfang

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page