
Prompt injection scanner CLI - substring, unicode, secrets, and ML detection


Parry


Prompt injection scanner for Claude Code hooks. Scans tool inputs and outputs for injection attacks, secrets, and data exfiltration attempts.

Early development — this tool is under active development and may produce bugs or false positives. Tested on Linux/macOS.

Prerequisites

The ML models are gated on HuggingFace. Before installing:

  1. Create an account at huggingface.co
  2. Accept the DeBERTa v3 license (required for all modes)
  3. For full mode: also accept the Llama Prompt Guard 2 license (Meta approval required)
  4. Create an access token at huggingface.co/settings/tokens

Usage

Add to ~/.claude/settings.json:

With uvx:

{
  "hooks": {
    "PreToolUse": [{ "command": "uvx parry-guard hook", "timeout": 1000 }],
    "PostToolUse": [{ "command": "uvx parry-guard hook", "timeout": 5000 }],
    "UserPromptSubmit": [{ "command": "uvx parry-guard hook", "timeout": 2000 }]
  }
}

With pipx:

{
  "hooks": {
    "PreToolUse": [{ "command": "pipx run parry-guard hook", "timeout": 1000 }],
    "PostToolUse": [{ "command": "pipx run parry-guard hook", "timeout": 5000 }],
    "UserPromptSubmit": [{ "command": "pipx run parry-guard hook", "timeout": 2000 }]
  }
}

With parry-guard on PATH (via Nix, cargo install, or release binary):

{
  "hooks": {
    "PreToolUse": [{ "command": "parry-guard hook", "timeout": 1000 }],
    "PostToolUse": [{ "command": "parry-guard hook", "timeout": 5000 }],
    "UserPromptSubmit": [{ "command": "parry-guard hook", "timeout": 2000 }]
  }
}
Other installation methods

From source:

# Default (ONNX backend - statically linked, 5-6x faster than Candle)
cargo install --path crates/cli

# Candle backend (pure Rust, no native deps, portable)
cargo install --path crates/cli --no-default-features --features candle

Nix (home-manager)

# flake.nix
{
  inputs.parry.url = "github:vaporif/parry";

  outputs = { parry, ... }: {
    # pass parry to your home-manager config via extraSpecialArgs, overlays, etc.
  };
}
# home-manager module
{ inputs, pkgs, config, ... }: {
  imports = [ inputs.parry.homeManagerModules.default ];

  programs.parry = {
    enable = true;
    package = inputs.parry.packages.${pkgs.system}.default;  # onnx (default)
    # package = inputs.parry.packages.${pkgs.system}.candle;  # candle (pure Rust, portable, ~5-6x slower)
    hfTokenFile = config.sops.secrets.hf-token.path;
    ignorePaths = [ "/home/user/repos/parry" ];
    # claudeMdThreshold = 0.9;  # ML threshold for CLAUDE.md scanning (default 0.9)

    # scanMode = "full";  # fast (default) | full | custom

    # Custom models (auto-sets scanMode to "custom")
    # models = [
    #   { repo = "ProtectAI/deberta-v3-small-prompt-injection-v2"; }
    #   { repo = "meta-llama/Llama-Prompt-Guard-2-86M"; threshold = 0.5; }
    # ];
  };
}

Setup

1. Configure HuggingFace token

One of (first match wins):

export HF_TOKEN="hf_..."                          # direct value
export HF_TOKEN_PATH="/path/to/token"              # file path
# or place token at /run/secrets/hf-token-scan-injection
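For illustration, the first-match-wins order can be sketched in Python (the real CLI is Rust; this sketch mirrors only the resolution order described above):

```python
import os
from pathlib import Path

DEFAULT_TOKEN_PATH = "/run/secrets/hf-token-scan-injection"

def resolve_hf_token(env=None):
    """First-match-wins token resolution, mirroring the order above."""
    env = os.environ if env is None else env
    # 1. HF_TOKEN: direct value
    token = env.get("HF_TOKEN")
    if token:
        return token.strip()
    # 2. HF_TOKEN_PATH: file path (falls back to the default secret path)
    path = Path(env.get("HF_TOKEN_PATH", DEFAULT_TOKEN_PATH))
    if path.is_file():
        return path.read_text().strip()
    return None
```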

The daemon auto-starts on first scan, downloads the model on first run, and idles out after 30 minutes.

Note (non-Nix users): The Nix home-manager module wraps the binary with all config baked in via env vars. Without Nix, set env vars in your shell profile (e.g. HF_TOKEN, PARRY_IGNORE_PATHS, PARRY_SCAN_MODE) — the hook command inherits them. Alternatively, pass flags directly in the hook command: parry-guard --hf-token-path ~/.hf-token --ignore-path /home/user/safe hook. See Config for all options.

What each hook does

  • PreToolUse: 5-layer security — taint enforcement, CLAUDE.md scanning, exfil blocking, sensitive path blocking, input content injection scanning (Write/Edit/Bash/MCP tools)
  • PostToolUse: Scans tool output for injection/secrets, auto-taints project on detection
  • UserPromptSubmit: Audits .claude/ directory for dangerous permissions, injected commands, hook scripts
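To make the hook contract concrete, here is a minimal Python sketch assuming Claude Code's hook protocol (JSON payload on stdin; exit code 0 allows the action, exit code 2 blocks it and feeds stderr back to Claude). The is_suspicious check is a hypothetical stand-in for parry-guard's layered scan:

```python
import json
import sys

def is_suspicious(text: str) -> bool:
    """Hypothetical stand-in for parry-guard's layered scan."""
    return "ignore previous instructions" in text.lower()

def run_hook(payload: dict) -> int:
    """Return the hook exit code: 0 allows the action, 2 blocks it."""
    tool_input = json.dumps(payload.get("tool_input", {}))
    if is_suspicious(tool_input):
        print("parry: possible prompt injection detected", file=sys.stderr)
        return 2  # Claude Code treats exit code 2 as a block
    return 0

if __name__ == "__main__":
    sys.exit(run_hook(json.load(sys.stdin)))
```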

Daemon & Cache

The daemon keeps ML models in memory and can be run standalone with parry-guard serve --idle-timeout 1800. Hook calls auto-start it if not running.

Scan results are cached in ~/.parry-guard/scan-cache.redb (30-day TTL, ~8ms cache hits vs ~70ms+ inference). Cache is shared across projects and pruned hourly.
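A content-addressed TTL cache of this shape can be sketched as follows (in-memory Python sketch; the real store is redb on disk):

```python
import hashlib
import time

TTL_SECONDS = 30 * 24 * 3600  # 30-day TTL, as used by the scan cache

class ScanCache:
    """In-memory sketch of a content-addressed scan-result cache."""

    def __init__(self, ttl=TTL_SECONDS):
        self.ttl = ttl
        self._store = {}  # sha256(content) -> (verdict, inserted_at)

    def key(self, content: str) -> str:
        # Keying on a content hash is what lets results be shared across projects
        return hashlib.sha256(content.encode()).hexdigest()

    def get(self, content: str):
        entry = self._store.get(self.key(content))
        if entry is None:
            return None
        verdict, inserted_at = entry
        if time.time() - inserted_at > self.ttl:  # expired: prune and miss
            del self._store[self.key(content)]
            return None
        return verdict

    def put(self, content: str, verdict: bool):
        self._store[self.key(content)] = (verdict, time.time())
```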

Detection Layers

Multi-stage, fail-closed (if unsure, treat as unsafe):

  1. Unicode — invisible characters (PUA, unassigned codepoints), homoglyphs, RTL overrides
  2. Substring — Aho-Corasick matching for known injection phrases
  3. Secrets — 40+ regex patterns for credentials (AWS, GitHub/GitLab, cloud providers, database URIs, private keys, etc.)
  4. ML Classification — DeBERTa v3 transformer with text chunking (256 chars, 25 overlap) and head+tail strategy for long texts. Configurable threshold (default 0.7).
  5. Bash Exfiltration — tree-sitter AST analysis for data exfil: network sinks, command substitution, obfuscation (base64, hex, ROT13), DNS tunneling, cloud storage, 60+ sensitive paths, 40+ exfil domains
  6. Script Exfiltration — same source→sink analysis for script files across 16 languages
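The first three layers can be sketched in Python (the pattern lists below are illustrative stand-ins for the real, much larger lists, and the real substring layer uses Aho-Corasick rather than a plain loop):

```python
import re
import unicodedata

# Illustrative pattern sets -- the real lists are far larger
INJECTION_PHRASES = ["ignore previous instructions", "disregard all prior"]
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),    # AWS access key id
    re.compile(r"ghp_[A-Za-z0-9]{36}"), # GitHub personal access token
]

def unicode_layer(text: str) -> bool:
    """Layer 1: invisible and bidi-control characters."""
    for ch in text:
        # Cf = format (incl. RTL overrides), Co = private use, Cn = unassigned
        if unicodedata.category(ch) in ("Cf", "Co", "Cn"):
            return True
    return False

def substring_layer(text: str) -> bool:
    """Layer 2: known injection phrases (real impl uses Aho-Corasick)."""
    lowered = text.lower()
    return any(p in lowered for p in INJECTION_PHRASES)

def secrets_layer(text: str) -> bool:
    """Layer 3: credential regexes."""
    return any(p.search(text) for p in SECRET_PATTERNS)

def scan(text: str) -> bool:
    """Fail-closed: any layer firing marks the text unsafe."""
    return unicode_layer(text) or substring_layer(text) or secrets_layer(text)
```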

Scan modes

| Mode | Models | Latency/chunk | Backend |
|------|--------|---------------|---------|
| fast (default) | DeBERTa v3 | ~50-70ms | any |
| full | DeBERTa v3 + Llama Prompt Guard 2 | ~1.5s | candle only |
| custom | User-defined (~/.config/parry-guard/models.toml) | varies | any |

Use fast for interactive workflows; full for high-security or batch scanning (parry-guard diff --full). The two models cover different blind spots — DeBERTa v3 catches common injection patterns while Llama Prompt Guard 2 is better at subtle, context-dependent attacks (role-play jailbreaks, indirect injections). Running both as an OR ensemble reduces missed attacks at ~20x higher latency per chunk.
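The chunking and OR-ensemble logic can be sketched as follows (the score functions are hypothetical stand-ins for the two classifiers; chunk size and overlap match the values above):

```python
def chunk_text(text, size=256, overlap=25):
    """Split text into overlapping chunks (256 chars, 25-char overlap)."""
    chunks, step = [], size - overlap
    for start in range(0, max(len(text), 1), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

def ensemble_flags(text, models):
    """OR ensemble: flagged if ANY model crosses its threshold on ANY chunk.

    `models` is a list of (score_fn, threshold) pairs; each score_fn is a
    hypothetical stand-in for a classifier returning an injection score.
    """
    for chunk in chunk_text(text):
        for score_fn, threshold in models:
            if score_fn(chunk) >= threshold:
                return True
    return False
```

Running both models this way can only add detections relative to either model alone, which is why it trades latency for a lower miss rate.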

Note: full mode requires the candle backend — Llama Prompt Guard 2 does not ship an ONNX export. Build with --features candle --no-default-features to use full mode.

Config

Global flags

| Flag | Env | Default | Description |
|------|-----|---------|-------------|
| --threshold | PARRY_THRESHOLD | 0.7 | ML detection threshold (0.0–1.0) |
| --claude-md-threshold | PARRY_CLAUDE_MD_THRESHOLD | 0.9 | ML threshold for CLAUDE.md scanning (0.0–1.0) |
| --scan-mode | PARRY_SCAN_MODE | fast | ML scan mode: fast, full, custom |
| --hf-token | HF_TOKEN | | HuggingFace token (direct value) |
| --hf-token-path | HF_TOKEN_PATH | /run/secrets/hf-token-scan-injection | HuggingFace token file |
| --ignore-path | PARRY_IGNORE_PATHS | | Paths to skip scanning (comma-separated / repeatable) |

Subcommand flags

| Flag | Env | Default | Description |
|------|-----|---------|-------------|
| serve --idle-timeout | PARRY_IDLE_TIMEOUT | 1800 | Daemon idle timeout in seconds |
| diff --full | | false | Use ML scan instead of fast-only |
| diff -e, --extensions | | | Filter by file extension (comma-separated) |

Env-only

| Env | Default | Description |
|-----|---------|-------------|
| PARRY_LOG | warn | Tracing filter (trace, debug, info, warn, error) |
| PARRY_LOG_FILE | ~/.parry-guard/parry-guard.log | Override log file path |

Custom patterns: ~/.config/parry-guard/patterns.toml (add/remove sensitive paths, exfil domains, secret patterns). Custom models: ~/.config/parry-guard/models.toml (used with --scan-mode custom, see examples/models.toml).

ML Backends

One backend is always required (enforced at compile time). Nix default is ONNX (x86_64-linux, aarch64-linux, aarch64-darwin). Use candle package on other platforms.

| Feature | Description |
|---------|-------------|
| onnx-fetch | ONNX, statically linked (downloads ORT at build time). Default. |
| candle | Pure Rust ML. Portable, no native deps. ~5-6x slower. |
| onnx | ONNX; you provide ORT_DYLIB_PATH. |
| onnx-coreml | (experimental) ONNX with CoreML on Apple Silicon. |

# Build with Candle instead of ONNX
cargo build --no-default-features --features candle

Performance

Apple Silicon, release build, fast mode (DeBERTa v3 only). Candle is 5-6x slower than ONNX (default). Run just bench-candle / just bench-onnx to reproduce (requires HF_TOKEN).

| Scenario | ONNX (default) | Candle |
|----------|----------------|--------|
| Short text (1 chunk) | ~10ms | ~61ms |
| Medium text (2 chunks) | ~32ms | ~160ms |
| Long text (6 chunks) | ~136ms | ~683ms |
| Cold start (daemon + model load) | ~580ms | ~1s |
| Fast-scan short-circuit | ~7ms | ~7ms |
| Cached result | ~8ms | ~8ms |


Contributing

See CONTRIBUTING.md for development setup, commands, and contribution guidelines.

Credits

License

MIT

Llama Prompt Guard 2 (used in full scan mode) is licensed separately under the Llama 4 Community License. See LICENSE-LLAMA.
