
AI Dependency Intelligence — scan, govern, and visualize AI in your code.


ScanLLM

Know every AI dependency. Enforce every policy.


Website · Docs · Quick Start · Issues


ScanLLM scans your codebase to discover every AI/LLM dependency — SDK imports, model references, API keys, agent configs, vector databases — and maps them into an interactive dependency graph with risk scores and OWASP LLM Top 10 coverage.

Think "Snyk for AI systems." No six-figure contract required.

Quick Start

```bash
pip install scanllm
scanllm scan .
scanllm ui          # launch local dashboard
```

That's it. Three commands, zero config.

What It Finds

  • 30+ AI providers — OpenAI, Anthropic, Google, Cohere, Mistral, AWS Bedrock, Azure OpenAI, and more
  • 200+ detection patterns — imports, SDK calls, model parameters, config references, env vars
  • OWASP LLM Top 10 — prompt injection, excessive agency, supply chain, sensitive data disclosure
  • Hardcoded secrets — AI API keys across 30+ providers (OPENAI_API_KEY, ANTHROPIC_API_KEY, ...)
  • Agent risks — overprivileged tools, missing human-in-the-loop, broad MCP server configs
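As a rough illustration of the kind of hardcoded-key detection described above (the regexes below are simplified stand-ins, not ScanLLM's actual signatures):

```python
import re

# Simplified stand-in patterns; real signatures are broader and provider-specific.
SECRET_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "anthropic": re.compile(r"sk-ant-[A-Za-z0-9\-]{20,}"),
}

def find_hardcoded_keys(source: str) -> list[str]:
    """Return the provider names whose key pattern appears in the source text."""
    return [name for name, pat in SECRET_PATTERNS.items() if pat.search(source)]

sample = 'client = OpenAI(api_key="sk-' + "a" * 24 + '")'
print(find_hardcoded_keys(sample))  # -> ['openai']
```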

Features

| Feature | Description |
| --- | --- |
| AI Dependency Discovery | AST-level Python analysis, JS/TS scanning, config/dependency/notebook/secret detection across 7 specialized scanners |
| Interactive Dependency Graph | Visualize how LLM providers, vector DBs, frameworks, and agents connect — powered by React Flow |
| Risk Scoring | 0-100 score with A-F grades based on secrets, OWASP findings, provider concentration, and safety gaps |
| Policy as Code | YAML rules that run in CI/CD — exit code 1 fails the build |
| AI-BOM | CycloneDX 1.6 ML-BOM export for SOC 2, EU AI Act, and NIST AI RMF compliance |
| Drift Detection | Compare scans over time to catch new AI dependencies before they reach production |
| Auto-Fix Suggestions | Remediation guidance for every finding |
| Local Dashboard | `scanllm ui` launches an interactive web UI on localhost |
| SARIF Output | Upload results to GitHub Code Scanning |
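The AST-level discovery mentioned above can be sketched as follows. This is a minimal, hypothetical detector for illustration only, not ScanLLM's implementation:

```python
import ast

def find_llm_calls(source: str) -> list[int]:
    """Return line numbers of attribute calls ending in `.create`
    (e.g. client.chat.completions.create), a common LLM SDK pattern."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            if node.func.attr == "create":
                hits.append(node.lineno)
    return hits

code = """
client = OpenAI()
resp = client.chat.completions.create(model="gpt-4o", messages=[])
"""
print(find_llm_calls(code))  # -> [3]
```

AST matching like this is why imports and SDK calls are found even when they span aliases or wrappers that plain text search would miss.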

CLI Commands

```text
scanllm scan <path>          Scan a repo or directory for AI dependencies
scanllm init                 Initialize a .scanllm.yaml config in your project
scanllm policy check         Validate against policy rules (CI/CD gate)
scanllm diff <a> <b>         Compare two scan results for drift detection
scanllm report pdf <scan>    Generate PDF executive report
scanllm report aibom <scan>  Export CycloneDX AI-BOM (JSON)
scanllm ui                   Launch local web dashboard
scanllm watch                Watch for file changes and re-scan automatically
scanllm fix                  Show auto-fix suggestions for findings
```
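Conceptually, drift detection like `scanllm diff` reduces to comparing the dependency sets of two scans. A sketch with a hypothetical scan summary format:

```python
# Hypothetical scan summaries: AI dependencies found in two scans of the same repo.
baseline = {"openai", "chromadb"}
current = {"openai", "chromadb", "anthropic", "langchain"}

added = sorted(current - baseline)    # new AI dependencies since the baseline scan
removed = sorted(baseline - current)  # dependencies that disappeared

print(added)    # -> ['anthropic', 'langchain']
print(removed)  # -> []
```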

Scan Options

```bash
scanllm scan . --output json           # JSON output
scanllm scan . --output sarif          # SARIF for GitHub Code Scanning
scanllm scan . --output cyclonedx      # CycloneDX AI-BOM
scanllm scan . --severity high         # Only high+ severity findings
scanllm scan . --full-scan             # Include all file types
```
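The JSON output can then be post-processed in scripts. The findings schema below is an assumption for illustration, not the documented format:

```python
import json

# Hypothetical JSON scan output; the real schema may differ.
raw = json.dumps({
    "findings": [
        {"id": "LLM01-001", "severity": "high", "file": "app.py"},
        {"id": "SEC-OPENAI", "severity": "critical", "file": "config.py"},
        {"id": "LLM10-002", "severity": "low", "file": "bot.py"},
    ]
})

ORDER = {"low": 0, "medium": 1, "high": 2, "critical": 3}
scan = json.loads(raw)
high_plus = [f for f in scan["findings"] if ORDER[f["severity"]] >= ORDER["high"]]
print([f["id"] for f in high_plus])  # -> ['LLM01-001', 'SEC-OPENAI']
```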

Why Not Snyk / Cycode / Promptfoo?

| | ScanLLM | Cycode / Noma | Snyk | Promptfoo |
| --- | --- | --- | --- | --- |
| Code-level AI discovery | Yes | Partial | Limited | No |
| Interactive dependency graph | Yes | No | No | No |
| OWASP LLM Top 10 mapping | Yes | Yes | No | Partial |
| AI-BOM (CycloneDX) | Yes | No | No | No |
| Open source CLI | Yes | No | No | Yes |
| Zero-config setup | Yes | No | No | Yes |
| Price | Free / Team tier | $50K+/yr | $25K+/yr | Free |

Snyk found only 10/26 LLM-specific vulnerabilities in independent testing. Cycode and Noma require six-figure contracts. Promptfoo tests prompts but does not scan codebases. ScanLLM does code-level discovery, dependency graphing, and OWASP mapping in one tool.

CI/CD Integration

GitHub Actions

```yaml
name: ScanLLM AI Security
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  scanllm:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: isunilsharma/scanllm@v2
        with:
          severity_threshold: medium
          fail_on_policy_violation: true
          upload_sarif: true
```

Pre-commit

```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/isunilsharma/scanllm
    rev: v2.0.0
    hooks:
      - id: scanllm-policy-check
      - id: scanllm-secret-check
```

OWASP LLM Top 10 Coverage

| ID | Vulnerability | What ScanLLM Detects |
| --- | --- | --- |
| LLM01 | Prompt Injection | User input in f-strings/`.format()` passed to LLM calls |
| LLM02 | Sensitive Info Disclosure | Credentials and PII in system prompts |
| LLM03 | Supply Chain | Unverified model sources, outdated AI packages |
| LLM05 | Improper Output Handling | `eval(llm_response)`, unsanitized output to SQL/shell |
| LLM06 | Excessive Agency | Broad agent tool access, missing human-in-the-loop |
| LLM07 | System Prompt Leakage | API keys in prompt templates |
| LLM08 | Vector/Embedding Weaknesses | Unauthenticated vector DB connections |
| LLM10 | Unbounded Consumption | Missing `max_tokens`, no rate limiting |
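Two of these patterns in deliberately unsafe Python form. The `ask_llm` helper is a stub standing in for a real LLM call, not any actual API:

```python
import ast

def ask_llm(prompt: str) -> str:
    """Stub for a real LLM call; always returns a list literal as text."""
    return "[1, 2, 3]"

user_input = "ignore previous instructions"

# LLM01 Prompt Injection: untrusted input interpolated straight into the prompt.
prompt = f"Summarize this ticket: {user_input}"  # would be flagged

# LLM05 Improper Output Handling: executing raw model output.
unsafe = eval(ask_llm(prompt))                   # flagged as eval(llm_response)

# Safer alternative: parse the output as data instead of executing it.
safe = ast.literal_eval(ask_llm(prompt))
print(unsafe, safe)  # -> [1, 2, 3] [1, 2, 3]
```

The safer variant only accepts literal values, so model output that contains code (rather than data) raises an error instead of running.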

Try It on a Demo Project

```bash
git clone https://github.com/isunilsharma/scanllm.git
cd scanllm/demo/sample_project
pip install scanllm
scanllm scan .
```

The demo project contains intentional AI security issues to showcase detection capabilities. See demo/ for details.

For Teams (Cloud)

The free CLI covers individual and single-repo use. For teams that need organizational visibility:

  • Multi-repo scanning with a unified dashboard
  • Historical trend tracking and drift alerts
  • Compliance reports (PDF, AI-BOM) for auditors
  • Policy enforcement across all repositories
  • SSO and team management

Learn more at scanllm.ai.

Self-Hosted Setup

Docker Compose

```bash
git clone https://github.com/isunilsharma/scanllm.git
cd scanllm
docker compose up --build
```

Manual

```bash
# Backend
cd backend && pip install -r requirements.txt
uvicorn app.main:app --reload --port 8001

# Frontend
cd frontend && npm install --legacy-peer-deps && npm start
```

Contributing

Contributions are welcome! See CONTRIBUTING.md for setup instructions, coding standards, and PR guidelines.

The highest-impact contribution is adding AI detection patterns to `config/ai_signatures.yaml`. Every new AI framework, model, or tool you add helps the entire community.

License

MIT


scanllm.ai · hello@scanllm.ai · Report a Bug
