AI Dependency Intelligence — scan, govern, and visualize AI in your code.
ScanLLM
Know every AI dependency. Enforce every policy.
Website · Docs · Quick Start · Issues
ScanLLM scans your codebase to discover every AI/LLM dependency — SDK imports, model references, API keys, agent configs, vector databases — and maps them into an interactive dependency graph with risk scores and OWASP LLM Top 10 coverage.
Think "Snyk for AI systems." No six-figure contract required.
Quick Start
```
pip install scanllm
scanllm scan .
scanllm ui    # launch local dashboard
```
That's it. Three commands, zero config.
What It Finds
- 30+ AI providers — OpenAI, Anthropic, Google, Cohere, Mistral, AWS Bedrock, Azure OpenAI, and more
- 200+ detection patterns — imports, SDK calls, model parameters, config references, env vars
- OWASP LLM Top 10 — prompt injection, excessive agency, supply chain, sensitive data disclosure
- Hardcoded secrets — AI API keys across 30+ providers (`OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, ...)
- Agent risks — overprivileged tools, missing human-in-the-loop, broad MCP server configs
Features
| Feature | Description |
|---|---|
| AI Dependency Discovery | AST-level Python analysis, JS/TS scanning, config/dependency/notebook/secret detection across 7 specialized scanners |
| Interactive Dependency Graph | Visualize how LLM providers, vector DBs, frameworks, and agents connect — powered by React Flow |
| Risk Scoring | 0-100 score with A-F grades based on secrets, OWASP findings, provider concentration, and safety gaps |
| Policy as Code | YAML rules that run in CI/CD — exit code 1 means the build fails |
| AI-BOM | CycloneDX 1.6 ML-BOM export for SOC 2, EU AI Act, and NIST AI RMF compliance |
| Drift Detection | Compare scans over time to catch new AI dependencies before they reach production |
| Auto-Fix Suggestions | Remediation guidance for every finding |
| Local Dashboard | scanllm ui launches an interactive web UI on localhost |
| SARIF Output | Upload results to GitHub Code Scanning |
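The AI-BOM export targets CycloneDX 1.6, which models ML assets as components of type `machine-learning-model`. A minimal export might look roughly like this (a sketch of the CycloneDX core schema; the component values are illustrative, not actual ScanLLM output):

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "components": [
    {
      "type": "machine-learning-model",
      "name": "gpt-4o",
      "supplier": { "name": "OpenAI" }
    }
  ]
}
```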
CLI Commands
```
scanllm scan <path>          Scan a repo or directory for AI dependencies
scanllm init                 Initialize a .scanllm.yaml config in your project
scanllm policy check         Validate against policy rules (CI/CD gate)
scanllm diff <a> <b>         Compare two scan results for drift detection
scanllm report pdf <scan>    Generate PDF executive report
scanllm report aibom <scan>  Export CycloneDX AI-BOM (JSON)
scanllm ui                   Launch local web dashboard
scanllm watch                Watch for file changes and re-scan automatically
scanllm fix                  Show auto-fix suggestions for findings
```
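A `.scanllm.yaml` policy file might look roughly like the sketch below. The field names here are hypothetical, shown only to illustrate the policy-as-code idea; run `scanllm init` to generate the canonical schema:

```yaml
# Hypothetical policy sketch — field names are illustrative.
policies:
  - id: no-hardcoded-keys
    description: Fail the build on any hardcoded AI API key
    severity: critical
  - id: pin-model-versions
    description: Require explicit model version pins
    severity: high
fail_on: high   # exit with code 1 at this severity or above
```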
Scan Options
```
scanllm scan . --output json       # JSON output
scanllm scan . --output sarif      # SARIF for GitHub Code Scanning
scanllm scan . --output cyclonedx  # CycloneDX AI-BOM
scanllm scan . --severity high     # Only high+ severity findings
scanllm scan . --full-scan         # Include all file types
```
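SARIF is a standard machine-readable format (results live under `runs[].results[]` with a `level` per finding), so the output of `--output sarif` can be post-processed with a few lines of Python. A minimal sketch, assuming only the SARIF 2.1.0 structure and not any ScanLLM-specific fields:

```python
import json
from collections import Counter

def count_sarif_findings(sarif_text: str) -> Counter:
    """Tally results by severity level in a SARIF 2.1.0 log."""
    log = json.loads(sarif_text)
    levels = Counter()
    for run in log.get("runs", []):
        for result in run.get("results", []):
            levels[result.get("level", "warning")] += 1
    return levels

# Tiny inline SARIF log standing in for real `scanllm scan` output.
sample = json.dumps({
    "version": "2.1.0",
    "runs": [{
        "tool": {"driver": {"name": "scanllm"}},
        "results": [
            {"ruleId": "LLM05", "level": "error"},
            {"ruleId": "LLM10", "level": "warning"},
        ],
    }],
})
print(count_sarif_findings(sample))  # Counter({'error': 1, 'warning': 1})
```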
Why Not Snyk / Cycode / Promptfoo?
| | ScanLLM | Cycode / Noma | Snyk | Promptfoo |
|---|---|---|---|---|
| Code-level AI discovery | Yes | Partial | Limited | No |
| Interactive dependency graph | Yes | No | No | No |
| OWASP LLM Top 10 mapping | Yes | Yes | No | Partial |
| AI-BOM (CycloneDX) | Yes | No | No | No |
| Open source CLI | Yes | No | No | Yes |
| Zero-config setup | Yes | No | No | Yes |
| Price | Free / Team tier | $50K+/yr | $25K+/yr | Free |
In independent testing, Snyk detected only 10 of 26 LLM-specific vulnerabilities. Cycode and Noma require six-figure contracts. Promptfoo tests prompts but does not scan codebases. ScanLLM does code-level discovery, dependency graphing, and OWASP mapping in one tool.
CI/CD Integration
GitHub Actions
```yaml
name: ScanLLM AI Security
on:
  push:
    branches: [main]
  pull_request:
    branches: [main]
jobs:
  scanllm:
    runs-on: ubuntu-latest
    permissions:
      security-events: write
      pull-requests: write
    steps:
      - uses: actions/checkout@v4
      - uses: isunilsharma/scanllm@v2
        with:
          severity_threshold: medium
          fail_on_policy_violation: true
          upload_sarif: true
```
Pre-commit
```yaml
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/isunilsharma/scanllm
    rev: v2.0.0
    hooks:
      - id: scanllm-policy-check
      - id: scanllm-secret-check
```
OWASP LLM Top 10 Coverage
| ID | Vulnerability | What ScanLLM Detects |
|---|---|---|
| LLM01 | Prompt Injection | User input in f-strings/.format() passed to LLM calls |
| LLM02 | Sensitive Info Disclosure | Credentials and PII in system prompts |
| LLM03 | Supply Chain | Unverified model sources, outdated AI packages |
| LLM05 | Improper Output Handling | eval(llm_response), unsanitized output to SQL/shell |
| LLM06 | Excessive Agency | Broad agent tool access, missing human-in-the-loop |
| LLM07 | System Prompt Leakage | API keys in prompt templates |
| LLM08 | Vector/Embedding Weaknesses | Unauthenticated vector DB connections |
| LLM10 | Unbounded Consumption | Missing max_tokens, no rate limiting |
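To make the LLM05 row concrete, here is a toy AST check in the spirit of the table (not ScanLLM's actual implementation) that flags `eval()` calls whose argument is a variable rather than a literal, the classic `eval(llm_response)` smell:

```python
import ast

def find_unsafe_eval(source: str) -> list[int]:
    """Return line numbers of eval() calls on non-literal arguments,
    a common LLM05 (Improper Output Handling) smell."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"
                and node.args
                and not isinstance(node.args[0], ast.Constant)):
            hits.append(node.lineno)
    return hits

snippet = (
    "reply = client.chat(prompt)\n"
    "result = eval(reply)\n"      # flagged: evaluating model output
    "ok = eval('1 + 1')\n"        # not flagged: constant argument
)
print(find_unsafe_eval(snippet))  # [2]
```

A production scanner walks the full AST with data-flow tracking, but the principle is the same: static analysis can see where untrusted model output reaches a dangerous sink.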
Try It on a Demo Project
```
git clone https://github.com/isunilsharma/scanllm.git
cd scanllm/demo/sample_project
pip install scanllm
scanllm scan .
```
The demo project contains intentional AI security issues to showcase detection capabilities. See demo/ for details.
For Teams (Cloud)
The free CLI covers individual and single-repo use. For teams that need organizational visibility:
- Multi-repo scanning with a unified dashboard
- Historical trend tracking and drift alerts
- Compliance reports (PDF, AI-BOM) for auditors
- Policy enforcement across all repositories
- SSO and team management
Learn more at scanllm.ai.
Self-Hosted Setup
Docker Compose
```
git clone https://github.com/isunilsharma/scanllm.git
cd scanllm
docker compose up --build
```
- Frontend: http://localhost:3000
- Backend API: http://localhost:8001
- API Docs: http://localhost:8001/docs
Manual
```
# Backend
cd backend && pip install -r requirements.txt
uvicorn app.main:app --reload --port 8001

# Frontend
cd frontend && npm install --legacy-peer-deps && npm start
```
Contributing
Contributions are welcome! See CONTRIBUTING.md for setup instructions, coding standards, and PR guidelines.
Highest-impact contributions: adding AI detection patterns to config/ai_signatures.yaml. Every new AI framework, model, or tool you add helps the entire community.
License
Download files
File details
Details for the file scanllm-2.3.2.tar.gz.
File metadata
- Download URL: scanllm-2.3.2.tar.gz
- Upload date:
- Size: 119.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `626bb4375de80d918eac87bf19c70706c3de647bfee736efab5ecdf17c53b7d3` |
| MD5 | `4d71789df456e2b7624a98ef31400cdc` |
| BLAKE2b-256 | `eb7bbc81558e4a8999bb8250e6bb3c56efba1653d934b0d1daba739178d04a61` |
File details
Details for the file scanllm-2.3.2-py3-none-any.whl.
File metadata
- Download URL: scanllm-2.3.2-py3-none-any.whl
- Upload date:
- Size: 143.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.11.15
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b004c73bf5600e219c7a103f5b7753a559319eb8a0005b4d76bcdabdf9a93341` |
| MD5 | `5fcabcac312ecc68557dc51927b91097` |
| BLAKE2b-256 | `26268f8b3c3291fcb35d69d07082d69c95c22a93911abe706405b8287bb9dcc2` |