vallm

A complete toolkit for validating LLM-generated code.

vallm validates code proposals through a four-tier pipeline — from millisecond syntax checks to LLM-as-judge semantic review — before a single line ships.

Features

  • Multi-language AST parsing via tree-sitter (165+ languages)
  • Syntax validation with ast.parse (Python) and tree-sitter error detection
  • Import resolution checking for Python, JavaScript/TypeScript, Go, Rust, Java, C/C++
  • Complexity metrics via radon (Python) and lizard (16 languages)
  • Security scanning with language-specific patterns and optional bandit integration
  • LLM-as-judge semantic review via Ollama, litellm, or direct HTTP
  • Code graph analysis — import/call graph diffing for structural regression detection
  • AST similarity scoring with normalized fingerprinting
  • Pluggy-based plugin system for custom validators
  • Rich CLI with JSON/text output formats
  • MCP integration — Model Context Protocol server for LLM tool calling

Supported Languages

Language Syntax Imports Complexity Security
Python ✅ AST + tree-sitter ✅ Full resolution (22 methods) ✅ radon + lizard ✅ bandit + patterns
JavaScript ✅ tree-sitter ✅ Node.js builtins ✅ lizard ✅ XSS, eval patterns
TypeScript ✅ tree-sitter ✅ Node.js builtins ✅ lizard ✅ XSS, eval patterns
Go ✅ tree-sitter ✅ stdlib + modules ✅ lizard ✅ SQL injection, exec
Rust ✅ tree-sitter ✅ crates ✅ lizard ✅ unsafe, unwrap
Java ✅ tree-sitter ✅ stdlib packages ✅ lizard ✅ Runtime.exec, SQL
C/C++ ✅ tree-sitter ✅ std headers ✅ lizard ✅ buffer overflow, system
Ruby ✅ tree-sitter ⚠️ Limited ✅ lizard ⚠️ Limited
PHP ✅ tree-sitter ⚠️ Limited ✅ lizard ⚠️ Limited
Swift ✅ tree-sitter ⚠️ Limited ✅ lizard ⚠️ Limited
Kotlin ✅ tree-sitter ⚠️ Limited ✅ lizard ⚠️ Limited
Scala ✅ tree-sitter ⚠️ Limited ✅ lizard ⚠️ Limited

Installation

pip install vallm

With optional dependencies:

pip install vallm[all]        # Everything
pip install vallm[llm]        # Ollama + litellm for semantic review
pip install vallm[security]   # bandit integration
pip install vallm[semantic]   # CodeBERTScore
pip install vallm[graph]      # NetworkX graph analysis

Quick Start

Validate Entire Project

# Install with LLM support
pip install vallm[llm]

# Setup Ollama (for semantic review)
ollama pull qwen2.5-coder:7b
ollama serve

# Validate entire project recursively
vallm batch . --recursive --semantic --model qwen2.5-coder:7b

# Fast validation for quick feedback (skip imports and complexity)
vallm batch . --recursive --no-imports --no-complexity

# Generate validation report in TOON format
vallm batch . --recursive --output toon > ./project/validation.toon

Python API

from vallm import Proposal, validate, VallmSettings

code = """
def fibonacci(n: int) -> list[int]:
    if n <= 0:
        return []
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib
"""

proposal = Proposal(code=code, language="python")
result = validate(proposal)
print(f"Verdict: {result.verdict.value}")  # pass / review / fail
print(f"Score: {result.weighted_score:.2f}")
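
The verdict is derived from the weighted score. A minimal sketch of the threshold logic, using the default pass_threshold (0.8) and review_threshold (0.5) from the Configuration section (the helper function is illustrative, not vallm's API):

```python
def score_to_verdict(score: float,
                     pass_threshold: float = 0.8,
                     review_threshold: float = 0.5) -> str:
    """Map a weighted score to a verdict (illustrative helper, not vallm API)."""
    if score >= pass_threshold:
        return "pass"
    if score >= review_threshold:
        return "review"
    return "fail"

print(score_to_verdict(0.92))  # pass
print(score_to_verdict(0.65))  # review
print(score_to_verdict(0.45))  # fail
```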

CLI Commands Reference

# Batch validation (best for entire projects)
vallm batch . --recursive --semantic --model qwen2.5-coder:7b
vallm batch src/ --recursive --include "*.py,*.js" --exclude "*/test/*"
vallm batch . --recursive --format json --fail-fast
vallm batch . --recursive --verbose --show-issues  # Detailed per-file results

# Output formats for batch results
vallm batch . --recursive --format json   # Machine-readable JSON
vallm batch . --recursive --format yaml   # YAML format
vallm batch . --recursive --format toon   # Compact TOON format
vallm batch . --recursive --format text   # Plain text

# Single file validation
vallm validate --file mycode.py --semantic --model qwen2.5-coder:7b
vallm validate --file app.js --security
vallm validate --file mycode.py --format json  # JSON output

# Quick syntax check only
vallm check mycode.py
vallm check src/main.go

# Configuration and info
vallm info

Real-World Usage Examples

1. Fast Project Validation (Recommended for CI/CD)

# Quick syntax check - excludes .git/ and other system files automatically
vallm batch . --recursive --no-imports --no-complexity

# Output: Excluded 30246 files by .gitignore, Validating 169 files...
# ✓ 83 files passed, ✗ 115 files failed (mostly non-code files)

2. Generate Validation Report

# Save TOON format report to project directory
vallm batch . --recursive --output toon > ./project/validation.toon

# Save JSON report for CI/CD integration
vallm batch . --recursive --output json > ./project/validation.json

# Save detailed text report with security checks
vallm batch . --recursive --security --output text > ./project/validation-report.txt

3. Selective File Validation

# Validate only Python and JavaScript files
vallm batch . --recursive --include "*.py,*.js" --exclude "*/test/*"

# Validate specific directory with custom patterns
vallm batch src/ --recursive --include "*.py" --exclude "*/__pycache__/*"

# Validate with custom gitignore override
vallm batch . --recursive --no-gitignore --exclude "*.log,tmp/*"

4. Full Pipeline with LLM Review

# Complete validation with semantic analysis
vallm batch . --recursive --semantic --model qwen2.5-coder:7b --security

# Export full results with per-file details
vallm batch . --recursive --semantic --model qwen2.5-coder:7b --output json > full-validation.json

5. Development Workflow Integration

# Pre-commit validation (fast)
vallm batch . --recursive --no-imports --no-complexity --fail-fast

# Feature branch validation (medium)
vallm batch src/ --recursive --no-complexity --show-issues

# Release validation (full)
vallm batch . --recursive --semantic --model qwen2.5-coder:7b --security --verbose

Fast Validation Options

When validating large projects (100+ files), use these options to speed up validation:

# Fastest - syntax only (skip imports and complexity)
vallm batch . --recursive --no-imports --no-complexity

# Fast - skip import validation (often the slowest)
vallm batch . --recursive --no-imports

# Note: the --parallel option was removed in v0.1.16 due to module conflicts;
# use --no-imports --no-complexity instead for better performance

# Combine for maximum speed
vallm batch . --recursive --no-imports --no-complexity

# Quick syntax check only (single files)
vallm check src/proxym/config.py

Option Speed Impact Description
--no-imports High Skip import resolution (slowest validator)
--no-complexity Medium Skip complexity analysis (radon/lizard)
--security Low Add security checks (fast pattern matching)
--semantic Very High LLM semantic review (requires Ollama/OpenAI)

Performance Benchmarks:

  • Fast mode: --no-imports --no-complexity - ~100 files/second
  • Normal mode: Default settings - ~20 files/second
  • Full mode: With --semantic - ~2 files/second

Recommendation for CI/CD:

# Fast validation for quick feedback (PR checks)
vallm batch src/ --recursive --no-imports --no-complexity --fail-fast

# Full validation before merge (quality gate)
vallm batch src/ --recursive --security

# Release validation with LLM review
vallm batch . --recursive --semantic --model qwen2.5-coder:7b

Generate Validation Summary File

# JSON summary for entire project (with per-file details and issues)
vallm batch . --recursive --output json > validation-summary.json

# YAML summary for src/ directory (with per-file details and issues)
vallm batch src/ --recursive --output yaml > validation-summary.yaml

# TOON format (compact, human-readable) with per-file details
vallm batch . --recursive --output toon > validation-summary.toon

# Text format with security checks
vallm batch . --recursive --output text --security > validation-report.txt

# Full validation with semantic review - save to file
vallm batch . --recursive --semantic --model qwen2.5-coder:7b --output json > full-validation.json

# Tee output to both console and file
vallm batch . --recursive --output json | tee validation-summary.json

# Save to project directory for analysis integration
vallm batch . --recursive --output toon > ./project/validation.toon

Output Structure (JSON/YAML/TOON formats now include per-file details):

{
  "summary": {
    "total_files": 146,
    "passed": 145,
    "failed": 1
  },
  "files": [
    {
      "path": "src/proxym/config.py",
      "language": "python",
      "verdict": "fail",
      "score": 0.45,
      "issues_count": 3,
      "issues": [
        {
          "validator": "syntax",
          "severity": "error",
          "message": "Invalid syntax at line 42",
          "line": 42,
          "column": 15
        },
        {
          "validator": "imports",
          "severity": "error", 
          "message": "Module 'requests' not found",
          "line": 5,
          "column": 0
        }
      ]
    }
  ],
  "failed_files": [
    {"path": "src/proxym/config.py", "error": "Validation fail"}
  ]
}

The same summary in YAML:

---
summary:
  total_files: 146
  passed: 145
  failed: 1

files:
  - path: src/proxym/config.py
    language: python
    verdict: fail
    score: 0.45
    issues_count: 3
    issues:
      - validator: syntax
        severity: error
        message: "Invalid syntax at line 42"
        line: 42
        column: 15
      - validator: imports
        severity: error
        message: "Module 'requests' not found"
        line: 5

The same summary in the compact TOON format:

# vallm batch | 146f | 145✓ 1✗

SUMMARY:
  total: 146
  passed: 145
  failed: 1

FILES:
  [python]
    ✗ src/proxym/config.py
      verdict: fail
      score: 0.45
      issues: 2
        [error] syntax: Invalid syntax at line 42@42
        [error] imports: Module 'requests' not found@5
    ✓ src/proxym/ctl.py
      verdict: pass
      score: 0.92
      issues: 0

FAILED:
  ✗ src/proxym/config.py: Validation fail
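
In CI, the JSON summary above can be turned into a pass/fail gate with a few lines of stdlib Python (the key names follow the structure shown above; the gate function is an illustrative helper, not part of vallm):

```python
import json
import sys

def gate(report_json: str) -> int:
    """Return a CI exit code from a vallm batch JSON summary string."""
    report = json.loads(report_json)
    # Echo each failed file to stderr for the CI log.
    for entry in report.get("failed_files", []):
        print(f"FAILED: {entry['path']}: {entry['error']}", file=sys.stderr)
    return 1 if report["summary"]["failed"] else 0

# In CI, generate the summary first:
#   vallm batch . --recursive --output json > validation-summary.json
# then read the file and call sys.exit(gate(...)).
sample = ('{"summary": {"total_files": 2, "passed": 1, "failed": 1}, '
          '"failed_files": [{"path": "src/proxym/config.py", '
          '"error": "Validation fail"}]}')
print(gate(sample))  # 1 -> the CI job should fail
```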

Batch Command Options

Option Short Description
--recursive -r Recurse into subdirectories
--include File patterns to include (e.g., "*.py,*.js")
--exclude File patterns to exclude
--use-gitignore Respect .gitignore patterns (default: true)
--format -f Output format: rich, json, yaml, toon, text
--fail-fast -x Stop on first failure
--semantic Enable LLM-as-judge semantic review
--security Enable security checks
--model -m LLM model for semantic review
--verbose -v Show detailed validation results for each file
--show-issues -i Show issues for failed files

With Ollama (LLM-as-judge)

# 1. Install and start Ollama
ollama pull qwen2.5-coder:7b
ollama serve

# 2. Run with semantic review
vallm validate --file mycode.py --semantic

Or configure semantic review through the Python API:

from vallm import Proposal, validate, VallmSettings

settings = VallmSettings(
    enable_semantic=True,
    llm_provider="ollama",
    llm_model="qwen2.5-coder:7b",
)

proposal = Proposal(
    code=new_code,
    language="python",
    reference_code=existing_code,  # optional: compare against reference
)
result = validate(proposal, settings)

Validation Pipeline

Tier Speed Validators What it catches
1 ms syntax, imports Parse errors, missing modules
2 seconds complexity, security High CC, dangerous patterns
3 seconds semantic (LLM) Logic errors, poor practices
4 minutes regression (tests) Behavioral regressions

The pipeline fails fast — Tier 1 errors stop execution immediately.
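
The fail-fast behavior can be sketched as follows, with stub validators standing in for vallm's real ones (all names here are illustrative, not vallm internals):

```python
from dataclasses import dataclass

@dataclass
class Result:
    validator: str
    tier: int
    score: float

def run_pipeline(validators, code):
    """Run validators tier by tier; abort as soon as a Tier 1 check fails."""
    results = []
    for tier in sorted({v_tier for _, v_tier, _ in validators}):
        for name, v_tier, check in validators:
            if v_tier != tier:
                continue
            score = check(code)
            results.append(Result(name, tier, score))
            if tier == 1 and score == 0.0:
                return results  # fail fast: skip the expensive later tiers
    return results

# Stub checks: "syntax" fails on code containing "(!", the rest always pass.
validators = [
    ("syntax", 1, lambda c: 0.0 if "(!" in c else 1.0),
    ("complexity", 2, lambda c: 1.0),
    ("semantic", 3, lambda c: 1.0),
]
print([r.validator for r in run_pipeline(validators, "def f(!): pass")])  # ['syntax']
```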

Configuration

Via environment variables (VALLM_*), vallm.toml, or pyproject.toml [tool.vallm]:

# vallm.toml
pass_threshold = 0.8
review_threshold = 0.5
max_cyclomatic_complexity = 15
enable_semantic = true
llm_provider = "ollama"
llm_model = "qwen2.5-coder:7b"
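
The same settings can be supplied as environment variables. Assuming the usual pydantic-settings upper-case mapping of the VALLM_ prefix (the exact variable names are an assumption based on that convention), the TOML above would correspond to:

```shell
export VALLM_PASS_THRESHOLD=0.8
export VALLM_REVIEW_THRESHOLD=0.5
export VALLM_MAX_CYCLOMATIC_COMPLEXITY=15
export VALLM_ENABLE_SEMANTIC=true
export VALLM_LLM_PROVIDER=ollama
export VALLM_LLM_MODEL=qwen2.5-coder:7b
```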

MCP Integration

vallm provides Model Context Protocol (MCP) server integration, exposing validation tools as MCP endpoints for LLM tool calling.

Starting the MCP Server

# Start the MCP server from project root
python3 mcp_server.py

# Or start the packaged module directly
python3 -m mcp.server.self_server

Claude Desktop Configuration

Add to your claude_desktop_config.json:

{
  "mcpServers": {
    "vallm": {
      "command": "python3",
      "args": ["/path/to/vallm/mcp_server.py"],
      "env": {
        "PYTHONPATH": "/path/to/vallm/src"
      }
    }
  }
}

Available MCP Tools

Tool Description Parameters
validate_syntax Multi-language syntax checking code, language, filename
validate_imports Import resolution validation code, language, filename
validate_security Security issue detection code, language, filename
validate_code Full pipeline validation code, language, filename, reference_code, enable_* flags

Example Tool Calls

{
  "method": "tools/call",
  "params": {
    "name": "validate_security",
    "arguments": {
      "code": "eval('1+1')",
      "language": "python"
    }
  }
}
{
  "method": "tools/call", 
  "params": {
    "name": "validate_code",
    "arguments": {
      "code": "def test(): pass",
      "language": "python",
      "enable_syntax": true,
      "enable_security": true,
      "enable_complexity": false
    }
  }
}

Testing MCP Integration

# Test all MCP tools
python3 test_mcp.py

# Test individual tools
PYTHONPATH=src python3 -c "from mcp.server._tools_vallm import validate_syntax; print(validate_syntax('print(\"hello\")', 'python')['verdict'])"

# Run the Docker e2e flow
bash mcp/tests/run_e2e.sh

Response Format

All MCP tools return a consistent JSON response:

{
  "success": true,
  "validator": "security",
  "score": 0.3,
  "weight": 1.5,
  "confidence": 0.9,
  "verdict": "fail",
  "issues": [
    {
      "message": "Use of eval() detected",
      "severity": "warning",
      "line": 1,
      "column": 0,
      "rule": "security.eval"
    }
  ],
  "details": {}
}
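
A client can route on the verdict field of that response. A minimal sketch (the dict below mirrors the response structure shown above; the summarize helper is illustrative):

```python
def summarize(response: dict) -> str:
    """Condense an MCP tool response into a one-line status string."""
    if not response.get("success"):
        return "tool error"
    issues = ", ".join(i["message"] for i in response.get("issues", []))
    status = f'{response["validator"]}: {response["verdict"]} (score {response["score"]})'
    return f"{status} - {issues}" if issues else status

resp = {
    "success": True, "validator": "security", "score": 0.3,
    "verdict": "fail",
    "issues": [{"message": "Use of eval() detected", "severity": "warning"}],
}
print(summarize(resp))  # security: fail (score 0.3) - Use of eval() detected
```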

Plugin System

Write custom validators using pluggy:

from vallm.hookspecs import hookimpl
from vallm.scoring import ValidationResult

class MyValidator:
    tier = 2
    name = "custom"
    weight = 1.0

    @hookimpl
    def validate_proposal(self, proposal, context):
        # Your validation logic
        return ValidationResult(validator=self.name, score=1.0, weight=self.weight)

Register via pyproject.toml:

[project.entry-points."vallm.validators"]
custom = "mypackage.validators:MyValidator"

Multi-Language Support

vallm supports 30+ programming languages via tree-sitter parsers:

Auto-Detection

from vallm import detect_language, Language

# Auto-detect from file path
lang = detect_language("main.rs")  # → Language.RUST
print(lang.display_name)  # "Rust"
print(lang.is_compiled)     # True

CLI with Auto-Detection

# Language auto-detected from file extension
vallm validate --file script.py      # → Python
vallm check main.go                   # → Go  
vallm validate --file lib.rs          # → Rust

# Batch validation with mixed languages
vallm batch src/ --recursive --include "*.py,*.js,*.ts,*.go,*.rs"
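
Detection of this kind reduces to an extension lookup. An illustrative stand-alone sketch (this table and function are hypothetical, not vallm's actual detect_language implementation):

```python
from pathlib import Path

# Illustrative subset of an extension-to-language table.
EXTENSION_MAP = {
    ".py": "python", ".js": "javascript", ".ts": "typescript",
    ".go": "go", ".rs": "rust", ".java": "java",
}

def detect(path: str) -> str:
    """Guess the language from the file extension; 'unknown' otherwise."""
    return EXTENSION_MAP.get(Path(path).suffix.lower(), "unknown")

print(detect("main.rs"))    # rust
print(detect("script.py"))  # python
```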

Supported Languages

Language Category Complexity Syntax
Python Scripting ✓ radon + lizard ✓ ast + tree-sitter
JavaScript Web/Scripting ✓ lizard ✓ tree-sitter
TypeScript Web/Scripting ✓ lizard ✓ tree-sitter
Go Compiled ✓ lizard ✓ tree-sitter
Rust Compiled ✓ lizard ✓ tree-sitter
Java Compiled ✓ lizard ✓ tree-sitter
C/C++ Compiled ✓ lizard ✓ tree-sitter
Ruby Scripting ✓ lizard ✓ tree-sitter
PHP Web ✓ lizard ✓ tree-sitter
Swift Compiled ✓ lizard ✓ tree-sitter
+ 20 more via tree-sitter ✓ tree-sitter ✓ tree-sitter

See examples/07_multi_language/ for a comprehensive demo.

Examples

Each example lives in its own folder with main.py and README.md. Run all at once:

cd examples && ./run.sh

Example Details & Links

Example What it demonstrates Files Description
01_basic_validation/ Default pipeline — good, bad, and complex code main.py, README.md Basic validation with syntax, imports, complexity, and security checks
02_ast_comparison/ AST similarity scoring, tree-sitter multi-language parsing main.py, README.md Compare code similarity using AST fingerprinting
03_security_check/ Security pattern detection (eval, exec, hardcoded secrets) main.py, README.md Detect security vulnerabilities and anti-patterns
04_graph_analysis/ Import/call graph building and structural diffing main.py, README.md Build and analyze code dependency graphs
05_llm_semantic_review/ Ollama Qwen 2.5 Coder 7B LLM-as-judge review main.py, README.md Semantic code review using LLM
06_multilang_validation/ JavaScript and C validation via tree-sitter main.py, README.md Multi-language validation examples
07_multi_language/ Comprehensive multi-language support — 8+ languages with auto-detection main.py, README.md Complete multi-language validation demo
08_code2llm_integration/ Project analysis integration with code2llm main.py, README.md Integration with code2llm analysis tools
09_code2logic_integration/ Call graph analysis with code2logic main.py, README.md Advanced call graph analysis
10_mcp_ollama_demo/ MCP (Model Context Protocol) demo with Ollama main.py, README.md Model Context Protocol integration
11_claude_code_autonomous/ Autonomous refactoring with Claude Code claude_autonomous_demo.py, README.md AI-powered autonomous code refactoring
12_ollama_simple_demo/ Simplified Ollama integration example ollama_simple_demo.py, README.md Basic Ollama LLM integration

Running Examples

# Run all examples
cd examples && ./run.sh

# Run specific example
python examples/01_basic_validation/main.py

# Run with validation
vallm validate --file examples/01_basic_validation/main.py --verbose

# Batch validate all examples
vallm batch examples/ --recursive --include "*.py" --verbose

Architecture

src/vallm/
├── cli/                    # 🆕 Modular CLI package
│   ├── __init__.py         # Command registration and app export
│   ├── command_handlers.py # CLI command implementations
│   ├── output_formatters.py # Output formatting utilities
│   ├── settings_builders.py # Settings configuration logic
│   └── batch_processor.py  # Batch processing logic
├── cli.py                  # 🆕 Simplified main entry point (9L)
├── config.py               # pydantic-settings (VALLM_* env vars)
├── hookspecs.py            # pluggy hook specifications
├── scoring.py              # Weighted scoring + verdict engine (CC=18 validate function)
├── core/
│   ├── languages.py        # Language enum, auto-detection, 30+ languages
│   ├── proposal.py         # Proposal model
│   ├── ast_compare.py      # tree-sitter + Python AST similarity
│   ├── graph_builder.py    # Import/call graph construction
│   └── graph_diff.py       # Before/after graph comparison
├── validators/
│   ├── syntax.py           # Tier 1: ast.parse + tree-sitter (multi-lang)
│   ├── imports/            # 🆕 Modular import validators
│   │   ├── base.py         # 🆕 Enhanced base class with shared validate()
│   │   ├── factory.py      # Validator factory
│   │   ├── python_imports.py
│   │   ├── go_imports.py   # 🆕 Uses shared validation logic
│   │   ├── rust_imports.py # 🆕 Uses shared validation logic
│   │   └── java_imports.py # 🆕 Uses shared validation logic
│   ├── complexity.py       # Tier 2: radon (Python) + lizard (16+ langs)
│   ├── security.py         # Tier 2: patterns + bandit
│   └── semantic.py         # Tier 3: LLM-as-judge
└── sandbox/
    └── runner.py           # subprocess / Docker execution

🆕 Code Health Improvements

Recent Refactoring Achievements:

CLI Modularization - Split 850L god module into focused packages:

  • cli/command_handlers.py - Command implementations
  • cli/output_formatters.py - Output formatting logic
  • cli/settings_builders.py - Settings configuration
  • cli/batch_processor.py - Batch processing logic
  • cli/__init__.py - Command registration and app export

Import Validator Cleanup - Removed 653L legacy module:

  • Enhanced BaseImportValidator with shared validation logic
  • Eliminated duplicate validate() methods across language validators
  • Improved maintainability through template method pattern
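
The template method pattern mentioned above can be sketched like this: the base class owns the shared validate() flow, and each language subclass supplies only the resolution step (class names, method names, and the stdlib subset are illustrative, not vallm's internals):

```python
class BaseImportValidator:
    """Shared validation flow; subclasses override resolve() only."""

    def validate(self, imports):
        missing = [name for name in imports if not self.resolve(name)]
        score = 1.0 if not missing else 1.0 - len(missing) / len(imports)
        return {"score": round(score, 2), "missing": missing}

    def resolve(self, name: str) -> bool:  # the template "hook" step
        raise NotImplementedError

class GoImportValidator(BaseImportValidator):
    STDLIB = {"fmt", "os", "net/http"}  # illustrative subset

    def resolve(self, name: str) -> bool:
        return name in self.STDLIB

print(GoImportValidator().validate(["fmt", "github.com/unknown/pkg"]))
```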

Code Deduplication - Removed 469 lines of duplicated code:

  • Shared validation runners for examples (154 lines saved)
  • Centralized analysis data saving (66 lines saved)
  • Common demo utilities (60 lines saved)
  • LLM response parsing utilities (40 lines saved)
  • Import validator logic consolidation (40 lines saved)
  • Additional utility function consolidation (109 lines saved)

Updated Code Metrics:

Metric Before After Improvement
God Modules (>500L) 2 0 100% eliminated
Max Cyclomatic Complexity 42 ~18 57% reduction
Code Duplication 504 lines 35 lines 93% eliminated
CLI Module Size 850 lines 9 lines 99% reduction

Remaining Critical Functions:

Function Location CC Status
validate scoring.py:122 18 🟡 Acceptable
_check_lizard complexity.py 12 🟡 Acceptable
_parse_response semantic.py 12 🟡 Acceptable

Roadmap

v0.2 — Completeness (major progress)

  • ✅ CLI modularization - Split 850L god module into focused packages
  • ✅ Import validator cleanup - Removed 653L legacy module
  • ✅ Code deduplication - Eliminated 469 lines of duplicate code
  • ✅ God module elimination - 100% reduction in god modules
  • ✅ Complexity reduction - 57% reduction in max cyclomatic complexity
  • Wire pluggy plugin manager (entry_point-based validator discovery)
  • Add LogicalErrorValidator (pyflakes) and LintValidator (ruff)
  • TOML config loading (vallm.toml, [tool.vallm])
  • Pre-commit hook integration
  • GitHub Actions CI/CD

v0.3 — Depth

  • AST edit distance via apted/zss
  • CodeBERTScore embedding similarity
  • NetworkX cycle detection and centrality in graph analysis
  • RegressionValidator (Tier 4) with pytest-json-report
  • TypeCheckValidator (mypy/pyright)
  • Extract output formatters

v0.4 — Intelligence

  • --fix auto-repair mode (LLM-based retry loop)
  • hypothesis/crosshair property-based test generation
  • E2B cloud sandbox backend
  • Streaming LLM output

See TODO.md for the full task breakdown.

Testing

Running Tests

# Run all tests
pytest

# Run specific test categories
pytest tests/test_syntax.py
pytest tests/test_imports.py
pytest tests/test_complexity.py
pytest tests/test_security.py
pytest tests/test_semantic_validation.py

# Run CLI end-to-end tests
pytest tests/test_cli_e2e.py -v

# Run with coverage
pytest --cov=vallm --cov-report=html

# Run performance tests
pytest tests/test_performance.py -v

Test Coverage

Current test coverage: 85% across all modules.

  • ✅ Syntax validation: 95% coverage
  • ✅ Import resolution: 87% coverage
  • ✅ Complexity analysis: 82% coverage
  • ✅ Security scanning: 79% coverage
  • ✅ Semantic validation: 71% coverage
  • ✅ CLI commands: 89% coverage

License

Apache License 2.0 - see LICENSE for details.

Author

Created by Tom Sapletta - tom@sapletta.com
