vallm
A complete toolkit for validating LLM-generated code.
vallm validates code proposals through a four-tier pipeline — from millisecond syntax checks to LLM-as-judge semantic review — before a single line ships.
Features
- Multi-language AST parsing via tree-sitter (165+ languages)
- Syntax validation with ast.parse and tree-sitter error detection
- Import resolution checking for Python
- Complexity metrics via radon (Python) and lizard (16 languages)
- Security scanning with pattern matching and optional bandit integration
- LLM-as-judge semantic review via Ollama, litellm, or direct HTTP
- Code graph analysis — import/call graph diffing for structural regression detection
- AST similarity scoring with normalized fingerprinting
- Pluggy-based plugin system for custom validators
- Rich CLI with JSON/text output formats
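To give a feel for the AST similarity feature above, here is a rough sketch of normalized fingerprinting using only the standard `ast` module (an illustration of the idea, not vallm's actual implementation): identifiers and constants are replaced with placeholders before hashing, so code that differs only in naming fingerprints identically.

```python
import ast
import hashlib

def fingerprint(source: str) -> str:
    """Hash a normalized AST: identifiers and constants are replaced
    with placeholders so renaming does not change the fingerprint."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
        elif isinstance(node, ast.arg):
            node.arg = "_"
        elif isinstance(node, ast.FunctionDef):
            node.name = "_"
        elif isinstance(node, ast.Constant):
            node.value = 0
    return hashlib.sha256(ast.dump(tree).encode()).hexdigest()

# Two functions that differ only in naming fingerprint identically.
a = fingerprint("def add(x, y):\n    return x + y")
b = fingerprint("def plus(a, b):\n    return a + b")
print(a == b)  # True
```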
Installation
```shell
pip install vallm
```
With optional dependencies:
```shell
pip install vallm[all]       # Everything
pip install vallm[llm]       # Ollama + litellm for semantic review
pip install vallm[security]  # bandit integration
pip install vallm[semantic]  # CodeBERTScore
pip install vallm[graph]     # NetworkX graph analysis
```
Quick Start
Python API
```python
from vallm import Proposal, validate

code = """
def fibonacci(n: int) -> list[int]:
    if n <= 0:
        return []
    fib = [0, 1]
    for i in range(2, n):
        fib.append(fib[i-1] + fib[i-2])
    return fib
"""

proposal = Proposal(code=code, language="python")
result = validate(proposal)

print(f"Verdict: {result.verdict.value}")  # pass / review / fail
print(f"Score: {result.weighted_score:.2f}")
```
CLI
```shell
# Validate a file
vallm validate --file mycode.py

# Quick syntax check
vallm check mycode.py

# With LLM semantic review (requires Ollama)
vallm validate --file mycode.py --semantic --model qwen2.5-coder:7b

# JSON output
vallm validate --file mycode.py --format json

# Show config and available validators
vallm info
```
With Ollama (LLM-as-judge)
```shell
# 1. Install Ollama, then pull the judge model
ollama pull qwen2.5-coder:7b

# 2. Run with semantic review
vallm validate --file mycode.py --semantic
```
Or from the Python API:

```python
from vallm import Proposal, validate, VallmSettings

settings = VallmSettings(
    enable_semantic=True,
    llm_provider="ollama",
    llm_model="qwen2.5-coder:7b",
)

proposal = Proposal(
    code=new_code,
    language="python",
    reference_code=existing_code,  # optional: compare against a reference
)
result = validate(proposal, settings)
```
Validation Pipeline
| Tier | Speed | Validators | What it catches |
|---|---|---|---|
| 1 | ms | syntax, imports | Parse errors, missing modules |
| 2 | seconds | complexity, security | High CC, dangerous patterns |
| 3 | seconds | semantic (LLM) | Logic errors, poor practices |
| 4 | minutes | regression (tests) | Behavioral regressions |
The pipeline fails fast — Tier 1 errors stop execution immediately.
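The fail-fast ordering can be modeled roughly like this (a simplified sketch with hypothetical names, not vallm's internals): validators run in tier order, and a failing Tier 1 check short-circuits the slower tiers.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Validator:
    name: str
    tier: int
    check: Callable[[str], bool]  # True = pass

def run_pipeline(code: str, validators: list[Validator]) -> list[str]:
    """Run validators tier by tier; stop at the first failing Tier 1 check."""
    results = []
    for v in sorted(validators, key=lambda v: v.tier):
        ok = v.check(code)
        results.append(f"{v.name}: {'pass' if ok else 'fail'}")
        if not ok and v.tier == 1:
            break  # fail fast: never pay for the slower tiers
    return results

def syntax_ok(code: str) -> bool:
    try:
        compile(code, "<proposal>", "exec")
        return True
    except SyntaxError:
        return False

pipeline = [
    Validator("syntax", 1, syntax_ok),
    Validator("complexity", 2, lambda c: len(c.splitlines()) < 200),
]
print(run_pipeline("def f(:", pipeline))  # ['syntax: fail'] -- tier 2 never runs
```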
Configuration
Via environment variables (VALLM_*), vallm.toml, or pyproject.toml [tool.vallm]:
```toml
# vallm.toml
pass_threshold = 0.8
review_threshold = 0.5
max_cyclomatic_complexity = 15
enable_semantic = true
llm_provider = "ollama"
llm_model = "qwen2.5-coder:7b"
```
Plugin System
Write custom validators using pluggy:
```python
from vallm.hookspecs import hookimpl
from vallm.scoring import ValidationResult

class MyValidator:
    tier = 2
    name = "custom"
    weight = 1.0

    @hookimpl
    def validate_proposal(self, proposal, context):
        # Your validation logic goes here.
        return ValidationResult(validator=self.name, score=1.0, weight=self.weight)
```
Register via pyproject.toml:
```toml
[project.entry-points."vallm.validators"]
custom = "mypackage.validators:MyValidator"
```
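As a concrete (hypothetical) example, here is the scoring logic for a Tier 2 validator that penalizes bare `print()` calls, shown standalone with the pluggy plumbing from the snippet above omitted:

```python
import ast

class NoPrintValidator:
    """Hypothetical custom validator: score drops to 0 if the proposal
    calls print() directly (e.g. a project that mandates logging)."""
    tier = 2
    name = "no-print"
    weight = 0.5

    def score(self, code: str) -> float:
        tree = ast.parse(code)
        prints = [
            n for n in ast.walk(tree)
            if isinstance(n, ast.Call)
            and isinstance(n.func, ast.Name)
            and n.func.id == "print"
        ]
        return 0.0 if prints else 1.0

v = NoPrintValidator()
print(v.score("print('debug')"))  # 0.0
print(v.score("import logging"))  # 1.0
```

Wired into vallm, the `score` value would be returned inside a `ValidationResult` from `validate_proposal`, as in the snippet above.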
Examples
See the examples/ directory:
- `01_basic_validation.py` — Default pipeline with good, bad, and complex code
- `02_ast_comparison.py` — AST similarity and structural diff
- `03_security_check.py` — Security pattern detection
- `04_graph_analysis.py` — Import/call graph diffing
- `05_llm_semantic_review.py` — Ollama LLM-as-judge review
- `06_multilang_validation.py` — JavaScript and C validation
License
Apache License 2.0 - see LICENSE for details.
Author
Created by Tom Sapletta - tom@sapletta.com