llm-code-validator

CLI guardrail for catching stale Python APIs before runtime.

A Python CLI for checking dependency-heavy Python projects for stale or version-incompatible third-party API usage before commit or CI.
It parses Python files with Python's built-in ast module, checks imports and calls against a maintained API-drift rule database, and reports issues before runtime.
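The core idea can be sketched in a few lines. This is an illustrative simplification, not the tool's actual implementation, and the inline rule table here is hypothetical:

```python
# Illustrative sketch: detect a known-stale import with the ast module.
import ast

# Hypothetical mini rule table mapping stale import paths to replacements.
RULES = {
    "sqlalchemy.ext.declarative.declarative_base":
        "sqlalchemy.orm.declarative_base",
}

def find_stale_imports(source: str):
    """Yield (lineno, stale_import, suggested_replacement) tuples."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                full = f"{node.module}.{alias.name}"
                if full in RULES:
                    yield node.lineno, full, RULES[full]

code = "from sqlalchemy.ext.declarative import declarative_base\n"
print(list(find_stale_imports(code)))
```

Because the check works on the syntax tree rather than the running program, it can flag drift without importing the project's dependencies.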
Default checks are local-only. No OpenAI, Anthropic, or other LLM API key is required, and the tool does not make network calls in normal use.
Current local validation: 74 tests passing, 68 API-drift rules, and PyPI install verified.
PyPI: https://pypi.org/project/llm-code-validator/
Install
pip install llm-code-validator
For local development:
git clone https://github.com/mathew-felix/llm-code-validator
cd llm-code-validator
pip install -e ".[dev]"
Quick Use
llm-code-validator check file.py
llm-code-validator check src/
llm-code-validator check --staged
llm-code-validator check src/ --format json
llm-code-validator check src/ --format github
Exit codes:
- 0: no diagnostics
- 1: diagnostics found
- 2: tool error
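A wrapper script can gate on these documented codes. The `interpret` helper below is illustrative, not part of the tool:

```python
# Sketch: map the documented exit codes (0/1/2) to CI outcomes.
def interpret(returncode: int) -> str:
    return {0: "clean", 1: "diagnostics found", 2: "tool error"}.get(
        returncode, "unknown"
    )

# In CI, gate on the code (assumes the CLI is on PATH):
#   import subprocess, sys
#   result = subprocess.run(["llm-code-validator", "check", "src/"])
#   if interpret(result.returncode) != "clean":
#       sys.exit(result.returncode)
```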
Example
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
llm-code-validator check app.py
app.py:1 LCV001 warning sqlalchemy.declarative_base sqlalchemy.declarative_base is incompatible with sqlalchemy>=2.0.0
fix: from sqlalchemy.orm import declarative_base
Preview or apply safe fixes:
llm-code-validator fix app.py
llm-code-validator fix app.py --write
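At its simplest, the LCV001 fix shown above is a one-line import rewrite. The sketch below is a text-level approximation; the tool's actual fix engine may rewrite code differently:

```python
# Text-level sketch of the LCV001 safe fix (illustrative only).
STALE = "from sqlalchemy.ext.declarative import declarative_base"
FIXED = "from sqlalchemy.orm import declarative_base"

def preview_fix(source: str) -> str:
    """Return the fixed source without touching the file,
    like `fix` without --write."""
    return source.replace(STALE, FIXED)

print(preview_fix(STALE + "\nBase = declarative_base()\n"))
```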
What It Checks
Current rule database:
- 68 API-drift rules
- 15 safe fixes
- Rules for OpenAI, Anthropic, LangChain, LangGraph, LlamaIndex, Pinecone, ChromaDB, FastAPI, Pydantic, pandas, NumPy, SQLAlchemy, Torch, and Transformers
Validate the rule database:
llm-code-validator validate-signatures
This checks source-level API migration patterns. It does not replace Ruff for linting, mypy for type checking, pip-audit for vulnerability checks, or Dependabot for dependency updates.
Security Model
By default, llm-code-validator reads local Python files, parses them with Python's built-in ast module, and compares imports and calls with the bundled rule database. It does not send source code, dependency files, environment variables, or secrets to any external service.
If optional AI-assisted review is added in the future, it should remain explicit opt-in and should minimize and redact any code snippets before a provider request.
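One way such redaction could work is to mask string literals and secret-like assignments before a snippet ever leaves the machine. The patterns and names below are hypothetical, not part of the tool:

```python
# Hedged sketch of snippet redaction: mask obvious secret assignments
# first, then any remaining string literals.
import re

SECRET_ASSIGN = re.compile(r"(?i)((?:api_key|token|password|secret)\s*=\s*).+")
STRING_LITERAL = re.compile(r"(['\"]).*?\1")

def redact(snippet: str) -> str:
    snippet = SECRET_ASSIGN.sub(r"\1<redacted>", snippet)
    return STRING_LITERAL.sub("'<redacted>'", snippet)

print(redact('api_key = "sk-123"'))  # api_key = <redacted>
```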
Rule Maintenance
Public rules are reviewed before release. New rules should be added to data/library_signatures.json, backed by official evidence such as migration guides, release notes, official docs, or maintainer discussions, and covered by a test or benchmark case.
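A rule entry might look like the following. The field names here are hypothetical; see docs/rules.md for the actual schema:

```json
{
  "id": "LCV001",
  "module": "sqlalchemy.ext.declarative",
  "symbol": "declarative_base",
  "severity": "warning",
  "message": "sqlalchemy.declarative_base is incompatible with sqlalchemy>=2.0.0",
  "fix": "from sqlalchemy.orm import declarative_base",
  "evidence": "official SQLAlchemy 2.0 migration guide"
}
```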
The packaged PyPI wheel includes llm_code_validator/library_signatures.json, so users receive reviewed rule updates by upgrading the package:
pip install --upgrade llm-code-validator
See docs/rules.md for the contribution workflow and docs/release.md for release verification.
Limitations
- Detects known API-drift rules only.
- Does not detect every possible Python, dependency, security, or runtime issue.
- Does not prove full program correctness.
- Complex dynamic imports may be missed.
- Dependency checks depend on available project metadata.
- Suggested fixes require review before applying.
- External repository findings are treated as candidates until manually reviewed.
Integrations
Pre-commit:
repos:
  - repo: https://github.com/mathew-felix/llm-code-validator
    rev: v0.1.0
    hooks:
      - id: llm-code-validator
GitHub Actions:
- run: pip install llm-code-validator
- run: llm-code-validator check . --format github
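Expanded into a minimal standalone workflow (the file path, trigger, and Python version below are illustrative choices, not requirements):

```yaml
# .github/workflows/validate.yml (illustrative)
name: llm-code-validator
on: [push, pull_request]
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install llm-code-validator
      - run: llm-code-validator check . --format github
```

The `--format github` output emits workflow annotations, so diagnostics appear inline on the pull request diff.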
Development
Run tests:
pytest -q
Current local result:
74 passed
Run benchmarks:
python -m llm_code_validator.benchmark --dataset validation_dataset/cli_benchmark_cases.json
python -m llm_code_validator.benchmark --dataset validation_dataset/ai_stack_benchmark_cases.json
More Details
- docs/demo.md: command walkthrough
- docs/accuracy.md: benchmark and external-review notes
- docs/rules.md: rule database notes
- docs/security.md: local-only, AI-review, and policy controls
- docs/ai-review.md: optional AI-review roadmap and candidate-rule workflow
- docs/release.md: release steps
Project details
File details
Details for the file llm_code_validator-0.1.1.tar.gz.
File metadata
- Download URL: llm_code_validator-0.1.1.tar.gz
- Upload date:
- Size: 35.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 1af2f2689be80d50e1b829674d0634a0a53e071891ea4f5144953f65aabbe256 |
| MD5 | 79f7ac00002dd90bb1822ed3a6318ec5 |
| BLAKE2b-256 | 3e5b1254d29125375338d886cce12df5b241a54c1922f66e234ea3c3cfd51e16 |
File details
Details for the file llm_code_validator-0.1.1-py3-none-any.whl.
File metadata
- Download URL: llm_code_validator-0.1.1-py3-none-any.whl
- Upload date:
- Size: 31.1 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.0
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 2245696f4939183cfbdebf1591187a729640b25014bee262bfba1cf6cb98f9ff |
| MD5 | b2f9a67b1679e6792dd38d84d9ed92bb |
| BLAKE2b-256 | ff7bd1f676f52e29d750d017ee4f2c207f0051de544135d22e68d24dd2cc6b91 |