
llm-code-validator

CLI guardrail for catching stale Python APIs before runtime.

A Python CLI that checks dependency-heavy projects for stale or version-incompatible third-party API usage before commit or CI.

It parses Python files with the standard-library ast module, checks imports and calls against a maintained API-drift rule database, and reports issues before runtime.

Default checks are local-only. No OpenAI, Anthropic, or other LLM API key is required, and the tool does not make network calls in normal use.
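As a rough sketch of the approach (not the tool's actual implementation), a single import-drift rule can be matched by walking the AST of a source file:

```python
import ast

# Hypothetical miniature of the approach: map one drifted import path to a
# suggested replacement. The real rule database is far richer than this.
DRIFT_RULES = {
    "sqlalchemy.ext.declarative.declarative_base":
        "from sqlalchemy.orm import declarative_base",
}

def find_drift(source: str):
    """Return (lineno, dotted_name, suggested_fix) for each drifted import."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                dotted = f"{node.module}.{alias.name}"
                if dotted in DRIFT_RULES:
                    findings.append((node.lineno, dotted, DRIFT_RULES[dotted]))
    return findings

print(find_drift("from sqlalchemy.ext.declarative import declarative_base\n"))
```

Because only the AST is inspected, nothing is executed and no dependencies need to be importable during the check.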

Current local validation: 74 tests passing, 68 API-drift rules, and PyPI install verified.

PyPI: https://pypi.org/project/llm-code-validator/

[Terminal demo showing API drift diagnostics and safe fix preview]

Install

pip install llm-code-validator

For local development:

git clone https://github.com/mathew-felix/llm-code-validator
cd llm-code-validator
pip install -e ".[dev]"

Quick Use

llm-code-validator check file.py
llm-code-validator check src/
llm-code-validator check --staged
llm-code-validator check src/ --format json
llm-code-validator check src/ --format github

Exit codes:

  • 0: no diagnostics
  • 1: diagnostics found
  • 2: tool error

Example

Given app.py:

from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

Running the check:

llm-code-validator check app.py
app.py:1 LCV001 warning sqlalchemy.declarative_base sqlalchemy.declarative_base is incompatible with sqlalchemy>=2.0.0
  fix: from sqlalchemy.orm import declarative_base

Preview or apply safe fixes:

llm-code-validator fix app.py
llm-code-validator fix app.py --write
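One way a preview-before-write mode like this can work (a hedged sketch, not the tool's actual code) is to render a unified diff and only touch the file when an explicit write flag is passed:

```python
import difflib

def preview_fix(original: str, fixed: str, path: str) -> str:
    """Render the unified diff a preview mode could show before --write."""
    return "".join(difflib.unified_diff(
        original.splitlines(keepends=True),
        fixed.splitlines(keepends=True),
        fromfile=path, tofile=path,
    ))

before = "from sqlalchemy.ext.declarative import declarative_base\n"
after = "from sqlalchemy.orm import declarative_base\n"
print(preview_fix(before, after, "app.py"))
```

Keeping preview as the default makes the fix command safe to run casually; the file on disk changes only under the opt-in flag.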

What It Checks

Current rule database:

  • 68 API-drift rules
  • 15 safe fixes
  • Rules for OpenAI, Anthropic, LangChain, LangGraph, LlamaIndex, Pinecone, ChromaDB, FastAPI, Pydantic, pandas, NumPy, SQLAlchemy, Torch, and Transformers
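As an example of the kind of drift these rules target: openai>=1.0 removed the module-level openai.ChatCompletion interface in favor of client.chat.completions.create. A sketch of how such an attribute pattern can be spotted with ast (illustrative only, not the tool's code):

```python
import ast

# Hypothetical check: flag uses of openai.ChatCompletion, which the
# openai>=1.0 client replaced with client.chat.completions.create.
def flag_legacy_openai(source: str):
    """Return line numbers where openai.ChatCompletion is referenced."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Attribute)
                and isinstance(node.value, ast.Name)
                and node.value.id == "openai"
                and node.attr == "ChatCompletion"):
            hits.append(node.lineno)
    return hits

print(flag_legacy_openai("resp = openai.ChatCompletion.create(model='gpt-4')"))
```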

Validate the rule database:

llm-code-validator validate-signatures

The tool checks source-level API migration patterns. It complements rather than replaces Ruff for linting, mypy for type checking, pip-audit for vulnerability checks, and Dependabot for dependency updates.

Security Model

By default, llm-code-validator reads local Python files, parses them with Python's built-in ast module, and compares imports and calls with the bundled rule database. It does not send source code, dependency files, environment variables, or secrets to any external service.

If optional AI-assisted review is added in the future, it should remain explicitly opt-in and should minimize and redact any code snippets before sending a provider request.

Rule Maintenance

Public rules are reviewed before release. New rules should be added to data/library_signatures.json, backed by official evidence such as migration guides, release notes, official docs, or maintainer discussions, and covered by a test or benchmark case.
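The exact schema of data/library_signatures.json is defined by the project; purely as an illustration, an entry for the SQLAlchemy diagnostic shown earlier might carry fields along these lines (field names here are hypothetical):

```json
{
  "id": "LCV001",
  "pattern": "sqlalchemy.ext.declarative.declarative_base",
  "incompatible_with": "sqlalchemy>=2.0.0",
  "severity": "warning",
  "fix": "from sqlalchemy.orm import declarative_base",
  "evidence": "SQLAlchemy 2.0 migration guide"
}
```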

The packaged PyPI wheel includes llm_code_validator/library_signatures.json, so users receive reviewed rule updates by upgrading the package:

pip install --upgrade llm-code-validator

See docs/rules.md for the contribution workflow and docs/release.md for release verification.

Limitations

  • Detects known API-drift rules only.
  • Does not detect every possible Python, dependency, security, or runtime issue.
  • Does not prove full program correctness.
  • Complex dynamic imports may be missed.
  • Dependency checks depend on available project metadata.
  • Suggested fixes require review before applying.
  • External repository findings are treated as candidates until manually reviewed.

Integrations

Pre-commit:

repos:
  - repo: https://github.com/mathew-felix/llm-code-validator
    rev: v0.1.0
    hooks:
      - id: llm-code-validator

GitHub Actions:

- run: pip install llm-code-validator
- run: llm-code-validator check . --format github

Development

Run tests:

pytest -q

Current local result:

74 passed

Run benchmarks:

python -m llm_code_validator.benchmark --dataset validation_dataset/cli_benchmark_cases.json
python -m llm_code_validator.benchmark --dataset validation_dataset/ai_stack_benchmark_cases.json

More Details

  • docs/demo.md: command walkthrough
  • docs/accuracy.md: benchmark and external-review notes
  • docs/rules.md: rule database notes
  • docs/security.md: local-only, AI-review, and policy controls
  • docs/ai-review.md: optional AI-review roadmap and candidate-rule workflow
  • docs/release.md: release steps
