Project description
MAID Runner
A tool-agnostic validation framework for the Manifest-driven AI Development (MAID) methodology. MAID Runner validates that code artifacts align with their declarative manifests, ensuring architectural integrity in AI-assisted development.
Introduction Video
Watch the introductory video to learn about MAID Runner and the MAID methodology.
Conceptual Framework: Structural Determinism in Generative AI
1. The Core Problem: Probabilistic Entropy
Current Large Language Models (LLMs) function on Probabilistic Generation. They predict the next token based on statistical likelihood, optimizing for "plausibility" rather than correctness or architectural soundness.
- The Consequence: Without intervention, this stochastic nature inevitably leads to "AI Slop": code that is syntactically valid but architecturally chaotic, introducing circular dependencies, hallucinated methods, and SOLID violations.
- The Gap: Standard validation methods (Unit Tests) only check behavior, leaving the structure vulnerable to entropy.
2. The Solution: Dual-Constraint Validation
MAID Runner introduces a Governance Layer that enforces a "Double-Coordinate Target" for accepted code. To be valid, generation must satisfy two distinct axes simultaneously:
- Coordinate A (Behavioral): The code must pass the Test Suite (Functional Correctness).
- Coordinate B (Structural): The code must strictly adhere to a pre-designed JSON Manifest (Topological Correctness).
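The dual-constraint gate can be sketched as a simple conjunction of two independent predicates. This is a conceptual illustration only, not MAID Runner's implementation; the predicate names are invented for the example:

```python
from typing import Callable

def accept(code: str,
           passes_tests: Callable[[str], bool],
           matches_manifest: Callable[[str], bool]) -> bool:
    """Code is accepted only when BOTH axes hold:
    Coordinate A (behavioral) AND Coordinate B (structural)."""
    return passes_tests(code) and matches_manifest(code)

# A generation that is behaviorally correct but structurally wrong is rejected.
print(accept("impl", lambda c: True, lambda c: False))  # False
print(accept("impl", lambda c: True, lambda c: True))   # True
```

The point is that neither axis alone is sufficient: passing tests cannot rescue a manifest violation, and vice versa.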
3. Methodology: Structural Determinism
The framework applies Structural Determinism to Probabilistic Generation.
- Search Space Restriction: By treating the software architecture as an immutable constant (via the Manifest) rather than a variable, MAID Runner drastically reduces the AI's "search space."
- The Mechanism: The AI is forced to "fill in the blanks" of a valid design rather than guessing the design itself. This ensures that even if the AI's internal logic varies, the external contract and dependency graph remain deterministic.
4. The Paradigm Shift: AI as a "Stochastic Compiler"
MAID Runner redefines the operational role of the AI Agent:
- From "Junior Developer": A creative entity that requires reactive, human-in-the-loop code review to catch errors.
- To "Stochastic Compiler": A constrained engine that translates a rigid specification (The Manifest) into implementation details.
This shifts the developer's primary activity from Prompt Engineering (persuading the AI via natural language) to Spec Engineering (defining the precise architectural boundaries the AI must respect).
5. Architectural Objective: The "Last Mile" of Reliability
By enforcing architectural topology before execution, MAID Runner solves the "Last Mile" problem of autonomous coding. It decouples Speed of Generation from Quality of Architecture, ensuring that rapid iteration does not result in technical debt.
Supported Languages
MAID Runner supports multi-language validation with production-ready parsers:
Python
- Extensions: .py
- Parser: Python AST (built-in)
- Features: Classes, functions, methods, attributes, type hints, async/await, decorators
TypeScript/JavaScript
- Extensions: .ts, .tsx, .js, .jsx
- Parser: tree-sitter (production-grade)
- Features: Classes, interfaces, type aliases, enums, namespaces, functions, methods, decorators, generics, JSX/TSX
- Framework Support: Angular, React, NestJS, Vue
- Coverage: 99.9% of TypeScript language constructs
All validation features (behavioral tests, implementation validation, snapshot generation, test stub generation) work seamlessly across both languages.
Architecture Philosophy
MAID Runner is a validation-first tool. Its core purpose is to validate that manifests, tests, and implementations comply with MAID methodology. It also provides helper commands to generate manifest snapshots and test stubs from existing code, but does not generate production code or automate the development workflow itself.
MAID Runner works with any development approach, from fully manual to fully automated. See Usage Modes for details.
┌────────────────────────────────────────┐
│  External Tools (Your Choice)          │
│  - Claude Code / Aider / Cursor        │
│  - Custom AI agents                    │
│  - Manual (human developers)           │
│                                        │
│  Responsibilities:                     │
│  ✅ Create manifests                   │
│  ✅ Generate behavioral tests          │
│  ✅ Implement code                     │
│  ✅ Orchestrate workflow               │
└────────────────────────────────────────┘
                  │
                  │ Creates files
                  ▼
┌────────────────────────────────────────┐
│  MAID Runner (Validation-First)        │
│                                        │
│  Core Responsibilities:                │
│  ✅ Validate manifest schema           │
│  ✅ Validate behavioral tests          │
│  ✅ Validate implementation            │
│  ✅ Validate type hints                │
│  ✅ Validate manifest chain            │
│  ✅ Track file compliance              │
│                                        │
│  Helper Capabilities:                  │
│  ✅ Generate manifest snapshots        │
│  ✅ Generate test stubs                │
│                                        │
│  Boundaries:                           │
│  ❌ No production code generation      │
│  ❌ No workflow automation             │
└────────────────────────────────────────┘
Usage Modes
MAID Runner supports three development approaches, differing only in who creates the files:
1. Manual Development
- Humans write manifests, tests, and implementation
- MAID Runner validates compliance at each step
- Best for: Learning MAID, small teams, strict oversight requirements
2. Interactive AI-Assisted
- AI tools suggest code, humans review and approve
- MAID Runner validates during collaboration
- Tools: Claude Code CLI, Cursor, Aider, GitHub Copilot (MCP server coming soon)
- Best for: Faster iteration with human control
3. Fully Automated
- AI agents orchestrate entire workflow with human review checkpoints
- MAID Runner validates automatically
- Tools: Claude Code CLI (headless mode), custom AI agents, MAID Agents framework
- Best for: Large-scale development, established MAID practices
In all modes, MAID Runner provides identical validation. The workflow (manifest → tests → implementation → validation) remains the same regardless of who performs each step.
Installation
Claude Code Plugin (Recommended for Claude Code Users)
For Claude Code users, install MAID Runner via the plugin marketplace:
# First, add the plugin marketplace
/plugin marketplace add aidrivencoder/claude-plugins
# Then install MAID Runner
/plugin install maid-runner@aidrivencoder
The plugin auto-installs the maid-runner PyPI package on session start and provides MAID workflow commands, specialized agents, and on-demand methodology documentation; no manual initialization is required.
See the Claude Code Plugin documentation for details.
From PyPI (Standalone Usage)
For non-Claude Code environments, install MAID Runner from PyPI:
# Using pip
pip install maid-runner
# Using uv (recommended)
uv pip install maid-runner
Local Development (Editable Install)
For local development, clone the repository and install in editable mode:
# Using pip
pip install -e .
# Using uv (recommended)
uv pip install -e .
After installation, the maid command will be available:
# Check version
maid --version
# Get help
maid --help
Updating
# PyPI users: re-run maid init to update Claude files
pip install --upgrade maid-runner
maid init --force # Updates .claude/ files and CLAUDE.md
# Claude Code plugin users: updates happen automatically
The MAID Ecosystem
MAID Runner provides validation and helper utilities for manifest-driven development. For full workflow automation (planning → testing → implementing → validating), check out:
MAID Agents - Automated orchestration using Claude Code agents. Handles the complete development lifecycle from idea to validated implementation.
How They Work Together
MAID Runner (this tool) = Validation layer
- Validates manifest schemas
- Validates implementation matches contracts
- Validates behavioral tests
- Tool-agnostic (use with any AI tool, IDE, or manually)

MAID Agents = Orchestration + execution layer
- Automates manifest creation
- Generates behavioral tests
- Implements code via Claude Code
- Uses MAID Runner for validation

Most users start with MAID Runner for validation, then add MAID Agents for full automation.
Python API
You can also use MAID Runner as a Python library:
from maid_runner import (
validate_schema,
validate_with_ast,
discover_related_manifests,
generate_snapshot,
AlignmentError,
__version__,
)
# Validate a manifest schema
validate_schema(manifest_data, schema_path)
# Validate implementation against manifest
validate_with_ast(manifest_data, file_path, use_manifest_chain=True)
# Generate snapshot manifest
generate_snapshot("path/to/file.py", output_dir="manifests")
Core CLI Tools (For External Tools)
1. Manifest Validation
# Validate manifest structure and implementation
maid validate <manifest_path> [options]
# Options:
# --validation-mode {implementation,behavioral} # Default: implementation
# --use-manifest-chain # Merge related manifests
# --quiet, -q # Suppress success messages
# --watch, -w # Watch mode (requires manifest path)
# --watch-all # Watch all manifests
# --skip-tests # Skip running validationCommand
# --timeout SECONDS # Command timeout (default: 300)
# Exit Codes:
# 0 = Validation passed
# 1 = Validation failed
Examples:
# Validate implementation matches manifest
$ maid validate manifests/task-013.manifest.json
✅ Validation PASSED
# Validate behavioral tests USE artifacts
$ maid validate manifests/task-013.manifest.json --validation-mode behavioral
✅ Behavioral test validation PASSED
# Full validation with manifest chain (recommended)
$ maid validate manifests/task-013.manifest.json --use-manifest-chain
✅ Validation PASSED
# Quiet mode for automation
$ maid validate manifests/task-013.manifest.json --quiet
# Exit code 0 = success, no output
File Tracking Analysis:
When using --use-manifest-chain in implementation mode, MAID Runner performs automatic file tracking analysis to detect files not properly tracked in manifests:
$ maid validate manifests/task-013.manifest.json --use-manifest-chain
✅ Validation PASSED
──────────────────────────────────────────────────
FILE TRACKING ANALYSIS
──────────────────────────────────────────────────
🔴 UNDECLARED FILES (3 files)
Files exist in codebase but are not tracked in any manifest
- scripts/helper.py
  ❌ Not found in any manifest
Action: Add these files to creatableFiles or editableFiles
🟡 REGISTERED FILES (5 files)
Files are tracked but not fully MAID-compliant
- utils/config.py
  ⚠️ In editableFiles but no expectedArtifacts
  Manifests: task-010
Action: Add expectedArtifacts and validationCommand
✅ TRACKED (42 files)
All other source files are fully MAID-compliant
Summary: 3 UNDECLARED, 5 REGISTERED, 42 TRACKED
File Status Levels:
- 🔴 UNDECLARED: Files not in any manifest (high priority) - no audit trail
- 🟡 REGISTERED: Files tracked but incomplete compliance (medium priority) - missing artifacts/tests
- ✅ TRACKED: Files with full MAID compliance - properly documented and tested
This progressive compliance system helps teams migrate existing codebases to MAID while clearly identifying accountability gaps.
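The three status levels amount to a simple classification over what each manifest declares. A simplified sketch of that logic, using the manifest field names shown elsewhere in this README (this is illustrative, not the actual file_tracker implementation):

```python
def classify(path: str, manifests: list[dict]) -> str:
    """Return UNDECLARED / REGISTERED / TRACKED for a source file."""
    referencing = [m for m in manifests
                   if path in m.get("creatableFiles", [])
                   or path in m.get("editableFiles", [])]
    if not referencing:
        return "UNDECLARED"          # no audit trail at all
    if any(m.get("expectedArtifacts", {}).get("file") == path
           for m in referencing):
        return "TRACKED"             # declared with expected artifacts
    return "REGISTERED"              # declared, but compliance incomplete

manifests = [
    {"creatableFiles": ["a.py"],
     "expectedArtifacts": {"file": "a.py", "contains": []}},
    {"editableFiles": ["b.py"]},
]
print(classify("a.py", manifests))  # TRACKED
print(classify("b.py", manifests))  # REGISTERED
print(classify("c.py", manifests))  # UNDECLARED
```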
Watch Mode:
# Watch single manifest - re-run validation on file changes
$ maid validate manifests/task-070.manifest.json --watch
Watch mode enabled for: task-070.manifest.json
Watching 3 file(s) + manifest
Press Ctrl+C to stop.
Running initial validation:
✅ Validation PASSED
Detected change in maid_runner/cli/validate.py
Validating task-070.manifest.json
✅ Validation PASSED
# Watch all manifests - continuous validation across codebase
$ maid validate --watch-all
Multi-manifest watch mode enabled for 55 manifest(s)
Watching 127 file(s)
Press Ctrl+C to stop.
# Skip test execution (validation only)
$ maid validate manifests/task-070.manifest.json --watch --skip-tests
2. Snapshot Generation
# Generate snapshot manifest from existing code
maid snapshot <file_path> [options]
# Options:
# --output-dir DIR # Default: manifests/
# --force # Overwrite without prompting
# Exit Codes:
# 0 = Snapshot created
# 1 = Error
Example:
$ maid snapshot maid_runner/validators/manifest_validator.py --force
Snapshot manifest generated successfully: manifests/task-009-snapshot-manifest_validator.manifest.json
3. System-Wide Snapshot
# Generate system-wide manifest aggregating all active manifests
maid snapshot-system [options]
# Options:
# --output FILE # Default: system.manifest.json
# --manifest-dir DIR # Default: manifests/
# --quiet, -q # Suppress informational output
# Exit Codes:
# 0 = Snapshot created
# 1 = Error
Example:
$ maid snapshot-system --output system.manifest.json
Discovered 48 active manifests (excluding 12 superseded)
Aggregated 16 files with artifacts
Deduplicated 54 validation commands
System manifest generated: system.manifest.json
Use Cases:
- Knowledge Graph Construction: Aggregate all artifacts for system-wide analysis
- Documentation Generation: Create comprehensive artifact catalog
- Migration Support: Generate baseline snapshot when adopting MAID for existing projects
- System Validation: Validate that generated system manifest is schema-compliant
4. List Manifests by File
# List all manifests that reference a file
maid manifests <file_path> [options]
# Options:
# --manifest-dir DIR # Default: manifests/
# --quiet, -q # Show minimal output (just manifest names)
# Exit Codes:
# 0 = Success (found or not found)
Examples:
# Find which manifests reference a file
$ maid manifests maid_runner/cli/main.py
Manifests referencing: maid_runner/cli/main.py
Total: 2 manifest(s)
================================================================================
✏️ EDITED BY (2 manifest(s)):
- task-021-maid-test-command.manifest.json
- task-029-list-manifests-command.manifest.json
================================================================================
# Quiet mode for scripting
$ maid manifests maid_runner/validators/manifest_validator.py --quiet
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json
Use Cases:
- Dependency Analysis: Find which tasks touched a file
- Impact Assessment: Understand file's role in the project (created vs edited vs read)
- Manifest Discovery: Quickly locate relevant manifests when investigating code
- Audit Trail: See the complete history of changes to a file through manifests
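The lookup behind `maid manifests` can be approximated by scanning each manifest's file lists. A self-contained sketch assuming the manifest fields shown elsewhere in this README (not the actual implementation):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def manifests_referencing(target: str, manifest_dir: Path) -> dict:
    """Group manifest filenames by how they reference the target file."""
    out = {"created": [], "edited": [], "read": []}
    for mf in sorted(manifest_dir.glob("*.manifest.json")):
        data = json.loads(mf.read_text())
        if target in data.get("creatableFiles", []):
            out["created"].append(mf.name)
        if target in data.get("editableFiles", []):
            out["edited"].append(mf.name)
        if target in data.get("readonlyFiles", []):
            out["read"].append(mf.name)
    return out

with TemporaryDirectory() as d:
    mdir = Path(d)
    (mdir / "task-001.manifest.json").write_text(
        json.dumps({"creatableFiles": ["app/main.py"]}))
    (mdir / "task-002.manifest.json").write_text(
        json.dumps({"editableFiles": ["app/main.py"]}))
    print(manifests_referencing("app/main.py", mdir))
    # {'created': ['task-001.manifest.json'], 'edited': ['task-002.manifest.json'], 'read': []}
```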
5. Run Validation Commands with Watch Mode
# Run validation commands from manifests
maid test [options]
# Options:
# --manifest-dir DIR # Default: manifests/
# --manifest PATH, -m PATH # Run single manifest only
# --fail-fast # Stop on first failure
# --verbose, -v # Show detailed output
# --quiet, -q # Show minimal output
# --timeout SECONDS # Command timeout (default: 300)
# --watch, -w # Watch mode for single manifest (requires --manifest)
# --watch-all # Watch all manifests and run affected tests on changes
# Exit Codes:
# 0 = All validation commands passed
# 1 = One or more validation commands failed
Important: The maid test command automatically excludes superseded manifests. Only active (non-superseded) manifests have their validationCommand executed. Superseded manifests serve as historical documentation only; their tests will not run.
Examples:
# Run all validation commands from all active manifests
$ maid test
task-007-type-definitions-module.manifest.json: Running 1 validation command(s)
[1/1] pytest tests/test_task_007_type_definitions_module.py -v
✅ PASSED
...
Summary: 69/69 validation commands passed (100.0%)
# Run validation commands from a single manifest
$ maid test --manifest task-063-multi-manifest-watch-mode.manifest.json
task-063-multi-manifest-watch-mode.manifest.json: Running 1 validation command(s)
[1/1] pytest tests/test_task_063_multi_manifest_watch_mode.py -v
✅ PASSED
# Watch mode for single manifest (re-run on file changes)
$ maid test --manifest task-063.manifest.json --watch
Watch mode enabled. Press Ctrl+C to stop.
Watching 2 file(s) from manifest
Running initial validation...
✅ PASSED
# File change detected automatically re-runs tests...
Detected change in maid_runner/cli/test.py
Re-running validation...
✅ PASSED
# Watch all manifests (multi-manifest watch mode)
$ maid test --watch-all
Multi-manifest watch mode enabled. Press Ctrl+C to stop.
Watching 67 file(s) across 55 manifest(s)
Running initial validation for all manifests:
...
Summary: 69/69 validation commands passed (100.0%)
# File change detected - only runs affected manifests...
Detected change in maid_runner/cli/test.py
Running validation for task-062-maid-test-watch-mode.manifest.json
✅ PASSED
Running validation for task-063-multi-manifest-watch-mode.manifest.json
✅ PASSED
Watch Mode Features:
- Single-Manifest Watch (--watch --manifest X): Watches files from one manifest
  - Automatically re-runs validation commands when tracked files change
  - Ideal for focused TDD workflow on a specific task
  - Requires the watchdog package: pip install watchdog
- Multi-Manifest Watch (--watch-all): Watches all active manifests
  - Intelligently runs only affected validation commands
  - Maps file changes to manifests that reference them
  - Debounces rapid changes (2-second delay)
  - Perfect for integration testing across multiple tasks
Use Cases:
- TDD Workflow: Keep tests running while developing (--watch --manifest)
- Continuous Validation: Monitor entire codebase for regressions (--watch-all)
- Quick Feedback: Get immediate test results without manual re-runs
- Integration Testing: Verify changes don't break dependent tasks
6. File Tracking Status
# Show file tracking status overview
maid files [options]
# Options:
# --manifest-dir DIR # Default: manifests/
# --quiet, -q # Show counts only
# Exit Codes:
# 0 = Success
Example:
$ maid files
File Tracking Status
🔴 UNDECLARED: 3 files
🟡 REGISTERED: 7 files
✅ TRACKED: 72 files
Total: 82 files
Quick visibility into MAID compliance across your codebase without running full validation.
Optional Human Helper Tools
For manual/interactive use, MAID Runner includes convenience wrappers in examples/maid_runner.py:
# Interactive manifest creation (optional helper)
python examples/maid_runner.py plan --goal "Add user authentication"
# Interactive validation loop (optional helper)
python examples/maid_runner.py run manifests/task-013.manifest.json
These are NOT required for automation. External AI tools should use maid validate directly.
Integration with AI Tools
MAID Runner integrates seamlessly with AI development tools in all three usage modes (see Usage Modes above). The examples below show how to call MAID Runner programmatically from automation scripts, AI agents, or custom tools.
Python Integration Example
import subprocess
import json
from pathlib import Path

def validate_manifest(manifest_path: str) -> dict:
    """Use MAID Runner to validate manifest."""
    result = subprocess.run(
        ["maid", "validate", manifest_path,
         "--use-manifest-chain", "--quiet"],
        capture_output=True,
        text=True,
    )
    return {
        "success": result.returncode == 0,
        "errors": result.stderr if result.returncode != 0 else None,
    }

# AI tool creates manifest
manifest_path = Path("manifests/task-013-email-validation.manifest.json")
manifest_path.write_text(json.dumps({
    "goal": "Add email validation",
    "taskType": "create",
    "creatableFiles": ["validators/email_validator.py"],
    "readonlyFiles": ["tests/test_email_validation.py"],
    "expectedArtifacts": {
        "file": "validators/email_validator.py",
        "contains": [
            {"type": "class", "name": "EmailValidator"},
            {"type": "function", "name": "validate", "class": "EmailValidator"}
        ]
    },
    "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
    # Enhanced format also supported:
    # "validationCommands": [
    #     ["pytest", "tests/test_email_validation.py", "-v"],
    #     ["mypy", "validators/email_validator.py"]
    # ]
}, indent=2))

# AI tool generates tests...
# AI tool implements code...

# Validate with MAID Runner
result = validate_manifest(str(manifest_path))
if result["success"]:
    print("✅ Validation passed - ready to commit")
else:
    print(f"❌ Validation failed: {result['errors']}")
Shell Integration Example
#!/bin/bash
# AI tool workflow script
MANIFEST="manifests/task-013-email-validation.manifest.json"
# AI creates manifest (not MAID Runner's job)
cat > $MANIFEST <<EOF
{
"goal": "Add email validation",
"taskType": "create",
"creatableFiles": ["validators/email_validator.py"],
"readonlyFiles": ["tests/test_email_validation.py"],
"expectedArtifacts": {...},
"validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
EOF
# AI generates tests...
# AI implements code...
# Validate with MAID Runner
if maid validate $MANIFEST --use-manifest-chain --quiet; then
    echo "✅ Validation passed"
    exit 0
else
    echo "❌ Validation failed"
    exit 1
fi
What MAID Runner Validates
| Validation Type | What It Checks | Command |
|---|---|---|
| Schema | Manifest JSON structure | maid validate |
| Behavioral Tests | Tests USE declared artifacts | maid validate --validation-mode behavioral |
| Implementation | Code DEFINES declared artifacts | maid validate (default) |
| Type Hints | Type annotations match manifest | maid validate (automatic) |
| Manifest Chain | Historical consistency | maid validate --use-manifest-chain |
| File References | Which manifests touch a file | maid manifests <file_path> |
Development Setup
This project uses uv for dependency management.
# Install dependencies
uv sync
# Install development dependencies
uv sync --group dev
# Install package in editable mode (after initial setup)
uv pip install -e .
Manifest Structure
Task manifests define isolated units of work with explicit inputs, outputs, and validation criteria:
{
"goal": "Implement email validation",
"taskType": "create",
"supersedes": [],
"creatableFiles": ["validators/email_validator.py"],
"editableFiles": [],
"readonlyFiles": ["tests/test_email_validation.py"],
"expectedArtifacts": {
"file": "validators/email_validator.py",
"contains": [
{
"type": "class",
"name": "EmailValidator"
},
{
"type": "function",
"name": "validate",
"class": "EmailValidator",
"parameters": [
{"name": "email", "type": "str"}
],
"returns": "bool"
}
]
},
"validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
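Since manifests are plain JSON, tools can assemble them programmatically. A minimal sketch that builds the manifest above and checks a few fields before serializing; the required-field set here is an assumption for illustration, not the official schema:

```python
import json

# Assumed minimal field set (illustrative; the real schema is manifest.schema.json)
REQUIRED = {"goal", "taskType", "expectedArtifacts", "validationCommand"}

def build_manifest(goal: str, path: str, artifacts: list, test_cmd: list) -> str:
    manifest = {
        "goal": goal,
        "taskType": "create",
        "creatableFiles": [path],
        "editableFiles": [],
        "readonlyFiles": [],
        "expectedArtifacts": {"file": path, "contains": artifacts},
        "validationCommand": test_cmd,
    }
    missing = REQUIRED - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing fields: {missing}")
    return json.dumps(manifest, indent=2)

text = build_manifest(
    "Implement email validation",
    "validators/email_validator.py",
    [{"type": "class", "name": "EmailValidator"}],
    ["pytest", "tests/test_email_validation.py", "-v"],
)
print(json.loads(text)["taskType"])  # create
```

For real use, run `maid validate` on the written file; the sketch above only checks its own assumed key list.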
Validation Modes
Strict Mode (creatableFiles):
- Implementation must EXACTLY match expectedArtifacts
- No extra public artifacts allowed
- Perfect for new files
Permissive Mode (editableFiles):
- Implementation must CONTAIN expectedArtifacts
- Extra public artifacts allowed
- Perfect for editing existing files
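The difference between the two modes reduces to set equality versus subset containment over public artifact names. A simplified sketch of the semantics (real validation also compares parameters, types, and artifact kinds):

```python
def validate_artifacts(declared: set[str], found: set[str], mode: str) -> bool:
    if mode == "strict":       # creatableFiles: exact match, no extras
        return declared == found
    if mode == "permissive":   # editableFiles: must contain, extras allowed
        return declared <= found
    raise ValueError(f"unknown mode: {mode}")

declared = {"EmailValidator", "EmailValidator.validate"}
found = {"EmailValidator", "EmailValidator.validate", "helper"}
print(validate_artifacts(declared, found, "strict"))      # False (extra 'helper')
print(validate_artifacts(declared, found, "permissive"))  # True
```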
Supported Artifact Types
Common (Python & TypeScript)
- Classes: {"type": "class", "name": "ClassName", "bases": ["BaseClass"]}
- Functions: {"type": "function", "name": "function_name", "parameters": [...]}
- Methods: {"type": "function", "name": "method_name", "class": "ParentClass", "parameters": [...]}
- Attributes: {"type": "attribute", "name": "attr_name", "class": "ParentClass"}
TypeScript-Specific
- Interfaces: {"type": "interface", "name": "InterfaceName"}
- Type Aliases: {"type": "type", "name": "TypeName"}
- Enums: {"type": "enum", "name": "EnumName"}
- Namespaces: {"type": "namespace", "name": "NamespaceName"}
MAID Methodology
This project implements the MAID (Manifest-driven AI Development) methodology, which promotes:
- Explicitness over Implicitness: All AI agent context is explicitly defined
- Extreme Isolation: Tasks are isolated from the wider codebase during creation
- Test-Driven Validation: The manifest is the primary contract; tests support implementation
- Directed Dependency: One-way dependency flow following Clean Architecture
- Verifiable Chronology: Current state results from sequential manifest application
For detailed methodology documentation, see docs/maid_specs.md.
Development Workflow
This workflow applies to all usage modes; the phases remain the same regardless of who performs them.
Phase 1: Goal Definition
Define the high-level feature or bug fix.
Phase 2: Planning Loop
- Create manifest (JSON file defining the task)
- Create behavioral tests (tests that USE the expected artifacts)
- Validate structure: maid validate <manifest> --validation-mode behavioral
- Iterate until structural validation passes
- Commit manifest and tests
Phase 3: Implementation Loop
- Implement code (create/modify files per manifest)
- Validate implementation: maid validate <manifest> --use-manifest-chain
- Run tests: Execute validationCommand from manifest
- Iterate until all tests pass
- Commit implementation
Phase 4: Integration
Verify complete chain: All manifests validate successfully.
Testing
# Run all tests
uv run python -m pytest tests/ -v
# Run validation tests
uv run python -m pytest tests/test_manifest_to_implementation_alignment.py -v
# Run specific task tests
uv run python -m pytest tests/test_task_011_implementation_loop_controller.py -v
Code Quality
# Format code
make format # or: uv run black .
# Lint code
make lint # or: uv run ruff check .
# Type check
make type-check
Git Pre-Commit Hooks
MAID Runner includes pre-commit hooks to automatically validate code quality and MAID compliance before each commit.
Installation
# Install pre-commit framework (already in dev dependencies)
uv sync --group dev
# Install git hooks
pre-commit install
Note: If you have a global git hooks path configured (e.g., core.hooksPath), you may see an error. In that case, integrate pre-commit into your global hooks script or run it manually:
# Run manually before commits
pre-commit run
# Or add to your global git hooks script:
# if [ -f .pre-commit-config.yaml ]; then
# pre-commit run
# fi
What the Hooks Check
On every commit, the following checks run automatically:
- Code Formatting (black) - Ensures consistent code style
- Code Linting (ruff) - Catches common errors and style issues
- MAID Validation (maid validate) - Validates all active manifests
- MAID Tests (maid test) - Runs validation commands from manifests
- Claude Files Sync (make sync-claude) - Syncs .claude/ files when modified (smart detection)
Bypassing Hooks
In exceptional cases, you can bypass hooks with:
git commit --no-verify
Note: Use sparingly. Hooks exist to prevent MAID violations and code quality issues from being committed.
Manual Hook Execution
You can run hooks manually without committing:
# Run all hooks on staged files
pre-commit run
# Run all hooks on all files
pre-commit run --all-files
# Run specific hook
pre-commit run black --all-files
Project Structure
maid-runner/
├── docs/                           # Documentation and specifications
├── manifests/                      # Task manifest files (chronological)
├── tests/                          # Test suite
├── maid_runner/                    # Main package
│   ├── __init__.py                 # Package exports
│   ├── __version__.py              # Version information
│   ├── cli/                        # CLI modules
│   │   ├── main.py                 # Main CLI entry point (maid command)
│   │   ├── validate.py             # Validate subcommand (with watch mode)
│   │   ├── snapshot.py             # Snapshot subcommand
│   │   ├── list_manifests.py       # Manifests subcommand
│   │   ├── files.py                # Files subcommand (tracking status)
│   │   └── test.py                 # Test subcommand (with watch mode)
│   └── validators/                 # Core validation logic
│       ├── manifest_validator.py   # Main validation engine
│       ├── base_validator.py       # Abstract validator interface
│       ├── python_validator.py     # Python AST validator
│       ├── typescript_validator.py # TypeScript/JavaScript validator
│       ├── type_validator.py       # Type hint validation
│       ├── file_tracker.py         # File tracking analysis
│       └── schemas/                # JSON schemas
├── examples/                       # Example scripts
│   └── maid_runner.py              # Optional helpers (plan/run)
└── .claude/                        # Claude Code configuration
Core Components
- Manifest Validator (validators/manifest_validator.py) - Schema and AST-based validation engine
- Python Validator (validators/python_validator.py) - Python AST-based artifact detection
- TypeScript Validator (validators/typescript_validator.py) - tree-sitter-based TypeScript/JavaScript validation
- Type Validator (validators/type_validator.py) - Type hint validation
- Manifest Schema (validators/schemas/manifest.schema.json) - JSON schema defining manifest structure
- Task Manifests (manifests/) - Chronologically ordered task definitions
FAQs
Why is there no "snapshot all files" command?
MAID is designed for incremental adoption, not mass conversion. A bulk snapshot command would:
Performance issues:
- Create thousands of manifest files (e.g., 1,317 manifests for 1,317 Python files)
- Severely degrade all MAID operations (maid validate scans all manifests)
- Generate massive git history noise
Philosophy mismatch:
- Files without manifests = files not yet touched under MAID (intentional)
- Manifests should document actual development work, not create artificial coverage
- Violates MAID's explicitness and isolation principles
How to snapshot multiple files:
# Snapshot files incrementally as you work on them
maid snapshot path/to/file.py
# Batch snapshot a specific directory if needed
for file in src/module_to_onboard/*.py; do
maid snapshot "$file" --force
done
# Discover which files lack manifests
maid validate # File tracking analysis shows undeclared files
The file tracking analysis (via maid validate) identifies undeclared files without creating manifests, supporting gradual MAID adoption.
Requirements
- Python 3.10+
- Dependencies managed via uv
- Core dependencies: jsonschema, pytest, tree-sitter, tree-sitter-typescript
- Development dependencies: black, ruff, mypy
Exit Codes for Automation
All validation commands use standard exit codes:
- 0 = Success (validation passed)
- 1 = Failure (validation failed or error occurred)
Use --quiet flag to suppress success messages for clean automation.
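Automation scripts can branch on the exit code directly. A runnable sketch using stand-in commands in place of `maid validate` (substitute the real command in practice):

```python
import subprocess
import sys

def run_validation(cmd: list[str]) -> bool:
    """Return True when the command exits 0, per the 0/1 contract above."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0

# Stand-in commands that mimic the 0/1 exit-code contract:
ok = run_validation([sys.executable, "-c", "raise SystemExit(0)"])
bad = run_validation([sys.executable, "-c", "raise SystemExit(1)"])
print(ok, bad)  # True False
```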
Contributing
This project dogfoods the MAID methodology. All changes must:
- Have a manifest in manifests/
- Have behavioral tests in tests/
- Pass structural validation
- Pass behavioral tests
See CLAUDE.md for development guidelines.
License
This project implements the MAID methodology for research and development purposes.