
MAID Runner


A tool-agnostic validation framework for the Manifest-driven AI Development (MAID) methodology. MAID Runner validates that code artifacts align with their declarative manifests, ensuring architectural integrity in AI-assisted development.

Introduction Video

📹 Watch the introductory video to learn about MAID Runner and the MAID methodology.

Conceptual Framework: Structural Determinism in Generative AI

1. The Core Problem: Probabilistic Entropy

Current Large Language Models (LLMs) function on Probabilistic Generation. They predict the next token based on statistical likelihood, optimizing for "plausibility" rather than correctness or architectural soundness.

  • The Consequence: Without intervention, this stochastic nature inevitably leads to "AI Slop": code that is syntactically valid but architecturally chaotic (introducing circular dependencies, hallucinated methods, and violating SOLID principles).
  • The Gap: Standard validation methods (Unit Tests) only check behavior, leaving the structure vulnerable to entropy.

2. The Solution: Dual-Constraint Validation

MAID Runner introduces a Governance Layer that enforces a "Double-Coordinate Target" for accepted code. To be valid, generation must satisfy two distinct axes simultaneously:

  • Coordinate A (Behavioral): The code must pass the Test Suite (Functional Correctness).
  • Coordinate B (Structural): The code must strictly adhere to a pre-designed JSON Manifest (Topological Correctness).
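Conceptually, acceptance is a simple conjunction of the two coordinates. The sketch below illustrates the idea only; the helper callables are hypothetical stand-ins, not MAID Runner's API:

```python
from typing import Callable

def accept(code: str,
           run_tests: Callable[[str], bool],
           matches_manifest: Callable[[str], bool]) -> bool:
    """Accept generated code only when BOTH axes hold."""
    behavioral_ok = run_tests(code)          # Coordinate A: test suite passes
    structural_ok = matches_manifest(code)   # Coordinate B: manifest alignment
    return behavioral_ok and structural_ok

# A generation that passes its tests but drifts from the manifest is rejected.
print(accept("def f(): ...", lambda c: True, lambda c: False))  # False
```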

3. Methodology: Structural Determinism

The framework applies Structural Determinism to Probabilistic Generation.

  • Search Space Restriction: By treating the software architecture as an immutable constant (via the Manifest) rather than a variable, MAID Runner drastically reduces the AI's "search space."
  • The Mechanism: The AI is forced to "fill in the blanks" of a valid design rather than guessing the design itself. This ensures that even if the AI's internal logic varies, the external contract and dependency graph remain deterministic.

4. The Paradigm Shift: AI as a "Stochastic Compiler"

MAID Runner redefines the operational role of the AI Agent:

  • From "Junior Developer": A creative entity that requires reactive, human-in-the-loop code review to catch errors.
  • To "Stochastic Compiler": A constrained engine that translates a rigid specification (The Manifest) into implementation details.

This shifts the developer's primary activity from Prompt Engineering (persuading the AI via natural language) to Spec Engineering (defining the precise architectural boundaries the AI must respect).

5. Architectural Objective: The "Last Mile" of Reliability

By enforcing architectural topology before execution, MAID Runner solves the "Last Mile" problem of autonomous coding. It decouples Speed of Generation from Quality of Architecture, ensuring that rapid iteration does not result in technical debt.

Supported Languages

MAID Runner supports multi-language validation with production-ready parsers:

Python

  • Extensions: .py
  • Parser: Python AST (built-in)
  • Features: Classes, functions, methods, attributes, type hints, async/await, decorators

TypeScript/JavaScript

  • Extensions: .ts, .tsx, .js, .jsx
  • Parser: tree-sitter (production-grade)
  • Features: Classes, interfaces, type aliases, enums, namespaces, functions, methods, decorators, generics, JSX/TSX
  • Framework Support: Angular, React, NestJS, Vue
  • Coverage: 99.9% of TypeScript language constructs

All validation features (behavioral tests, implementation validation, snapshot generation, test stub generation) work seamlessly across both languages.

Architecture Philosophy

MAID Runner is a validation-first tool. Its core purpose is to validate that manifests, tests, and implementations comply with MAID methodology. It also provides helper commands to generate manifest snapshots and test stubs from existing code, but does not generate production code or automate the development workflow itself.

MAID Runner works with any development approach, from fully manual to fully automated. See Usage Modes for details.

┌──────────────────────────────────────┐
│   External Tools (Your Choice)       │
│   - Claude Code / Aider / Cursor     │
│   - Custom AI agents                 │
│   - Manual (human developers)        │
│                                      │
│   Responsibilities:                  │
│   ✓ Create manifests                 │
│   ✓ Generate behavioral tests        │
│   ✓ Implement code                   │
│   ✓ Orchestrate workflow             │
└──────────────────────────────────────┘
              │
              │ Creates files
              ▼
┌──────────────────────────────────────┐
│   MAID Runner (Validation-First)     │
│                                      │
│   Core Responsibilities:             │
│   ✓ Validate manifest schema         │
│   ✓ Validate behavioral tests        │
│   ✓ Validate implementation          │
│   ✓ Validate type hints              │
│   ✓ Validate manifest chain          │
│   ✓ Track file compliance            │
│                                      │
│   Helper Capabilities:               │
│   ✓ Generate manifest snapshots      │
│   ✓ Generate test stubs              │
│                                      │
│   Boundaries:                        │
│   ✗ No production code generation    │
│   ✗ No workflow automation           │
└──────────────────────────────────────┘

Usage Modes

MAID Runner supports three development approaches, differing only in who creates the files:

1. Manual Development

  • Humans write manifests, tests, and implementation
  • MAID Runner validates compliance at each step
  • Best for: Learning MAID, small teams, strict oversight requirements

2. Interactive AI-Assisted

  • AI tools suggest code, humans review and approve
  • MAID Runner validates during collaboration
  • Tools: Claude Code CLI, Cursor, Aider, GitHub Copilot (MCP server coming soon)
  • Best for: Faster iteration with human control

3. Fully Automated

  • AI agents orchestrate entire workflow with human review checkpoints
  • MAID Runner validates automatically
  • Tools: Claude Code CLI (headless mode), custom AI agents, MAID Agents framework
  • Best for: Large-scale development, established MAID practices

In all modes, MAID Runner provides identical validation. The workflow (manifest → tests → implementation → validation) remains the same regardless of who performs each step.

Installation

Claude Code Plugin (Recommended for Claude Code Users)

For Claude Code users, install MAID Runner via the plugin marketplace:

# First, add the plugin marketplace
/plugin marketplace add aidrivencoder/claude-plugins

# Then install MAID Runner
/plugin install maid-runner@aidrivencoder

The plugin auto-installs the maid-runner PyPI package on session start and provides MAID workflow commands, specialized agents, and on-demand methodology documentation; no manual initialization is required.

See the Claude Code Plugin documentation for details.

From PyPI (Standalone Usage)

For non-Claude Code environments, install MAID Runner from PyPI:

# Using pip
pip install maid-runner

# Using uv (recommended)
uv pip install maid-runner

Local Development (Editable Install)

For local development, clone the repository and install in editable mode:

# Using pip
pip install -e .

# Using uv (recommended)
uv pip install -e .

After installation, the maid command will be available:

# Check version
maid --version

# Get help
maid --help

Updating

# PyPI users: re-run maid init to update Claude files
pip install --upgrade maid-runner
maid init --force  # Updates .claude/ files and CLAUDE.md

# Claude Code plugin users: updates happen automatically

The MAID Ecosystem

MAID Runner provides validation and helper utilities for manifest-driven development. For full workflow automation (planning → testing → implementing → validating), check out:

MAID Agents - Automated orchestration using Claude Code agents. Handles the complete development lifecycle from idea to validated implementation.

How They Work Together

  • MAID Runner (this tool) = Validation layer

    • Validates manifest schemas
    • Validates implementation matches contracts
    • Validates behavioral tests
    • Tool-agnostic (use with any AI tool, IDE, or manually)
  • MAID Agents = Orchestration + execution layer

    • Automates manifest creation
    • Generates behavioral tests
    • Implements code via Claude Code
    • Uses MAID Runner for validation

Most users start with MAID Runner for validation, then add MAID Agents for full automation.

Python API

You can also use MAID Runner as a Python library:

from maid_runner import (
    validate_schema,
    validate_with_ast,
    discover_related_manifests,
    generate_snapshot,
    AlignmentError,
    __version__,
)

# Validate a manifest schema
validate_schema(manifest_data, schema_path)

# Validate implementation against manifest
validate_with_ast(manifest_data, file_path, use_manifest_chain=True)

# Generate snapshot manifest
generate_snapshot("path/to/file.py", output_dir="manifests")

Core CLI Tools (For External Tools)

1. Manifest Validation

# Validate manifest structure and implementation
maid validate <manifest_path> [options]

# Options:
#   --validation-mode {implementation,behavioral}  # Default: implementation
#   --use-manifest-chain                          # Merge related manifests
#   --quiet, -q                                    # Suppress success messages
#   --watch, -w                                    # Watch mode (requires manifest path)
#   --watch-all                                    # Watch all manifests
#   --skip-tests                                   # Skip running validationCommand
#   --timeout SECONDS                              # Command timeout (default: 300)

# Exit Codes:
#   0 = Validation passed
#   1 = Validation failed

Examples:

# Validate implementation matches manifest
$ maid validate manifests/task-013.manifest.json
✓ Validation PASSED

# Validate behavioral tests USE artifacts
$ maid validate manifests/task-013.manifest.json --validation-mode behavioral
✓ Behavioral test validation PASSED

# Full validation with manifest chain (recommended)
$ maid validate manifests/task-013.manifest.json --use-manifest-chain
✓ Validation PASSED

# Quiet mode for automation
$ maid validate manifests/task-013.manifest.json --quiet
# Exit code 0 = success, no output

File Tracking Analysis:

When using --use-manifest-chain in implementation mode, MAID Runner performs automatic file tracking analysis to detect files not properly tracked in manifests:

$ maid validate manifests/task-013.manifest.json --use-manifest-chain

✓ Validation PASSED

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
FILE TRACKING ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔴 UNDECLARED FILES (3 files)
  Files exist in codebase but are not tracked in any manifest

  - scripts/helper.py
    → Not found in any manifest

  Action: Add these files to creatableFiles or editableFiles

🟡 REGISTERED FILES (5 files)
  Files are tracked but not fully MAID-compliant

  - utils/config.py
    ⚠️  In editableFiles but no expectedArtifacts
    Manifests: task-010

  Action: Add expectedArtifacts and validationCommand

✓ TRACKED (42 files)
  All other source files are fully MAID-compliant

Summary: 3 UNDECLARED, 5 REGISTERED, 42 TRACKED

File Status Levels:

  • 🔴 UNDECLARED: Files not in any manifest (high priority) - no audit trail
  • 🟡 REGISTERED: Files tracked but with incomplete compliance (medium priority) - missing artifacts/tests
  • ✓ TRACKED: Files with full MAID compliance - properly documented and tested

This progressive compliance system helps teams migrate existing codebases to MAID while clearly identifying accountability gaps.
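The three status levels follow a simple decision rule. This sketch mirrors the description above; the manifest data shapes are assumptions for illustration, not MAID Runner's internal representation:

```python
def file_status(path, manifests):
    """Classify a file as UNDECLARED, REGISTERED, or TRACKED.

    `manifests` is a list of dicts with (assumed) keys
    'creatableFiles', 'editableFiles', and 'expectedArtifacts'.
    """
    referencing = [m for m in manifests
                   if path in m.get("creatableFiles", [])
                   or path in m.get("editableFiles", [])]
    if not referencing:
        return "UNDECLARED"  # no audit trail at all
    if any(m.get("expectedArtifacts") for m in referencing):
        return "TRACKED"     # fully MAID-compliant
    return "REGISTERED"      # tracked, but compliance is incomplete

manifests = [{"editableFiles": ["utils/config.py"]}]
print(file_status("scripts/helper.py", manifests))  # UNDECLARED
print(file_status("utils/config.py", manifests))    # REGISTERED
```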

Watch Mode:

# Watch single manifest - re-run validation on file changes
$ maid validate manifests/task-070.manifest.json --watch
๐Ÿ‘๏ธ  Watch mode enabled for: task-070.manifest.json
๐Ÿ‘€ Watching 3 file(s) + manifest
Press Ctrl+C to stop.

๐Ÿ“‹ Running initial validation:
โœ“ Validation PASSED

๐Ÿ”” Detected change in maid_runner/cli/validate.py
๐Ÿ“‹ Validating task-070.manifest.json
โœ“ Validation PASSED

# Watch all manifests - continuous validation across codebase
$ maid validate --watch-all
๐Ÿ‘๏ธ  Multi-manifest watch mode enabled for 55 manifest(s)
๐Ÿ‘€ Watching 127 file(s)
Press Ctrl+C to stop.

# Skip test execution (validation only)
$ maid validate manifests/task-070.manifest.json --watch --skip-tests

2. Snapshot Generation

# Generate snapshot manifest from existing code
maid snapshot <file_path> [options]

# Options:
#   --output-dir DIR    # Default: manifests/
#   --force            # Overwrite without prompting

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot maid_runner/validators/manifest_validator.py --force
Snapshot manifest generated successfully: manifests/task-009-snapshot-manifest_validator.manifest.json

3. System-Wide Snapshot

# Generate system-wide manifest aggregating all active manifests
maid snapshot-system [options]

# Options:
#   --output FILE           # Default: system.manifest.json
#   --manifest-dir DIR      # Default: manifests/
#   --quiet, -q            # Suppress informational output

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot-system --output system.manifest.json
Discovered 48 active manifests (excluding 12 superseded)
Aggregated 16 files with artifacts
Deduplicated 54 validation commands

System manifest generated: system.manifest.json

Use Cases:

  • Knowledge Graph Construction: Aggregate all artifacts for system-wide analysis
  • Documentation Generation: Create comprehensive artifact catalog
  • Migration Support: Generate baseline snapshot when adopting MAID for existing projects
  • System Validation: Validate that generated system manifest is schema-compliant

4. List Manifests by File

# List all manifests that reference a file
maid manifests <file_path> [options]

# Options:
#   --manifest-dir DIR  # Default: manifests/
#   --quiet, -q         # Show minimal output (just manifest names)

# Exit Codes:
#   0 = Success (found or not found)

Examples:

# Find which manifests reference a file
$ maid manifests maid_runner/cli/main.py

Manifests referencing: maid_runner/cli/main.py
Total: 2 manifest(s)

================================================================================

โœ๏ธ  EDITED BY (2 manifest(s)):
  - task-021-maid-test-command.manifest.json
  - task-029-list-manifests-command.manifest.json

================================================================================

# Quiet mode for scripting
$ maid manifests maid_runner/validators/manifest_validator.py --quiet
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json

Use Cases:

  • Dependency Analysis: Find which tasks touched a file
  • Impact Assessment: Understand file's role in the project (created vs edited vs read)
  • Manifest Discovery: Quickly locate relevant manifests when investigating code
  • Audit Trail: See the complete history of changes to a file through manifests
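The quiet format is easy to consume from scripts. A small parsing sketch, using the sample lines shown above:

```python
quiet_output = """\
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json
"""

def parse_manifest_refs(text: str) -> dict[str, list[str]]:
    """Group 'relation: manifest' lines by relation (created/edited/read)."""
    refs: dict[str, list[str]] = {}
    for line in text.splitlines():
        relation, _, manifest = line.partition(": ")
        if manifest:
            refs.setdefault(relation, []).append(manifest)
    return refs

print(parse_manifest_refs(quiet_output)["created"])
# ['task-001-add-schema-validation.manifest.json']
```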

5. Run Validation Commands with Watch Mode

# Run validation commands from manifests
maid test [options]

# Options:
#   --manifest-dir DIR       # Default: manifests/
#   --manifest PATH, -m PATH # Run single manifest only
#   --fail-fast              # Stop on first failure
#   --verbose, -v            # Show detailed output
#   --quiet, -q              # Show minimal output
#   --timeout SECONDS        # Command timeout (default: 300)
#   --watch, -w              # Watch mode for single manifest (requires --manifest)
#   --watch-all              # Watch all manifests and run affected tests on changes

# Exit Codes:
#   0 = All validation commands passed
#   1 = One or more validation commands failed

Important: The maid test command automatically excludes superseded manifests. Only active (non-superseded) manifests have their validationCommand executed. Superseded manifests serve as historical documentation only; their tests will not run.
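The active set can be computed by excluding every manifest named in another manifest's supersedes array. A sketch of that filtering logic, assuming each manifest is parsed JSON keyed by filename (not MAID Runner's actual implementation):

```python
def active_manifests(manifests: dict[str, dict]) -> list[str]:
    """Return names of manifests not superseded by any other manifest."""
    superseded = {name
                  for m in manifests.values()
                  for name in m.get("supersedes", [])}
    return [name for name in manifests if name not in superseded]

manifests = {
    "task-001.manifest.json": {"supersedes": []},
    "task-002.manifest.json": {"supersedes": ["task-001.manifest.json"]},
}
print(active_manifests(manifests))  # ['task-002.manifest.json']
```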

Examples:

# Run all validation commands from all active manifests
$ maid test
📋 task-007-type-definitions-module.manifest.json: Running 1 validation command(s)
  [1/1] pytest tests/test_task_007_type_definitions_module.py -v
    ✅ PASSED
...
📊 Summary: 69/69 validation commands passed (100.0%)

# Run validation commands from a single manifest
$ maid test --manifest task-063-multi-manifest-watch-mode.manifest.json
📋 task-063-multi-manifest-watch-mode.manifest.json: Running 1 validation command(s)
  [1/1] pytest tests/test_task_063_multi_manifest_watch_mode.py -v
    ✅ PASSED

# Watch mode for single manifest (re-run on file changes)
$ maid test --manifest task-063.manifest.json --watch
👁️  Watch mode enabled. Press Ctrl+C to stop.
👀 Watching 2 file(s) from manifest

📋 Running initial validation...
  ✅ PASSED

# File change detected automatically re-runs tests...
🔔 Detected change in maid_runner/cli/test.py
📋 Re-running validation...
  ✅ PASSED

# Watch all manifests (multi-manifest watch mode)
$ maid test --watch-all
👁️  Multi-manifest watch mode enabled. Press Ctrl+C to stop.
👀 Watching 67 file(s) across 55 manifest(s)

📋 Running initial validation for all manifests:
...
📊 Summary: 69/69 validation commands passed (100.0%)

# File change detected - only runs affected manifests...
🔔 Detected change in maid_runner/cli/test.py
📋 Running validation for task-062-maid-test-watch-mode.manifest.json
  ✅ PASSED
📋 Running validation for task-063-multi-manifest-watch-mode.manifest.json
  ✅ PASSED

Watch Mode Features:

  • Single-Manifest Watch (--watch --manifest X): Watches files from one manifest

    • Automatically re-runs validation commands when tracked files change
    • Ideal for focused TDD workflow on a specific task
    • Requires watchdog package: pip install watchdog
  • Multi-Manifest Watch (--watch-all): Watches all active manifests

    • Intelligently runs only affected validation commands
    • Maps file changes to manifests that reference them
    • Debounces rapid changes (2-second delay)
    • Perfect for integration testing across multiple tasks

Use Cases:

  • TDD Workflow: Keep tests running while developing (--watch --manifest)
  • Continuous Validation: Monitor entire codebase for regressions (--watch-all)
  • Quick Feedback: Get immediate test results without manual re-runs
  • Integration Testing: Verify changes don't break dependent tasks

6. File Tracking Status

# Show file tracking status overview
maid files [options]

# Options:
#   --manifest-dir DIR  # Default: manifests/
#   --quiet, -q         # Show counts only

# Exit Codes:
#   0 = Success

Example:

$ maid files
📊 File Tracking Status

🔴 UNDECLARED: 3 files
🟡 REGISTERED: 7 files
✓  TRACKED: 72 files

Total: 82 files

Quick visibility into MAID compliance across your codebase without running full validation.

Optional Human Helper Tools

For manual/interactive use, MAID Runner includes convenience wrappers in examples/maid_runner.py:

# Interactive manifest creation (optional helper)
python examples/maid_runner.py plan --goal "Add user authentication"

# Interactive validation loop (optional helper)
python examples/maid_runner.py run manifests/task-013.manifest.json

These are NOT required for automation. External AI tools should use maid validate directly.

Integration with AI Tools

MAID Runner integrates seamlessly with AI development tools in all three usage modes (see Usage Modes above). The examples below show how to programmatically call MAID Runner from automation scripts, AI agents, or custom tools.

Python Integration Example

import subprocess
import json
from pathlib import Path

def validate_manifest(manifest_path: str) -> dict:
    """Use MAID Runner to validate manifest."""
    result = subprocess.run(
        ["maid", "validate", manifest_path,
         "--use-manifest-chain", "--quiet"],
        capture_output=True,
        text=True
    )

    return {
        "success": result.returncode == 0,
        "errors": result.stderr if result.returncode != 0 else None
    }

# AI tool creates manifest
manifest_path = Path("manifests/task-013-email-validation.manifest.json")
manifest_path.write_text(json.dumps({
    "goal": "Add email validation",
    "taskType": "create",
    "creatableFiles": ["validators/email_validator.py"],
    "readonlyFiles": ["tests/test_email_validation.py"],
    "expectedArtifacts": {
        "file": "validators/email_validator.py",
        "contains": [
            {"type": "class", "name": "EmailValidator"},
            {"type": "function", "name": "validate", "class": "EmailValidator"}
        ]
    },
    "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
    # Enhanced format also supported:
    # "validationCommands": [
    #   ["pytest", "tests/test_email_validation.py", "-v"],
    #   ["mypy", "validators/email_validator.py"]
    # ]
}, indent=2))

# AI tool generates tests...
# AI tool implements code...

# Validate with MAID Runner
result = validate_manifest(str(manifest_path))
if result["success"]:
    print("✓ Validation passed - ready to commit")
else:
    print(f"✗ Validation failed: {result['errors']}")

Shell Integration Example

#!/bin/bash
# AI tool workflow script

MANIFEST="manifests/task-013-email-validation.manifest.json"

# AI creates manifest (not MAID Runner's job)
cat > "$MANIFEST" <<EOF
{
  "goal": "Add email validation",
  "taskType": "create",
  "creatableFiles": ["validators/email_validator.py"],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {...},
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
EOF

# AI generates tests...
# AI implements code...

# Validate with MAID Runner
if maid validate "$MANIFEST" --use-manifest-chain --quiet; then
    echo "✓ Validation passed"
    exit 0
else
    echo "✗ Validation failed"
    exit 1
fi

What MAID Runner Validates

Validation Type    What It Checks                    Command
Schema             Manifest JSON structure           maid validate
Behavioral Tests   Tests USE declared artifacts      maid validate --validation-mode behavioral
Implementation     Code DEFINES declared artifacts   maid validate (default)
Type Hints         Type annotations match manifest   maid validate (automatic)
Manifest Chain     Historical consistency            maid validate --use-manifest-chain
File References    Which manifests touch a file      maid manifests <file_path>

Development Setup

This project uses uv for dependency management.

# Install dependencies
uv sync

# Install development dependencies
uv sync --group dev

# Install package in editable mode (after initial setup)
uv pip install -e .

Manifest Structure

Task manifests define isolated units of work with explicit inputs, outputs, and validation criteria:

{
  "goal": "Implement email validation",
  "taskType": "create",
  "supersedes": [],
  "creatableFiles": ["validators/email_validator.py"],
  "editableFiles": [],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {
    "file": "validators/email_validator.py",
    "contains": [
      {
        "type": "class",
        "name": "EmailValidator"
      },
      {
        "type": "function",
        "name": "validate",
        "class": "EmailValidator",
        "parameters": [
          {"name": "email", "type": "str"}
        ],
        "returns": "bool"
      }
    ]
  },
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
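Before running full schema validation, a lightweight sanity check can catch missing top-level fields. The field names below are taken from the example manifest; treating exactly these keys as required is an assumption, and the authoritative list lives in the JSON schema, not here:

```python
import json

# Assumed required keys, drawn from the example manifest above.
REQUIRED_KEYS = {"goal", "taskType", "expectedArtifacts", "validationCommand"}

def missing_keys(manifest_text: str) -> set[str]:
    """Return required top-level keys absent from a manifest JSON string."""
    manifest = json.loads(manifest_text)
    return REQUIRED_KEYS - manifest.keys()

# Reports the two keys this partial manifest is missing.
print(missing_keys('{"goal": "Implement email validation", "taskType": "create"}'))
```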

Validation Modes

Strict Mode (creatableFiles):

  • Implementation must EXACTLY match expectedArtifacts
  • No extra public artifacts allowed
  • Perfect for new files

Permissive Mode (editableFiles):

  • Implementation must CONTAIN expectedArtifacts
  • Extra public artifacts allowed
  • Perfect for editing existing files
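The distinction between the two modes reduces to set equality versus set inclusion over public artifact names. An illustrative sketch, with artifact extraction stubbed out as plain sets:

```python
def structurally_valid(found: set[str], expected: set[str], strict: bool) -> bool:
    """Strict mode: exact match. Permissive mode: expected must be a subset."""
    return found == expected if strict else expected <= found

expected = {"EmailValidator", "EmailValidator.validate"}

# creatableFiles (strict): an extra public artifact fails validation
print(structurally_valid(expected | {"helper"}, expected, strict=True))   # False
# editableFiles (permissive): pre-existing extras are allowed
print(structurally_valid(expected | {"helper"}, expected, strict=False))  # True
```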

Supported Artifact Types

Common (Python & TypeScript)

  • Classes: {"type": "class", "name": "ClassName", "bases": ["BaseClass"]}
  • Functions: {"type": "function", "name": "function_name", "parameters": [...]}
  • Methods: {"type": "function", "name": "method_name", "class": "ParentClass", "parameters": [...]}
  • Attributes: {"type": "attribute", "name": "attr_name", "class": "ParentClass"}

TypeScript-Specific

  • Interfaces: {"type": "interface", "name": "InterfaceName"}
  • Type Aliases: {"type": "type", "name": "TypeName"}
  • Enums: {"type": "enum", "name": "EnumName"}
  • Namespaces: {"type": "namespace", "name": "NamespaceName"}

MAID Methodology

This project implements the MAID (Manifest-driven AI Development) methodology, which promotes:

  • Explicitness over Implicitness: All AI agent context is explicitly defined
  • Extreme Isolation: Tasks are isolated from the wider codebase during creation
  • Test-Driven Validation: The manifest is the primary contract; tests support implementation
  • Directed Dependency: One-way dependency flow following Clean Architecture
  • Verifiable Chronology: Current state results from sequential manifest application

For detailed methodology documentation, see docs/maid_specs.md.

Development Workflow

This workflow applies to all usage modes; the phases remain the same regardless of who performs them.

Phase 1: Goal Definition

Define the high-level feature or bug fix.

Phase 2: Planning Loop

  1. Create manifest (JSON file defining the task)
  2. Create behavioral tests (tests that USE the expected artifacts)
  3. Validate structure: maid validate <manifest> --validation-mode behavioral
  4. Iterate until structural validation passes
  5. Commit manifest and tests

Phase 3: Implementation Loop

  1. Implement code (create/modify files per manifest)
  2. Validate implementation: maid validate <manifest> --use-manifest-chain
  3. Run tests: Execute validationCommand from manifest
  4. Iterate until all tests pass
  5. Commit implementation

Phase 4: Integration

Verify complete chain: All manifests validate successfully.

Testing

# Run all tests
uv run python -m pytest tests/ -v

# Run validation tests
uv run python -m pytest tests/test_manifest_to_implementation_alignment.py -v

# Run specific task tests
uv run python -m pytest tests/test_task_011_implementation_loop_controller.py -v

Code Quality

# Format code
make format  # or: uv run black .

# Lint code
make lint    # or: uv run ruff check .

# Type check
make type-check

Git Pre-Commit Hooks

MAID Runner includes pre-commit hooks to automatically validate code quality and MAID compliance before each commit.

Installation

# Install pre-commit framework (already in dev dependencies)
uv sync --group dev

# Install git hooks
pre-commit install

Note: If you have a global git hooks path configured (e.g., core.hooksPath), you may see an error. In that case, integrate pre-commit into your global hooks script or run it manually:

# Run manually before commits
pre-commit run

# Or add to your global git hooks script:
# if [ -f .pre-commit-config.yaml ]; then
#     pre-commit run
# fi

What the Hooks Check

On every commit, the following checks run automatically:

  1. Code Formatting (black) - Ensures consistent code style
  2. Code Linting (ruff) - Catches common errors and style issues
  3. MAID Validation (maid validate) - Validates all active manifests
  4. MAID Tests (maid test) - Runs validation commands from manifests
  5. Claude Files Sync (make sync-claude) - Syncs .claude/ files when modified (smart detection)

Bypassing Hooks

In exceptional cases, you can bypass hooks with:

git commit --no-verify

Note: Use sparingly. Hooks exist to prevent MAID violations and code quality issues from being committed.

Manual Hook Execution

You can run hooks manually without committing:

# Run all hooks on staged files
pre-commit run

# Run all hooks on all files
pre-commit run --all-files

# Run specific hook
pre-commit run black --all-files

Project Structure

maid-runner/
├── docs/                          # Documentation and specifications
├── manifests/                     # Task manifest files (chronological)
├── tests/                         # Test suite
├── maid_runner/                   # Main package
│   ├── __init__.py                # Package exports
│   ├── __version__.py             # Version information
│   ├── cli/                       # CLI modules
│   │   ├── main.py                # Main CLI entry point (maid command)
│   │   ├── validate.py            # Validate subcommand (with watch mode)
│   │   ├── snapshot.py            # Snapshot subcommand
│   │   ├── list_manifests.py      # Manifests subcommand
│   │   ├── files.py               # Files subcommand (tracking status)
│   │   └── test.py                # Test subcommand (with watch mode)
│   └── validators/                # Core validation logic
│       ├── manifest_validator.py  # Main validation engine
│       ├── base_validator.py      # Abstract validator interface
│       ├── python_validator.py    # Python AST validator
│       ├── typescript_validator.py # TypeScript/JavaScript validator
│       ├── type_validator.py      # Type hint validation
│       ├── file_tracker.py        # File tracking analysis
│       └── schemas/               # JSON schemas
├── examples/                      # Example scripts
│   └── maid_runner.py             # Optional helpers (plan/run)
└── .claude/                       # Claude Code configuration

Core Components

  • Manifest Validator (validators/manifest_validator.py) - Schema and AST-based validation engine
  • Python Validator (validators/python_validator.py) - Python AST-based artifact detection
  • TypeScript Validator (validators/typescript_validator.py) - tree-sitter-based TypeScript/JavaScript validation
  • Type Validator (validators/type_validator.py) - Type hint validation
  • Manifest Schema (validators/schemas/manifest.schema.json) - JSON schema defining manifest structure
  • Task Manifests (manifests/) - Chronologically ordered task definitions

FAQs

Why is there no "snapshot all files" command?

MAID is designed for incremental adoption, not mass conversion. A bulk snapshot command would:

Performance issues:

  • Create thousands of manifest files (e.g., 1,317 manifests for 1,317 Python files)
  • Severely degrade all MAID operations (maid validate scans all manifests)
  • Generate massive git history noise

Philosophy mismatch:

  • Files without manifests = files not yet touched under MAID (intentional)
  • Manifests should document actual development work, not create artificial coverage
  • Violates MAID's explicitness and isolation principles

How to snapshot multiple files:

# Snapshot files incrementally as you work on them
maid snapshot path/to/file.py

# Batch snapshot a specific directory if needed
for file in src/module_to_onboard/*.py; do
  maid snapshot "$file" --force
done

# Discover which files lack manifests
maid validate  # File tracking analysis shows undeclared files

The file tracking analysis (via maid validate) identifies undeclared files without creating manifests, supporting gradual MAID adoption.

Requirements

  • Python 3.10+
  • Dependencies managed via uv
  • Core dependencies: jsonschema, pytest, tree-sitter, tree-sitter-typescript
  • Development dependencies: black, ruff, mypy

Exit Codes for Automation

All validation commands use standard exit codes:

  • 0 = Success (validation passed)
  • 1 = Failure (validation failed or error occurred)

Use --quiet flag to suppress success messages for clean automation.

Contributing

This project dogfoods the MAID methodology. All changes must:

  1. Have a manifest in manifests/
  2. Have behavioral tests in tests/
  3. Pass structural validation
  4. Pass behavioral tests

See CLAUDE.md for development guidelines.

License

This project implements the MAID methodology for research and development purposes.
