
MAID Runner

A tool-agnostic validation framework for the Manifest-driven AI Development (MAID) methodology. MAID Runner validates that code artifacts align with their declarative manifests, ensuring architectural integrity in AI-assisted development.

Architecture Philosophy

MAID Runner is a validation-only tool. It does NOT create files, generate code, or automate development. Instead, it validates that manifests, tests, and implementations comply with the MAID methodology.

┌──────────────────────────────────────┐
│   External Tools (Your Choice)       │
│   - Claude Code / Aider / Cursor     │
│   - Custom AI agents                 │
│   - Manual (human developers)        │
│                                      │
│   Responsibilities:                  │
│   ✓ Create manifests                 │
│   ✓ Generate behavioral tests        │
│   ✓ Implement code                   │
│   ✓ Orchestrate workflow             │
└──────────────────────────────────────┘
              │
              │ Creates files
              ▼
┌──────────────────────────────────────┐
│   MAID Runner (Validation Only)      │
│                                      │
│   Responsibilities:                  │
│   ✓ Validate manifest schema         │
│   ✓ Validate behavioral tests        │
│   ✓ Validate implementation          │
│   ✓ Validate type hints              │
│   ✓ Validate manifest chain          │
│   ✓ Track file compliance            │
│                                      │
│   ✗ No file creation                 │
│   ✗ No code generation               │
│   ✗ No tool lock-in (tool-agnostic)  │
└──────────────────────────────────────┘

Installation

Local Development (Editable Install)

For local development, install the package in editable mode:

# Using pip
pip install -e .

# Using uv (recommended)
uv pip install -e .

After installation, the maid command will be available:

# Check version
maid --version

# Get help
maid --help

Python API

You can also use MAID Runner as a Python library:

from maid_runner import (
    validate_schema,
    validate_with_ast,
    discover_related_manifests,
    generate_snapshot,
    AlignmentError,
    __version__,
)

# Validate a manifest schema
validate_schema(manifest_data, schema_path)

# Validate implementation against manifest
validate_with_ast(manifest_data, file_path, use_manifest_chain=True)

# Generate snapshot manifest
generate_snapshot("path/to/file.py", output_dir="manifests")

Core CLI Tools (For External Tools)

1. Manifest Validation

# Validate manifest structure and implementation
maid validate <manifest_path> [options]

# Options:
#   --validation-mode {implementation,behavioral}  # Default: implementation
#   --use-manifest-chain                          # Merge related manifests
#   --quiet, -q                                    # Suppress success messages

# Exit Codes:
#   0 = Validation passed
#   1 = Validation failed

Examples:

# Validate implementation matches manifest
$ maid validate manifests/task-013.manifest.json
✓ Validation PASSED

# Validate behavioral tests USE artifacts
$ maid validate manifests/task-013.manifest.json --validation-mode behavioral
✓ Behavioral test validation PASSED

# Full validation with manifest chain (recommended)
$ maid validate manifests/task-013.manifest.json --use-manifest-chain
✓ Validation PASSED

# Quiet mode for automation
$ maid validate manifests/task-013.manifest.json --quiet
# Exit code 0 = success, no output

File Tracking Analysis:

When using --use-manifest-chain in implementation mode, MAID Runner performs automatic file tracking analysis to detect files not properly tracked in manifests:

$ maid validate manifests/task-013.manifest.json --use-manifest-chain

✓ Validation PASSED

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
FILE TRACKING ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔴 UNDECLARED FILES (3 files)
  Files exist in codebase but are not tracked in any manifest

  - scripts/helper.py
     Not found in any manifest

  Action: Add these files to creatableFiles or editableFiles

🟡 REGISTERED FILES (5 files)
  Files are tracked but not fully MAID-compliant

  - utils/config.py
    ⚠️  In editableFiles but no expectedArtifacts
    Manifests: task-010

  Action: Add expectedArtifacts and validationCommand

✓ TRACKED (42 files)
  All other source files are fully MAID-compliant

Summary: 3 UNDECLARED, 5 REGISTERED, 42 TRACKED

File Status Levels:

  • 🔴 UNDECLARED: Files not in any manifest (high priority) - no audit trail
  • 🟡 REGISTERED: Files tracked but incomplete compliance (medium priority) - missing artifacts/tests
  • ✓ TRACKED: Files with full MAID compliance - properly documented and tested

This progressive compliance system helps teams migrate existing codebases to MAID while clearly identifying accountability gaps.
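The three-level classification can be sketched in a few lines of Python. This is a simplified illustration of the idea, not MAID Runner's actual implementation; the function name is hypothetical:

```python
def classify_file(path: str, manifests: list[dict]) -> str:
    """Classify a source file by MAID compliance level (simplified sketch).

    UNDECLARED: not referenced by any manifest.
    REGISTERED: referenced, but no manifest declares expectedArtifacts for it.
    TRACKED:    referenced and covered by expectedArtifacts.
    """
    referencing = [
        m for m in manifests
        if path in m.get("creatableFiles", []) + m.get("editableFiles", [])
    ]
    if not referencing:
        return "UNDECLARED"
    for m in referencing:
        if m.get("expectedArtifacts", {}).get("file") == path:
            return "TRACKED"
    return "REGISTERED"

manifests = [
    {"creatableFiles": ["validators/email_validator.py"],
     "expectedArtifacts": {"file": "validators/email_validator.py"}},
    {"editableFiles": ["utils/config.py"]},  # tracked, but no expectedArtifacts
]

print(classify_file("validators/email_validator.py", manifests))  # TRACKED
print(classify_file("utils/config.py", manifests))                # REGISTERED
print(classify_file("scripts/helper.py", manifests))              # UNDECLARED
```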

2. Snapshot Generation

# Generate snapshot manifest from existing code
maid snapshot <file_path> [options]

# Options:
#   --output-dir DIR    # Default: manifests/
#   --force            # Overwrite without prompting

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot maid_runner/validators/manifest_validator.py --force
Snapshot manifest generated successfully: manifests/task-009-snapshot-manifest_validator.manifest.json

3. System-Wide Snapshot

# Generate system-wide manifest aggregating all active manifests
maid snapshot-system [options]

# Options:
#   --output FILE           # Default: system.manifest.json
#   --manifest-dir DIR      # Default: manifests/
#   --quiet, -q            # Suppress informational output

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot-system --output system.manifest.json
Discovered 48 active manifests (excluding 12 superseded)
Aggregated 16 files with artifacts
Deduplicated 54 validation commands

System manifest generated: system.manifest.json

Use Cases:

  • Knowledge Graph Construction: Aggregate all artifacts for system-wide analysis
  • Documentation Generation: Create comprehensive artifact catalog
  • Migration Support: Generate baseline snapshot when adopting MAID for existing projects
  • System Validation: Validate that generated system manifest is schema-compliant
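The aggregation that snapshot-system reports can be approximated as follows. This is an illustrative sketch, not the tool's actual code; it collects expectedArtifacts per file and deduplicates validationCommand entries with a simple seen-set:

```python
import json
import tempfile
from pathlib import Path

def aggregate_manifests(manifest_dir: str) -> dict:
    """Build a system-wide view from individual task manifests (sketch)."""
    artifacts: dict[str, list] = {}
    commands: list[list[str]] = []
    seen: set[tuple[str, ...]] = set()

    for path in sorted(Path(manifest_dir).glob("*.manifest.json")):
        manifest = json.loads(path.read_text())
        expected = manifest.get("expectedArtifacts")
        if expected:
            artifacts.setdefault(expected["file"], []).extend(
                expected.get("contains", [])
            )
        cmd = manifest.get("validationCommand")
        if cmd and tuple(cmd) not in seen:
            seen.add(tuple(cmd))
            commands.append(cmd)

    return {"files": artifacts, "validationCommands": commands}

# Demo with two manifests sharing one validation command.
with tempfile.TemporaryDirectory() as d:
    Path(d, "task-001.manifest.json").write_text(json.dumps({
        "expectedArtifacts": {"file": "a.py",
                              "contains": [{"type": "class", "name": "A"}]},
        "validationCommand": ["pytest", "tests/test_a.py"],
    }))
    Path(d, "task-002.manifest.json").write_text(json.dumps({
        "validationCommand": ["pytest", "tests/test_a.py"],  # duplicate
    }))
    system = aggregate_manifests(d)

print(len(system["validationCommands"]))  # 1 after deduplication
```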

4. List Manifests by File

# List all manifests that reference a file
maid manifests <file_path> [options]

# Options:
#   --manifest-dir DIR  # Default: manifests/
#   --quiet, -q         # Show minimal output (just manifest names)

# Exit Codes:
#   0 = Success (found or not found)

Examples:

# Find which manifests reference a file
$ maid manifests maid_runner/cli/main.py

Manifests referencing: maid_runner/cli/main.py
Total: 2 manifest(s)

================================================================================

✏️  EDITED BY (2 manifest(s)):
  - task-021-maid-test-command.manifest.json
  - task-029-list-manifests-command.manifest.json

================================================================================

# Quiet mode for scripting
$ maid manifests maid_runner/validators/manifest_validator.py --quiet
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json

Use Cases:

  • Dependency Analysis: Find which tasks touched a file
  • Impact Assessment: Understand a file's role in the project (created vs edited vs read)
  • Manifest Discovery: Quickly locate relevant manifests when investigating code
  • Audit Trail: See the complete history of changes to a file through manifests
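For scripting, the quiet output shown above is easy to parse. A minimal sketch, assuming the `relation: filename` line format stays stable:

```python
def parse_manifest_listing(output: str) -> dict[str, list[str]]:
    """Group `maid manifests --quiet` output lines by relation (sketch).

    Each line is assumed to have the form 'created|edited|read: <manifest>'.
    """
    groups: dict[str, list[str]] = {"created": [], "edited": [], "read": []}
    for line in output.strip().splitlines():
        relation, _, name = line.partition(": ")
        if relation in groups:
            groups[relation].append(name)
    return groups

sample = """\
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json
"""
groups = parse_manifest_listing(sample)
print(len(groups["edited"]))  # 2
```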

Optional Human Helper Tools

For manual/interactive use, MAID Runner includes convenience wrappers in examples/maid_runner.py:

# Interactive manifest creation (optional helper)
python examples/maid_runner.py plan --goal "Add user authentication"

# Interactive validation loop (optional helper)
python examples/maid_runner.py run manifests/task-013.manifest.json

These are NOT required for automation. External AI tools should use maid validate directly.

Integration with AI Tools

Python Integration Example

import subprocess
import json
from pathlib import Path

def validate_manifest(manifest_path: str) -> dict:
    """Use MAID Runner to validate manifest."""
    result = subprocess.run(
        ["maid", "validate", manifest_path,
         "--use-manifest-chain", "--quiet"],
        capture_output=True,
        text=True
    )

    return {
        "success": result.returncode == 0,
        "errors": result.stderr if result.returncode != 0 else None
    }

# AI tool creates manifest
manifest_path = Path("manifests/task-013-email-validation.manifest.json")
manifest_path.write_text(json.dumps({
    "goal": "Add email validation",
    "taskType": "create",
    "creatableFiles": ["validators/email_validator.py"],
    "readonlyFiles": ["tests/test_email_validation.py"],
    "expectedArtifacts": {
        "file": "validators/email_validator.py",
        "contains": [
            {"type": "class", "name": "EmailValidator"},
            {"type": "function", "name": "validate", "class": "EmailValidator"}
        ]
    },
    "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
    # Enhanced format also supported:
    # "validationCommands": [
    #   ["pytest", "tests/test_email_validation.py", "-v"],
    #   ["mypy", "validators/email_validator.py"]
    # ]
}, indent=2))

# AI tool generates tests...
# AI tool implements code...

# Validate with MAID Runner
result = validate_manifest(str(manifest_path))
if result["success"]:
    print("✓ Validation passed - ready to commit")
else:
    print(f"✗ Validation failed: {result['errors']}")

Shell Integration Example

#!/bin/bash
# AI tool workflow script

MANIFEST="manifests/task-013-email-validation.manifest.json"

# AI creates manifest (not MAID Runner's job)
cat > "$MANIFEST" <<EOF
{
  "goal": "Add email validation",
  "taskType": "create",
  "creatableFiles": ["validators/email_validator.py"],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {...},
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
EOF

# AI generates tests...
# AI implements code...

# Validate with MAID Runner
if maid validate "$MANIFEST" --use-manifest-chain --quiet; then
    echo "✓ Validation passed"
    exit 0
else
    echo "✗ Validation failed"
    exit 1
fi

What MAID Runner Validates

Validation Type    What It Checks                    Command
---------------    --------------------------------  ------------------------------------------
Schema             Manifest JSON structure           maid validate
Behavioral Tests   Tests USE declared artifacts      maid validate --validation-mode behavioral
Implementation     Code DEFINES declared artifacts   maid validate (default)
Type Hints         Type annotations match manifest   maid validate (automatic)
Manifest Chain     Historical consistency            maid validate --use-manifest-chain
File References    Which manifests touch a file      maid manifests <file_path>

Development Setup

This project uses uv for dependency management.

# Install dependencies
uv sync

# Install development dependencies
uv sync --group dev

# Install package in editable mode (after initial setup)
uv pip install -e .

Manifest Structure

Task manifests define isolated units of work with explicit inputs, outputs, and validation criteria:

{
  "goal": "Implement email validation",
  "taskType": "create",
  "supersedes": [],
  "creatableFiles": ["validators/email_validator.py"],
  "editableFiles": [],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {
    "file": "validators/email_validator.py",
    "contains": [
      {
        "type": "class",
        "name": "EmailValidator"
      },
      {
        "type": "function",
        "name": "validate",
        "class": "EmailValidator",
        "parameters": [
          {"name": "email", "type": "str"}
        ],
        "returns": "bool"
      }
    ]
  },
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
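A quick structural pre-check of such a manifest is easy to hand-roll before passing it to maid validate. The sketch below checks required top-level keys only; the required set shown is an assumption for illustration — the authoritative rules live in MAID Runner's manifest.schema.json:

```python
import json

# Assumed required keys (illustrative; see manifest.schema.json for the real set).
REQUIRED_KEYS = ["goal", "taskType", "expectedArtifacts", "validationCommand"]

def missing_keys(manifest: dict) -> list[str]:
    """Return required top-level keys absent from the manifest (sketch)."""
    return [key for key in REQUIRED_KEYS if key not in manifest]

manifest = json.loads("""{
  "goal": "Implement email validation",
  "taskType": "create",
  "expectedArtifacts": {"file": "validators/email_validator.py", "contains": []}
}""")

print(missing_keys(manifest))  # ['validationCommand']
```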

Validation Modes

Strict Mode (creatableFiles):

  • Implementation must EXACTLY match expectedArtifacts
  • No extra public artifacts allowed
  • Perfect for new files

Permissive Mode (editableFiles):

  • Implementation must CONTAIN expectedArtifacts
  • Extra public artifacts allowed
  • Perfect for editing existing files
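In set terms, the two modes differ only in the comparison operator: strict mode demands equality, permissive mode only containment. A minimal sketch (function and variable names hypothetical):

```python
def check_artifacts(declared: set[str], found: set[str], strict: bool) -> bool:
    """Strict mode (creatableFiles): found public artifacts must equal
    the declared set. Permissive mode (editableFiles): declared artifacts
    need only be a subset of what was found. (Simplified sketch.)
    """
    if strict:
        return found == declared
    return declared <= found

declared = {"EmailValidator", "EmailValidator.validate"}

# A file with an extra public helper fails strict but passes permissive mode.
found = declared | {"normalize_email"}
print(check_artifacts(declared, found, strict=True))   # False
print(check_artifacts(declared, found, strict=False))  # True
```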

Supported Artifact Types

  • Classes: {"type": "class", "name": "ClassName", "bases": ["BaseClass"]}
  • Functions: {"type": "function", "name": "function_name", "parameters": [...]}
  • Methods: {"type": "function", "name": "method_name", "class": "ParentClass", "parameters": [...]}
  • Attributes: {"type": "attribute", "name": "attr_name", "class": "ParentClass"}
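Implementation validation of this kind is AST-based, and the idea can be sketched with the standard-library ast module. This is a toy version for illustration, not MAID Runner's actual validator:

```python
import ast

def extract_artifacts(source: str) -> list[dict]:
    """List top-level classes, functions, and methods in the style of a
    manifest's `contains` entries (simplified sketch of AST extraction)."""
    artifacts = []
    for node in ast.parse(source).body:
        if isinstance(node, ast.ClassDef):
            artifacts.append({"type": "class", "name": node.name})
            for item in node.body:
                if isinstance(item, ast.FunctionDef):
                    artifacts.append(
                        {"type": "function", "name": item.name, "class": node.name}
                    )
        elif isinstance(node, ast.FunctionDef):
            artifacts.append({"type": "function", "name": node.name})
    return artifacts

source = """
class EmailValidator:
    def validate(self, email: str) -> bool:
        return "@" in email
"""
for artifact in extract_artifacts(source):
    print(artifact)
```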

MAID Methodology

This project implements the MAID (Manifest-driven AI Development) methodology, which promotes:

  • Explicitness over Implicitness: All AI agent context is explicitly defined
  • Extreme Isolation: Tasks are isolated from the wider codebase during creation
  • Test-Driven Validation: The manifest is the primary contract; tests support implementation
  • Directed Dependency: One-way dependency flow following Clean Architecture
  • Verifiable Chronology: Current state results from sequential manifest application

For detailed methodology documentation, see docs/maid_specs.md.

Development Workflow (Manual or AI-Assisted)

Phase 1: Goal Definition

Define the high-level feature or bug fix.

Phase 2: Planning Loop

  1. Create manifest (JSON file defining the task)
  2. Create behavioral tests (tests that USE the expected artifacts)
  3. Validate structure: maid validate <manifest> --validation-mode behavioral
  4. Iterate until structural validation passes
  5. Commit manifest and tests

Phase 3: Implementation Loop

  1. Implement code (create/modify files per manifest)
  2. Validate implementation: maid validate <manifest> --use-manifest-chain
  3. Run tests: Execute validationCommand from manifest
  4. Iterate until all tests pass
  5. Commit implementation

Phase 4: Integration

Verify complete chain: All manifests validate successfully.
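Verifying the complete chain can be automated by running maid validate over every manifest. A sketch using subprocess, assuming maid is on PATH and relying only on the documented exit codes (0 = passed, 1 = failed):

```python
import subprocess
import tempfile
from pathlib import Path

def validate_all(manifest_dir: str = "manifests") -> list[str]:
    """Run `maid validate` over every manifest; return failing names (sketch)."""
    failures = []
    for manifest in sorted(Path(manifest_dir).glob("*.manifest.json")):
        result = subprocess.run(
            ["maid", "validate", str(manifest), "--use-manifest-chain", "--quiet"]
        )
        if result.returncode != 0:
            failures.append(manifest.name)
    return failures

# With no manifests present, there is nothing to validate and no failures.
with tempfile.TemporaryDirectory() as empty:
    print(validate_all(empty))  # []
```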

Testing

# Run all tests
uv run python -m pytest tests/ -v

# Run validation tests
uv run python -m pytest tests/test_manifest_to_implementation_alignment.py -v

# Run specific task tests
uv run python -m pytest tests/test_task_011_implementation_loop_controller.py -v

Code Quality

# Format code
make format  # or: uv run black .

# Lint code
make lint    # or: uv run ruff check .

# Type check
make type-check

Project Structure

maid-runner/
├── docs/                          # Documentation and specifications
├── manifests/                     # Task manifest files (chronological)
├── tests/                         # Test suite
├── maid_runner/                   # Main package
│   ├── __init__.py                # Package exports
│   ├── __version__.py             # Version information
│   ├── cli/                       # CLI modules
│   │   ├── main.py                # Main CLI entry point (maid command)
│   │   ├── validate.py            # Validate subcommand
│   │   ├── snapshot.py            # Snapshot subcommand
│   │   ├── list_manifests.py      # Manifests subcommand
│   │   └── test.py                # Test subcommand
│   └── validators/                # Core validation logic
│       ├── manifest_validator.py  # Main validation engine
│       ├── type_validator.py      # Type hint validation
│       ├── file_tracker.py        # File tracking analysis
│       └── schemas/               # JSON schemas
├── examples/                      # Example scripts
│   └── maid_runner.py             # Optional helpers (plan/run)
└── .claude/                       # Claude Code configuration

Core Components

  • Manifest Validator (validators/manifest_validator.py) - Schema and AST-based validation engine
  • Type Validator (validators/type_validator.py) - Type hint validation
  • Manifest Schema (validators/schemas/manifest.schema.json) - JSON schema defining manifest structure
  • Task Manifests (manifests/) - Chronologically ordered task definitions

Requirements

  • Python 3.12+
  • Dependencies managed via uv
  • Core dependencies: jsonschema, pytest
  • Development dependencies: black, ruff, mypy

Exit Codes for Automation

All validation commands use standard exit codes:

  • 0 = Success (validation passed)
  • 1 = Failure (validation failed or error occurred)

Use --quiet flag to suppress success messages for clean automation.

Contributing

This project dogfoods the MAID methodology. All changes must:

  1. Have a manifest in manifests/
  2. Have behavioral tests in tests/
  3. Pass structural validation
  4. Pass behavioral tests

See CLAUDE.md for development guidelines.

License

This project implements the MAID methodology for research and development purposes.

