
MAID Runner

PyPI version Python Version License: MIT

A tool-agnostic validation framework for the Manifest-driven AI Development (MAID) methodology. MAID Runner validates that code artifacts align with their declarative manifests, ensuring architectural integrity in AI-assisted development.

Architecture Philosophy

MAID Runner is a validation-only tool. It does NOT create files, generate code, or automate development. Instead, it validates that manifests, tests, and implementations comply with MAID methodology.

┌──────────────────────────────────────┐
│   External Tools (Your Choice)       │
│   - Claude Code / Aider / Cursor     │
│   - Custom AI agents                 │
│   - Manual (human developers)        │
│                                      │
│   Responsibilities:                  │
│   ✓ Create manifests                 │
│   ✓ Generate behavioral tests        │
│   ✓ Implement code                   │
│   ✓ Orchestrate workflow             │
└──────────────────────────────────────┘
              │
              │ Creates files
              ▼
┌──────────────────────────────────────┐
│   MAID Runner (Validation Only)      │
│                                      │
│   Responsibilities:                  │
│   ✓ Validate manifest schema         │
│   ✓ Validate behavioral tests        │
│   ✓ Validate implementation          │
│   ✓ Validate type hints              │
│   ✓ Validate manifest chain          │
│   ✓ Track file compliance            │
│                                      │
│   ✗ No file creation                 │
│   ✗ No code generation               │
│   ✗ No workflow orchestration        │
└──────────────────────────────────────┘

Installation

From PyPI (Recommended)

Install MAID Runner from PyPI using pip or uv:

# Using pip
pip install maid-runner

# Using uv (recommended)
uv pip install maid-runner

Local Development (Editable Install)

For local development, clone the repository and install in editable mode:

# Using pip
pip install -e .

# Using uv (recommended)
uv pip install -e .

After installation, the maid command will be available:

# Check version
maid --version

# Get help
maid --help

Python API

You can also use MAID Runner as a Python library:

from maid_runner import (
    validate_schema,
    validate_with_ast,
    discover_related_manifests,
    generate_snapshot,
    AlignmentError,
    __version__,
)

# Validate a manifest schema
validate_schema(manifest_data, schema_path)

# Validate implementation against manifest
validate_with_ast(manifest_data, file_path, use_manifest_chain=True)

# Generate snapshot manifest
generate_snapshot("path/to/file.py", output_dir="manifests")
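The validators above raise `AlignmentError` on failure (per the exports shown). A small wrapper can convert that into a pass/fail result; this sketch takes the validator as a callable so the pattern works with either `validate_schema` or `validate_with_ast`:

```python
def run_check(validator, *args, **kwargs):
    """Call a MAID Runner validator; return (ok, error_message) instead of raising."""
    try:
        validator(*args, **kwargs)
        return True, None
    except Exception as exc:  # AlignmentError on validation failure
        return False, str(exc)

# Usage, assuming maid_runner is installed:
# from maid_runner import validate_with_ast
# ok, error = run_check(validate_with_ast, manifest_data, "validators/email_validator.py",
#                       use_manifest_chain=True)
```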

Core CLI Tools (For External Tools)

1. Manifest Validation

# Validate manifest structure and implementation
maid validate <manifest_path> [options]

# Options:
#   --validation-mode {implementation,behavioral}  # Default: implementation
#   --use-manifest-chain                          # Merge related manifests
#   --quiet, -q                                    # Suppress success messages

# Exit Codes:
#   0 = Validation passed
#   1 = Validation failed

Examples:

# Validate implementation matches manifest
$ maid validate manifests/task-013.manifest.json
✓ Validation PASSED

# Validate behavioral tests USE artifacts
$ maid validate manifests/task-013.manifest.json --validation-mode behavioral
✓ Behavioral test validation PASSED

# Full validation with manifest chain (recommended)
$ maid validate manifests/task-013.manifest.json --use-manifest-chain
✓ Validation PASSED

# Quiet mode for automation
$ maid validate manifests/task-013.manifest.json --quiet
# Exit code 0 = success, no output

File Tracking Analysis:

When using --use-manifest-chain in implementation mode, MAID Runner performs automatic file tracking analysis to detect files not properly tracked in manifests:

$ maid validate manifests/task-013.manifest.json --use-manifest-chain

✓ Validation PASSED

━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
FILE TRACKING ANALYSIS
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━

🔴 UNDECLARED FILES (3 files)
  Files exist in codebase but are not tracked in any manifest

  - scripts/helper.py
     Not found in any manifest

  Action: Add these files to creatableFiles or editableFiles

🟡 REGISTERED FILES (5 files)
  Files are tracked but not fully MAID-compliant

  - utils/config.py
    ⚠️  In editableFiles but no expectedArtifacts
    Manifests: task-010

  Action: Add expectedArtifacts and validationCommand

✓ TRACKED (42 files)
  All other source files are fully MAID-compliant

Summary: 3 UNDECLARED, 5 REGISTERED, 42 TRACKED

File Status Levels:

  • 🔴 UNDECLARED: Files not in any manifest (high priority) - no audit trail
  • 🟡 REGISTERED: Files tracked but incomplete compliance (medium priority) - missing artifacts/tests
  • ✓ TRACKED: Files with full MAID compliance - properly documented and tested

This progressive compliance system helps teams migrate existing codebases to MAID while clearly identifying accountability gaps.
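The three-level classification reduces to set differences over file paths. This is a simplified illustration of the idea, not MAID Runner's actual implementation:

```python
def classify_files(source_files, declared, compliant):
    """Classify files into the three compliance levels.

    source_files: all source files found in the codebase
    declared: files listed in any manifest's creatableFiles/editableFiles
    compliant: declared files that also have expectedArtifacts and tests
    """
    return {
        "UNDECLARED": sorted(set(source_files) - set(declared)),
        "REGISTERED": sorted(set(declared) - set(compliant)),
        "TRACKED": sorted(set(compliant) & set(source_files)),
    }

# Hypothetical example paths:
levels = classify_files(
    source_files={"scripts/helper.py", "utils/config.py", "maid_runner/cli/main.py"},
    declared={"utils/config.py", "maid_runner/cli/main.py"},
    compliant={"maid_runner/cli/main.py"},
)
```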

2. Snapshot Generation

# Generate snapshot manifest from existing code
maid snapshot <file_path> [options]

# Options:
#   --output-dir DIR    # Default: manifests/
#   --force            # Overwrite without prompting

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot maid_runner/validators/manifest_validator.py --force
Snapshot manifest generated successfully: manifests/task-009-snapshot-manifest_validator.manifest.json

3. System-Wide Snapshot

# Generate system-wide manifest aggregating all active manifests
maid snapshot-system [options]

# Options:
#   --output FILE           # Default: system.manifest.json
#   --manifest-dir DIR      # Default: manifests/
#   --quiet, -q            # Suppress informational output

# Exit Codes:
#   0 = Snapshot created
#   1 = Error

Example:

$ maid snapshot-system --output system.manifest.json
Discovered 48 active manifests (excluding 12 superseded)
Aggregated 16 files with artifacts
Deduplicated 54 validation commands

System manifest generated: system.manifest.json

Use Cases:

  • Knowledge Graph Construction: Aggregate all artifacts for system-wide analysis
  • Documentation Generation: Create comprehensive artifact catalog
  • Migration Support: Generate baseline snapshot when adopting MAID for existing projects
  • System Validation: Validate that generated system manifest is schema-compliant

4. List Manifests by File

# List all manifests that reference a file
maid manifests <file_path> [options]

# Options:
#   --manifest-dir DIR  # Default: manifests/
#   --quiet, -q         # Show minimal output (just manifest names)

# Exit Codes:
#   0 = Success (found or not found)

Examples:

# Find which manifests reference a file
$ maid manifests maid_runner/cli/main.py

Manifests referencing: maid_runner/cli/main.py
Total: 2 manifest(s)

================================================================================

✏️  EDITED BY (2 manifest(s)):
  - task-021-maid-test-command.manifest.json
  - task-029-list-manifests-command.manifest.json

================================================================================

# Quiet mode for scripting
$ maid manifests maid_runner/validators/manifest_validator.py --quiet
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json

Use Cases:

  • Dependency Analysis: Find which tasks touched a file
  • Impact Assessment: Understand a file's role in the project (created vs edited vs read)
  • Manifest Discovery: Quickly locate relevant manifests when investigating code
  • Audit Trail: See the complete history of changes to a file through manifests
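For scripting, the `--quiet` output shown above can be parsed with a few lines of stdlib Python. This sketch assumes the `relationship: filename` line format from the sample output:

```python
from collections import defaultdict

def parse_manifest_listing(output: str) -> dict:
    """Group `maid manifests --quiet` lines by relationship (created/edited/read)."""
    groups = defaultdict(list)
    for line in output.strip().splitlines():
        relation, _, name = line.partition(": ")
        groups[relation].append(name)
    return dict(groups)

sample = """\
created: task-001-add-schema-validation.manifest.json
edited: task-002-add-ast-alignment-validation.manifest.json
edited: task-003-behavioral-validation.manifest.json
read: task-008-snapshot-generator.manifest.json
"""
groups = parse_manifest_listing(sample)
```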

Optional Human Helper Tools

For manual/interactive use, MAID Runner includes convenience wrappers in examples/maid_runner.py:

# Interactive manifest creation (optional helper)
python examples/maid_runner.py plan --goal "Add user authentication"

# Interactive validation loop (optional helper)
python examples/maid_runner.py run manifests/task-013.manifest.json

These are NOT required for automation. External AI tools should use maid validate directly.

Integration with AI Tools

Python Integration Example

import subprocess
import json
from pathlib import Path

def validate_manifest(manifest_path: str) -> dict:
    """Use MAID Runner to validate manifest."""
    result = subprocess.run(
        ["maid", "validate", manifest_path,
         "--use-manifest-chain", "--quiet"],
        capture_output=True,
        text=True
    )

    return {
        "success": result.returncode == 0,
        "errors": result.stderr if result.returncode != 0 else None
    }

# AI tool creates manifest
manifest_path = Path("manifests/task-013-email-validation.manifest.json")
manifest_path.write_text(json.dumps({
    "goal": "Add email validation",
    "taskType": "create",
    "creatableFiles": ["validators/email_validator.py"],
    "readonlyFiles": ["tests/test_email_validation.py"],
    "expectedArtifacts": {
        "file": "validators/email_validator.py",
        "contains": [
            {"type": "class", "name": "EmailValidator"},
            {"type": "function", "name": "validate", "class": "EmailValidator"}
        ]
    },
    "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
    // Enhanced format also supported:
    // "validationCommands": [
    //   ["pytest", "tests/test_email_validation.py", "-v"],
    //   ["mypy", "validators/email_validator.py"]
    // ]
}, indent=2))

# AI tool generates tests...
# AI tool implements code...

# Validate with MAID Runner
result = validate_manifest(str(manifest_path))
if result["success"]:
    print("✓ Validation passed - ready to commit")
else:
    print(f"✗ Validation failed: {result['errors']}")

Shell Integration Example

#!/bin/bash
# AI tool workflow script

MANIFEST="manifests/task-013-email-validation.manifest.json"

# AI creates manifest (not MAID Runner's job)
cat > "$MANIFEST" <<EOF
{
  "goal": "Add email validation",
  "taskType": "create",
  "creatableFiles": ["validators/email_validator.py"],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {...},
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
EOF

# AI generates tests...
# AI implements code...

# Validate with MAID Runner
if maid validate "$MANIFEST" --use-manifest-chain --quiet; then
    echo "✓ Validation passed"
    exit 0
else
    echo "✗ Validation failed"
    exit 1
fi

What MAID Runner Validates

Validation Type    What It Checks                     Command
Schema             Manifest JSON structure            maid validate
Behavioral Tests   Tests USE declared artifacts       maid validate --validation-mode behavioral
Implementation     Code DEFINES declared artifacts    maid validate (default)
Type Hints         Type annotations match manifest    maid validate (automatic)
Manifest Chain     Historical consistency             maid validate --use-manifest-chain
File References    Which manifests touch a file       maid manifests <file_path>

Development Setup

This project uses uv for dependency management.

# Install dependencies
uv sync

# Install development dependencies
uv sync --group dev

# Install package in editable mode (after initial setup)
uv pip install -e .

Manifest Structure

Task manifests define isolated units of work with explicit inputs, outputs, and validation criteria:

{
  "goal": "Implement email validation",
  "taskType": "create",
  "supersedes": [],
  "creatableFiles": ["validators/email_validator.py"],
  "editableFiles": [],
  "readonlyFiles": ["tests/test_email_validation.py"],
  "expectedArtifacts": {
    "file": "validators/email_validator.py",
    "contains": [
      {
        "type": "class",
        "name": "EmailValidator"
      },
      {
        "type": "function",
        "name": "validate",
        "class": "EmailValidator",
        "parameters": [
          {"name": "email", "type": "str"}
        ],
        "returns": "bool"
      }
    ]
  },
  "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"]
}
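A minimal structural check of such a manifest needs only the stdlib. This is a simplified sketch of what the schema pass covers; the required-key list below is an assumed subset, and the authoritative definition lives in the JSON schema:

```python
import json

# Assumed subset of required keys; see validators/schemas/manifest.schema.json.
REQUIRED_KEYS = {"goal", "taskType", "expectedArtifacts", "validationCommand"}

def missing_keys(manifest_text: str) -> set:
    """Return required top-level keys absent from a manifest document."""
    return REQUIRED_KEYS - json.loads(manifest_text).keys()

manifest_text = json.dumps({
    "goal": "Implement email validation",
    "taskType": "create",
    "expectedArtifacts": {"file": "validators/email_validator.py", "contains": []},
    "validationCommand": ["pytest", "tests/test_email_validation.py", "-v"],
})
```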

Validation Modes

Strict Mode (creatableFiles):

  • Implementation must EXACTLY match expectedArtifacts
  • No extra public artifacts allowed
  • Perfect for new files

Permissive Mode (editableFiles):

  • Implementation must CONTAIN expectedArtifacts
  • Extra public artifacts allowed
  • Perfect for editing existing files
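The difference between the two modes reduces to set equality versus set containment over public artifact names. An illustrative sketch only, using a made-up `"Class.method"` naming convention:

```python
def check_artifacts(declared: set, found: set, strict: bool) -> bool:
    """Strict (creatableFiles): exact match. Permissive (editableFiles): containment."""
    return found == declared if strict else declared <= found

declared = {"EmailValidator", "EmailValidator.validate"}
found = declared | {"normalize_email"}  # implementation added a public helper
```

In strict mode the extra `normalize_email` fails validation; in permissive mode it is allowed.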

Supported Artifact Types

  • Classes: {"type": "class", "name": "ClassName", "bases": ["BaseClass"]}
  • Functions: {"type": "function", "name": "function_name", "parameters": [...]}
  • Methods: {"type": "function", "name": "method_name", "class": "ParentClass", "parameters": [...]}
  • Attributes: {"type": "attribute", "name": "attr_name", "class": "ParentClass"}

MAID Methodology

This project implements the MAID (Manifest-driven AI Development) methodology, which promotes:

  • Explicitness over Implicitness: All AI agent context is explicitly defined
  • Extreme Isolation: Tasks are isolated from the wider codebase during creation
  • Test-Driven Validation: The manifest is the primary contract; tests support implementation
  • Directed Dependency: One-way dependency flow following Clean Architecture
  • Verifiable Chronology: Current state results from sequential manifest application

For detailed methodology documentation, see docs/maid_specs.md.

Development Workflow (Manual or AI-Assisted)

Phase 1: Goal Definition

Define the high-level feature or bug fix.

Phase 2: Planning Loop

  1. Create manifest (JSON file defining the task)
  2. Create behavioral tests (tests that USE the expected artifacts)
  3. Validate structure: maid validate <manifest> --validation-mode behavioral
  4. Iterate until structural validation passes
  5. Commit manifest and tests

Phase 3: Implementation Loop

  1. Implement code (create/modify files per manifest)
  2. Validate implementation: maid validate <manifest> --use-manifest-chain
  3. Run tests: Execute validationCommand from manifest
  4. Iterate until all tests pass
  5. Commit implementation

Phase 4: Integration

Verify complete chain: All manifests validate successfully.
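Phase 4 can be automated by running `maid validate` over every manifest. A sketch with a pluggable runner, so the loop itself can be exercised without `maid` installed:

```python
import subprocess

def validate_all(manifests, run=None):
    """Return the manifests that fail; `run` maps a manifest path to an exit code."""
    if run is None:
        run = lambda path: subprocess.run(
            ["maid", "validate", str(path), "--use-manifest-chain", "--quiet"]
        ).returncode
    return [m for m in manifests if run(m) != 0]
```

An empty return value means the complete chain validated successfully.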

Testing

# Run all tests
uv run python -m pytest tests/ -v

# Run validation tests
uv run python -m pytest tests/test_manifest_to_implementation_alignment.py -v

# Run specific task tests
uv run python -m pytest tests/test_task_011_implementation_loop_controller.py -v

Code Quality

# Format code
make format  # or: uv run black .

# Lint code
make lint    # or: uv run ruff check .

# Type check
make type-check

Project Structure

maid-runner/
├── docs/                          # Documentation and specifications
├── manifests/                     # Task manifest files (chronological)
├── tests/                         # Test suite
├── maid_runner/                   # Main package
│   ├── __init__.py                # Package exports
│   ├── __version__.py             # Version information
│   ├── cli/                        # CLI modules
│   │   ├── main.py                # Main CLI entry point (maid command)
│   │   ├── validate.py            # Validate subcommand
│   │   ├── snapshot.py            # Snapshot subcommand
│   │   ├── list_manifests.py      # Manifests subcommand
│   │   └── test.py                # Test subcommand
│   └── validators/                # Core validation logic
│       ├── manifest_validator.py  # Main validation engine
│       ├── type_validator.py      # Type hint validation
│       ├── file_tracker.py        # File tracking analysis
│       └── schemas/               # JSON schemas
├── examples/                      # Example scripts
│   └── maid_runner.py             # Optional helpers (plan/run)
└── .claude/                       # Claude Code configuration

Core Components

  • Manifest Validator (validators/manifest_validator.py) - Schema and AST-based validation engine
  • Type Validator (validators/type_validator.py) - Type hint validation
  • Manifest Schema (validators/schemas/manifest.schema.json) - JSON schema defining manifest structure
  • Task Manifests (manifests/) - Chronologically ordered task definitions

Requirements

  • Python 3.12+
  • Dependencies managed via uv
  • Core dependencies: jsonschema, pytest
  • Development dependencies: black, ruff, mypy

Exit Codes for Automation

All validation commands use standard exit codes:

  • 0 = Success (validation passed)
  • 1 = Failure (validation failed or error occurred)

Use --quiet flag to suppress success messages for clean automation.

Contributing

This project dogfoods the MAID methodology. All changes must:

  1. Have a manifest in manifests/
  2. Have behavioral tests in tests/
  3. Pass structural validation
  4. Pass behavioral tests

See CLAUDE.md for development guidelines.

License

This project implements the MAID methodology for research and development purposes.
