Drift

AI agent conversation drift analyzer - identifies gaps between what AI agents did and what users wanted.

Quality assurance for AI-augmented codebases - validates that your project follows best practices for effective AI agent collaboration.

What It Does

Drift checks that your AI-augmented development environment is set up for productive collaboration. It validates both the quality of AI interactions and the health of your project configuration.

Think of it as: A comprehensive testing and linting tool for AI-first development - catching issues in conversation patterns, documentation quality, dependency management, and project structure.

Two-Level Validation

1. Conversation Quality Analysis

Analyzes AI agent conversation logs to detect patterns where work diverged from user intent:

  • Incomplete work and premature task abandonment
  • Missed delegation opportunities to specialized agents
  • Ignored skills, commands, or workflow automation
  • Deviation from documented project guidelines

2. Project Structure Validation

Programmatically validates your AI collaboration setup:

  • Dependency health: Detects redundant transitive dependencies in commands, skills, and agents
  • Link integrity: Validates all file references and resource links in documentation
  • Completeness checks: Ensures skills, commands, and agents have required structure
  • Consistency validation: Detects contradictions between commands and project guidelines
  • Required files: Verifies essential configuration files exist (e.g., CLAUDE.md)

Key Features

  • Multi-layered analysis: Combines LLM-based conversation analysis with fast programmatic validation
  • Project-aware: Automatically discovers and validates commands, skills, agents, and MCP servers
  • Flexible execution: Run all checks, or use --no-llm for fast programmatic-only validation
  • Multi-provider: Anthropic API and AWS Bedrock with Claude models (Sonnet, Haiku)
  • Multi-agent support: Currently supports Claude Code
  • Rich output: Markdown with colors (for terminals) or structured JSON
  • Configurable rules: Extensible YAML-based rule system for custom validations

Installation

# Install from PyPI
pip install ai-drift

# Or with uv
uv pip install ai-drift

For development:

# Clone repository
git clone https://github.com/jarosser06/drift.git
cd drift
uv pip install -e ".[dev]"

Quick Start

Provider Setup

Option 1: Anthropic API (Recommended)

export ANTHROPIC_API_KEY=your-api-key

Option 2: AWS Bedrock

aws configure

Usage

# Run full analysis on latest conversation
drift

# Fast programmatic-only validation (no LLM calls)
drift --no-llm

# Analyze last 7 days with JSON output
drift --days 7 --format json

# Check specific rules only
drift --rules command_broken_links,skill_duplicate_dependencies

# Use different model for analysis
drift --model sonnet

# Disable caching for fresh analysis
drift --no-cache

# Use custom cache directory
drift --cache-dir /tmp/my-cache

Response Caching

Drift automatically caches LLM responses to reduce API costs and speed up re-analysis:

  • Smart invalidation: Cache automatically invalidates when file content changes
  • Content-based: Uses SHA-256 hashing to detect changes
  • TTL support: Default 24-hour cache expiration (configurable)
  • Per-file caching: Each file + rule combination cached separately

Configure in .drift.yaml:

cache_enabled: true          # Enable/disable caching (default: true)
cache_dir: .drift/cache      # Cache directory (default: .drift/cache)
cache_ttl: 86400             # TTL in seconds (default: 86400 = 24 hours)
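As a sketch of how the content-based invalidation described above can work (the key layout and function names here are illustrative, not Drift's actual internals):

```python
import hashlib
import time
from pathlib import Path

def cache_key(file_path: str, rule_name: str) -> str:
    """Build a cache key from the rule name plus a SHA-256 digest of the file.

    If the file content changes, the digest changes, so the stale entry is
    simply never matched again -- the "smart invalidation" described above.
    """
    digest = hashlib.sha256(Path(file_path).read_bytes()).hexdigest()
    return f"{rule_name}:{digest}"

def is_fresh(entry: dict, ttl: int = 86400) -> bool:
    """Check a cached entry's age against the TTL (default 24 hours)."""
    return (time.time() - entry["created_at"]) < ttl
```

Because the digest covers the file content, each file + rule combination gets its own key, and no explicit invalidation step is needed when files are edited.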

CLI overrides:

  • --no-cache: Disable caching for this run
  • --cache-dir <path>: Use custom cache directory

Example Output

# Drift Analysis Results

## Summary
- Total conversations: 3
- Total rule violations: 5
- Rules checked: 12
- Rules passed: 7
- Rules warned: 2
- Rules failed: 3

## Rules Passed ✓

- **documentation_gap**: No issues found
- **command_broken_links**: All file references valid
- **skill_duplicate_dependencies**: No redundant dependencies
- **claude_md_missing**: CLAUDE.md exists

## Failures

### command_duplicate_dependencies

*Commands should only declare direct dependencies. Transitive dependencies are automatically included.*

**Bundle:** create-pr command
**Files:** .claude/commands/create-pr.md
**Issue:** Command declares both `pr-writing` skill and `github-operations` skill, but `pr-writing` already depends on `github-operations`
**Expected:** Remove `github-operations` from command dependencies

### skill_completeness

*Incomplete skills create confusion and slow development. Skills must be self-contained with clear examples.*

**Bundle:** testing skill
**Files:** .claude/skills/testing/SKILL.md
**Issue:** Skill references `./examples/test_example.py` which doesn't exist
**Expected:** Include referenced examples or remove broken references

## Warnings

### incomplete_work

*AI stopping before completing full scope wastes user time and breaks workflow momentum.*

**Session:** abc-123
**Agent Tool:** claude-code
**Turn:** 3
**Observed:** Implemented login form without validation
**Expected:** Complete login system with validation and error handling
**Context:** User had to explicitly request validation in next turn

Configuration

A default config is auto-generated at ~/.config/drift/config.yaml. Override it per project with .drift.yaml.

Provider Configuration

Configure LLM providers in .drift.yaml:

Anthropic API:

providers:
  anthropic:
    provider: anthropic
    params:
      api_key_env: ANTHROPIC_API_KEY  # optional, defaults to ANTHROPIC_API_KEY

models:
  claude-sonnet:
    provider: anthropic
    model_id: claude-sonnet-4-5-20250929
    params:
      max_tokens: 4096
      temperature: 0.0
  claude-haiku:
    provider: anthropic
    model_id: claude-haiku-4-5-20251001
    params:
      max_tokens: 4096
      temperature: 0.0

AWS Bedrock:

providers:
  bedrock:
    provider: bedrock
    params:
      region: us-east-1

models:
  sonnet:
    provider: bedrock
    model_id: us.anthropic.claude-3-5-sonnet-20240620-v1:0
    params:
      max_tokens: 4096
      temperature: 0.0

Validation Categories

Conversation Analysis Rules (LLM-based)

Analyze AI agent conversation patterns:

  • incomplete_work - AI stopped before completing full scope
  • agent_delegation_miss - AI did work manually instead of using agents
  • skill_ignored - AI didn't use available skills
  • workflow_bypass - User manually executed steps that commands automate
  • prescriptive_deviation - AI ignored explicit workflow documentation
  • no_agents_configured - Project lacks agent definitions

Project Validation Rules (Programmatic)

Fast validation without LLM calls:

  • command_duplicate_dependencies - Redundant transitive skill dependencies in commands
  • skill_duplicate_dependencies - Redundant transitive dependencies in skills
  • agent_duplicate_dependencies - Redundant transitive dependencies in agents
  • command_broken_links - Broken file references in command documentation
  • skill_broken_links - Broken file references in skill documentation
  • agent_broken_links - Broken file references in agent documentation
  • claude_md_missing - Missing CLAUDE.md configuration file
  • skill_completeness - Skills missing essential structure or examples
  • agent_completeness - Agents missing scope definition or dependencies
  • command_completeness - Commands missing execution steps or prerequisites
  • command_consistency - Commands contradicting project guidelines
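To illustrate how the duplicate-dependency rules can work in principle, here is a minimal sketch; the graph representation and function names are assumptions for illustration, not Drift's data model:

```python
def transitive_deps(name: str, graph: dict[str, list[str]]) -> set[str]:
    """All dependencies reachable from `name`, excluding `name` itself."""
    seen: set[str] = set()
    stack = list(graph.get(name, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

def redundant_declarations(direct: list[str], graph: dict[str, list[str]]) -> set[str]:
    """Direct dependencies already reachable through another direct dependency."""
    redundant: set[str] = set()
    for dep in direct:
        others = [d for d in direct if d != dep]
        if any(dep in transitive_deps(o, graph) for o in others):
            redundant.add(dep)
    return redundant
```

With a graph like {"pr-writing": ["github-operations"]}, a command declaring both skills would have github-operations flagged as redundant, matching the command_duplicate_dependencies failure shown in the example output.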

Custom Rule Example

Add custom rules to .drift.yaml:

rule_definitions:
  command_broken_links:
    description: "Commands contain broken file references"
    scope: project_level
    document_bundle:
      bundle_type: command
      file_patterns:
        - .claude/commands/*.md
      bundle_strategy: individual
    phases:
      - name: check_links
        type: markdown_link
        description: "Validate all markdown links and file paths"
        failure_message: "Found broken links"
        expected_behavior: "All file references should be valid"
        params:
          check_local_files: true
          check_external_urls: false

CLI Options

  • --format (-f): Output format (markdown or json)
  • --scope (-s): Analysis scope (conversation, project, or all)
  • --agent-tool (-a): Specific agent tool to analyze (e.g., claude-code)
  • --rules (-r): Comma-separated list of specific rules to check
  • --latest: Analyze only the latest conversation
  • --days (-d): Analyze conversations from last N days
  • --all: Analyze all conversations
  • --model (-m): Override model for analysis (sonnet, haiku)
  • --no-llm: Skip LLM-based rules, run only programmatic validation (fast)
  • --no-parallel: Disable parallel execution of validation rules
  • --project (-p): Project path (defaults to current directory)
  • --verbose (-v): Increase verbosity (-v, -vv, -vvv)

Development

# Run tests (requires 90%+ coverage)
./test.sh

# Run linters
./lint.sh

# Auto-fix formatting
./lint.sh --fix

Use Cases

During Development:

  • Run drift --no-llm before commits to catch broken links and dependency issues
  • Validate skill/command documentation is complete and consistent

In CI/CD:

  • Enforce documentation quality standards
  • Prevent broken resource references from merging
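For example, a CI job could run the fast programmatic checks on every pull request. This GitHub Actions fragment is illustrative only, and assumes drift exits non-zero when rules fail:

```
name: drift-checks
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install ai-drift
      # Programmatic-only validation: no LLM calls, no API keys needed in CI
      - run: drift --no-llm
```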

Periodic Reviews:

  • Analyze conversation patterns to identify workflow improvements
  • Find opportunities to better leverage agents, skills, and commands
  • Ensure project customizations are being utilized

Project Setup:

  • Validate new AI agent configurations
  • Ensure documentation follows best practices
  • Catch structural issues before they impact productivity

License

MIT
