
AI Workflow Benchmark (AWB)

Measure AI coding tool+workflow performance, not just model capability.



AWB Demo — install, validate, run, analyze
Install from PyPI, validate 100 tasks, run vanilla vs. custom, get capability profiles and improvement suggestions.

Why This Exists

SWE-bench tests models. AWB tests workflows. The same model running vanilla Claude Code vs. a purpose-built setup with a tuned CLAUDE.md, hooks, and structured agents produces meaningfully different results on real engineering tasks. No existing benchmark captures that gap — they all evaluate the model in isolation.

AWB benchmarks the full stack: tool + configuration + workflow + model, together, on 100 tasks drawn from real open-source repositories.

Quick Start

pip install awb

awb quickstart                              # verify your setup
awb run --runs 3 --parallel --adaptive      # full 100-task benchmark (parallel, smart re-runs)
awb run --category workflow --runs 1        # workflow tasks only (quick test)
awb gap results/runs/<run_dir>/             # analyze capability gaps

How It Works

Clone repo at pinned SHA
  → Run setup commands
  → Capture baseline lint/security counts
  → Execute tool with task prompt
  → Run test suite + partial credit rubric
  → Sigmoid-normalize 7 metrics
  → Produce weighted composite + capability profile

Each task starts from a fresh git clone at a pinned commit. Every tool gets the same prompt, the same timeout, and the same verification suite. Results are scored with sigmoid normalization so scores are never negative and never collapse at the boundary.
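
For orientation, the per-task loop looks roughly like the sketch below. This is illustrative only: it uses plain subprocess calls and the ToolAdapter interface shown later in this page, not AWB's internal module API, and the task dict keys mirror the task YAML format.

import subprocess, tempfile
from pathlib import Path

async def run_one_task(task: dict, adapter) -> dict:
    # Illustrative sketch of the harness loop, not AWB's actual internals.
    workspace = Path(tempfile.mkdtemp(prefix=f"{task['id']}-"))
    # 1. Fresh clone at the pinned commit
    subprocess.run(["git", "clone", task["repo"]["url"], str(workspace)], check=True)
    subprocess.run(["git", "checkout", task["repo"]["commit"]], cwd=workspace, check=True)
    # 2. Setup commands (venv + pip, or npm)
    for cmd in task["repo"]["setup_commands"]:
        subprocess.run(cmd, shell=True, cwd=workspace, check=True)
    # 3. Baseline lint/security counts would be captured here (pre-change deltas)
    # 4. Same prompt, turn budget, and timeout for every tool
    result = await adapter.execute(
        task["issue"]["description"], workspace,
        max_turns=task["constraints"]["max_iterations"],
        timeout_seconds=task["constraints"]["timeout_seconds"],
    )
    # 5. Verification: test commands plus the partial-credit rubric from the task YAML
    tests_pass = all(
        subprocess.run(cmd, shell=True, cwd=workspace).returncode == 0
        for cmd in task["verification"]["test_commands"]
    )
    rubric = {c["criterion"]: subprocess.run(c["check"], shell=True, cwd=workspace).returncode == 0
              for c in task["verification"]["partial_credit"]}
    return {"tool_result": result, "tests_pass": tests_pass, "partial_credit": rubric}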

Scoring System

Seven dimensions, sigmoid-normalized with per-task baselines derived from difficulty:

Dimension Weight What It Measures
Correctness 55% Pass/fail (60%) + partial credit rubric (40%)
Cost efficiency 15% Estimated USD per task
Speed 10% Wall-clock seconds vs. estimated task time
Code quality 10% Lint warning delta (pre vs. post)
Reliability 5% Pre-existing tests broken by the change
Security 3% New security issues introduced
Efficiency 2% Tool turns used vs. task max

Sigmoid curve: score = 100 / (1 + exp(k * (value - baseline)))

  • Optimal performance (excellent) → ~95
  • Baseline performance (adequate) → ~50
  • Worse than baseline → smooth decay toward 0, never negative
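
As a concrete illustration, here is a minimal Python sketch of that curve for a lower-is-better metric such as cost. Deriving k so that the optimal value lands at ~95 is an assumption about how AWB picks the slope; the real constant may differ.

import math

def sigmoid_score(value: float, optimal: float, baseline: float) -> float:
    # baseline -> 50, optimal -> ~95, far beyond baseline -> approaches 0 (never negative)
    k = math.log(19) / (baseline - optimal)          # assumed: pins the optimal value at ~95
    return 100 / (1 + math.exp(k * (value - baseline)))

# Easy-task cost baselines from the table below: optimal $0.05, baseline $0.30
print(sigmoid_score(0.05, 0.05, 0.30))   # ~95.0
print(sigmoid_score(0.30, 0.05, 0.30))   # 50.0
print(sigmoid_score(1.50, 0.05, 0.30))   # ~0.0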

Difficulty-weighted aggregation: hard tasks count 2.5×, medium 1.5×, easy 1.0×. A tool that solves hard tasks beats one that only solves easy ones even if the easy-task count is higher.
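
A small sketch of that aggregation, using the weights above (AWB's real composite also folds in the per-metric weights, which are simplified away here):

DIFFICULTY_WEIGHT = {"easy": 1.0, "medium": 1.5, "hard": 2.5}

def weighted_composite(task_scores: list[tuple[str, float]]) -> float:
    # task_scores: [(difficulty, per-task composite), ...]
    total = sum(DIFFICULTY_WEIGHT[d] for d, _ in task_scores)
    return sum(DIFFICULTY_WEIGHT[d] * s for d, s in task_scores) / total

# Suite of 6 easy + 3 hard tasks: solving only the 3 hard ones beats solving only the 6 easy ones
print(weighted_composite([("easy", 90)] * 6 + [("hard", 0)] * 3))   # 40.0
print(weighted_composite([("easy", 0)] * 6 + [("hard", 90)] * 3))   # 50.0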

Per-task baselines by difficulty:

Metric Easy Medium Hard
Cost optimal / baseline $0.05 / $0.30 $0.20 / $1.00 $1.00 / $3.00
Speed 50% / 100% of estimated_minutes same same
Iterations 3 / max_iters 8 / max_iters 15 / max_iters

The 100 Tasks

Real open-source repos, pinned to release tag SHAs. Setup runs in under 15 seconds via venv + pip (Python) or npm (TypeScript).

Category Count Easy / Med / Hard What It Tests
bug-fix 12 7 / 1 / 4 Root cause analysis, test-first diagnosis, N+1 queries
feature-addition 9 3 / 0 / 6 Convention adherence, ambiguous requirements, Dockerfiles, TypeScript typing
refactoring 11 5 / 2 / 4 Multi-file consistency, O(n^2) optimization, CI/CD config, async migration
code-review 9 4 / 2 / 3 Security review (report-only), concurrency analysis, migration guides, OWASP
debugging 10 7 / 0 / 3 Performance profiling, regression bisection, stack trace diagnosis
multi-file 7 4 / 0 / 3 Merge conflicts, plugin systems, auth chains
legacy-code 12 9 / 0 / 3 SQLAlchemy 2.0 migration, 20-file codebase navigation, dead code removal
workflow 30 9 / 12 / 9 Completeness tracking, convention discovery, security methodology, context utilization, async safety, config extraction, test-driven implementation

Repos used: FastAPI, httpx, Flask, Starlette, Click, Pydantic, SQLAlchemy 2.0, Hono

Task IDs: BF-001–014 · FA-001–010 · RF-001–012 · CR-001–010 · DB-001–011 · MF-001–009 · LC-001–012 · WF-001–030

Capability Profiles

Each task maps to 1–3 capabilities, producing a radar chart of tool strengths:

Capability Tasks What It Measures
code_comprehension 41 Understanding existing code before modifying
framework_knowledge 35 Knowing API patterns (Pydantic v2, async SQLAlchemy, etc.)
bug_diagnosis 26 Structured root cause analysis, test-first diagnosis
refactoring_discipline 26 Changing code without breaking behavior
multi_file_reasoning 23 Coordinating changes across multiple files
completeness_tracking 10 Following all requirements, not stopping at 80%
convention_adherence 10 Discovering and following project conventions
security_methodology 10 Applying security checklists systematically
context_discovery 10 Reading project docs and config before editing
test_writing 10 Writing correct, meaningful tests
security_awareness 10 Identifying and fixing vulnerabilities
cost_discipline derived Token efficiency across all tasks

Example awb gap output:

Capability Profile
------------------
code_comprehension    ████████████████████  82.4  (n=27, conf=high)
framework_knowledge   ████████████████░░░░  68.1  (n=26, conf=high)
refactoring_discipline████████████████░░░░  65.3  (n=23, conf=high)
multi_file_reasoning  ████████████░░░░░░░░  51.2  (n=20, conf=high)
bug_diagnosis         ███████████████░░░░░  63.7  (n=17, conf=med)
test_writing          ██████████░░░░░░░░░░  44.1  (n=8,  conf=low)
security_awareness    █████████████░░░░░░░  55.8  (n=8,  conf=low)

Systematic Patterns
-------------------
- Fails 70%+ of multi_file_reasoning tasks → consider multi-agent workflows
- Token spend on failed hard tasks: $4.20 → add early-exit heuristics
- No failures on easy tasks → baseline is solid

Top Suggestions
---------------
1. Enable subagent mode for tasks spanning >3 files (impact: high)
2. Add repo-level CLAUDE.md with architecture overview (impact: medium)
3. Use --think flag for debugging tasks (impact: medium)

Workflow Lift Score

When awb run executes both vanilla and custom (the default), it produces a Workflow Lift — a single number measuring how much your workflow configuration improves over the raw model:

Workflow Lift: +4.2 pts  (p=0.031, significant)
  Pass rate: vanilla 62% vs custom 68%
  Wins: custom 8 / vanilla 3 / ties 69

  Where your workflow helps:
    bug diagnosis             +12.3 pts  (17 tasks)
    multi file reasoning       +8.1 pts  (20 tasks)
    security awareness         +5.4 pts  (10 tasks)

  Where it hurts:
    cost discipline            -4.2 pts  (80 tasks)

  Biggest task-level differences:
    BF-014   +40  (V=35 C=75)
    LC-012   +15  (V=65 C=80)

The lift is computed per-task (configured score minus vanilla score), averaged across all tasks, and tested for statistical significance. Capability-level breakdowns show where your workflow configuration actually helps vs. adds overhead.
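
A minimal sketch of that computation, assuming the significance test is a standard two-sided sign test with ties excluded (AWB's exact procedure may differ in details):

import math

def workflow_lift(pairs: list[tuple[float, float]]) -> tuple[float, float]:
    # pairs: (vanilla_score, custom_score) for each task
    deltas = [custom - vanilla for vanilla, custom in pairs]
    lift = sum(deltas) / len(deltas)
    wins, losses = sum(d > 0 for d in deltas), sum(d < 0 for d in deltas)
    n, k = wins + losses, min(wins, losses)          # ties drop out of the sign test
    p = min(1.0, 2 * sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n)
    return lift, p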

CLI Reference

Command Description
awb run [tool] [options] Run benchmark tasks
awb gap <run_dir> Analyze capability gaps and generate improvement suggestions
awb compare <run1> <run2> Compare two runs with significance testing
awb export <run_dir> -o file.json Export results in external submission format
awb submit <file.json> Validate and display an external submission
awb compare-submissions <a.json> <b.json> Cross-tool comparison with statistics
awb quickstart Verify setup: tools available, tasks load, validation passes
awb info <task_id> Show task details
awb tools List registered adapters and availability
awb validate Validate all task YAMLs against schema
awb leaderboard Generate HTML leaderboard from run results
awb workflow <subcommand> Export, validate, diff, or init workflow descriptors
awb stability <run_dirs>... Per-task score stability report
awb calibrate-difficulty <run_dirs>... [--apply] Recalibrate difficulty labels from empirical pass rates
awb calibrate-timeouts <run_dirs>... [--apply] Tighten timeouts from empirical p95 data

Common options for awb run:

awb run                            # all tools, all tasks, 3 runs
awb run claude-code-custom         # single tool
awb run -t BF-001                  # single task
awb run --category legacy-code     # filter by category
awb run --difficulty hard          # filter by difficulty
awb run --capability bug_diagnosis # filter by capability
awb run --runs 1 --dry-run        # preview without executing
awb run --resume                   # skip tasks with existing results
awb run --parallel -j 4            # run 4 tasks concurrently
awb run --adaptive                 # re-run near-miss tasks (60-99%) after initial pass

Adding Tasks

Tasks live in awb/tasks/<category>/. Copy awb/tasks/_template.yaml:

id: BF-012
category: bug-fix
title: "Fix response_model silently dropping extra fields in FastAPI"
difficulty: easy
estimated_minutes: 15
languages: [python]
capabilities: [framework_knowledge, test_writing]

repo:
  url: "https://github.com/tiangolo/fastapi"
  commit: "628c34e0"
  setup_commands:
    - "python3 -m venv .venv && source .venv/bin/activate && pip install -e '.[all]'"

issue:
  description: |
    The endpoint's response_model silently strips extra fields...
  files_to_examine:
    - "fastapi/routing.py"

verification:
  test_commands:
    - "source .venv/bin/activate && python3 -m pytest tests/test_extra_fields.py -v"
  partial_credit:
    - criterion: "Uses Pydantic v2 ConfigDict"
      points: 50
      check: "grep -q 'ConfigDict' tests/test_extra_fields.py"
    - criterion: "Tests pass"
      points: 50
      check: "source .venv/bin/activate && python3 -m pytest tests/test_extra_fields.py -v"

constraints:
  max_iterations: 20
  timeout_seconds: 1800

Run awb validate to check your task before opening a PR. Full guide: CONTRIBUTING.md

Adding Tools

Implement the ToolAdapter ABC in awb/adapters/:

from awb.adapters.base import ToolAdapter, ToolResult
from pathlib import Path

class MyToolAdapter(ToolAdapter):
    name = "my-tool"
    display_name = "My Tool"

    async def execute(self, prompt: str, workspace: Path,
                      max_turns: int = 20, timeout_seconds: int = 1800) -> ToolResult:
        # Drive the tool against `workspace` with the task prompt; return a ToolResult.
        ...

    def check_available(self) -> bool:
        # Return True if the tool is installed and usable on this machine.
        ...

    def get_config_hash(self) -> str:
        # Return a stable hash of the tool's configuration so runs record what was benchmarked.
        ...

Register in awb/adapters/registry.py and add an entry point in pyproject.toml.
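
The entry point might look something like the snippet below; the group name here is a guess, so copy whatever group the built-in adapters use in awb's pyproject.toml.

# Hypothetical entry-point group name; mirror the built-in adapters' group.
[project.entry-points."awb.adapters"]
my-tool = "awb.adapters.my_tool:MyToolAdapter"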

External Submissions

Anyone can share results using the submission format defined in results/submission-schema.json:

awb run --runs 3
awb export results/runs/<run_dir>/ -o my-results.json
awb submit my-results.json                        # validate locally
awb compare-submissions a.json b.json             # compare with significance testing

The format captures tool version, model, hardware class, and per-task run results. Hardware classes (e.g., apple_m5_24gb, linux_x86_16gb) enable fair speed comparisons: speed is only compared within the same tier.
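
For a rough idea of the shape, shown here as a Python dict with illustrative field names (results/submission-schema.json is the authoritative definition):

# Field names below are illustrative, not the schema; see results/submission-schema.json.
submission = {
    "tool": "my-tool",
    "tool_version": "1.2.3",
    "model": "example-model",
    "hardware_class": "apple_m5_24gb",   # speed is only compared within the same tier
    "results": [
        {"task_id": "BF-001", "run": 1, "passed": True, "composite_score": 78.4},
    ],
}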

Statistical Framework

  • Confidence intervals via t-distribution (no scipy required for core scoring); see the sketch after this list
  • Significance testing via sign test for paired tool comparison
  • Integrity checks: contamination detection (completions <10s flagged), variance anomalies (identical times/tokens across runs)
  • Weight profiles: default, correctness_focused, production (see awb/scoring/weights.yaml)
  • Stability metric: per-task TaskStability (std_dev, score_range, is_unstable); high-variance tasks can be down-weighted in composite scoring
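
The confidence-interval piece needs nothing beyond the standard library. A sketch using standard two-sided 95% t critical values, not AWB's actual code:

import math, statistics

# Standard two-sided 95% t critical values for small samples; falls back to z for larger n.
T95 = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571,
       6: 2.447, 7: 2.365, 8: 2.306, 9: 2.262, 10: 2.228}

def mean_ci95(scores: list[float]) -> tuple[float, float, float]:
    n = len(scores)
    mean = statistics.mean(scores)
    sem = statistics.stdev(scores) / math.sqrt(n)    # standard error of the mean
    t = T95.get(n - 1, 1.96)
    return mean, mean - t * sem, mean + t * sem

print(mean_ci95([71.2, 74.8, 69.5]))   # e.g. composite scores from --runs 3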

Links

  • Methodology — Fair comparison principles, metric definitions, known limitations
  • Architecture — Module graph, data models, pipeline diagrams
  • Contributing — Adding tasks, tools, and submitting results
  • PyPI — pip install awb

License

MIT
