Structured AI-assisted development framework with plan lifecycle, review gates, and continuous improvement.


AgentScaffold

Stop paying for your AI agent to rediscover your codebase every session.

AgentScaffold is a governance framework and persistent knowledge graph for AI coding agents. It replaces the expensive pattern of agents reading dozens of files, grepping for symbols, and tracing dependencies from scratch -- with a single tool call that returns exactly what the agent needs.

The Problem

Every time you start a new session with Cursor, Claude Code, Codex, or any AI coding agent, it starts from zero. It reads your files. It greps for imports. It traces call chains. It burns through your token budget and subscription quota just to understand what it already understood yesterday.

On a moderately complex codebase, a single "understand this module" task can cost 12 file reads + 2 grep searches before the agent even starts working. A full plan review pulls in 10+ files. Getting oriented in a new codebase means reading 38+ files.

This is the hidden cost of agentic development: not the coding, but the context building.

The Solution

AgentScaffold builds a knowledge graph of your codebase -- code structure, dependencies, governance artifacts, session history -- and exposes it through MCP tools that your agent calls instead of reading raw files.

Measured results from our latest evaluation harness run (79 scenarios, 100% pass rate):

| Task | Without AgentScaffold | With AgentScaffold | Savings |
|------|-----------------------|----------------------|---------|
| Understand a module and its dependents | 12 reads + 2 greps | 1 tool call | 97% fewer tokens, 93% fewer calls |
| Codebase orientation | 38 file reads | 2 tool calls | 77% fewer tokens, 95% fewer calls |
| Impact analysis (blast radius) | 12 file reads | 1 tool call | 88% fewer tokens, 92% fewer calls |
| Find all code matching a concept | 8 file reads | 1 tool call | 44% fewer tokens, 88% fewer calls |
| Full plan review with evidence | 10 file reads | 1 tool call | 90% fewer calls (richer output) |

Aggregate capability results: 58% average token reduction, 91% average call reduction, 2.8x overall compression.

Capability vs behavioral reality

We report two views so that results are not overstated:

  • Capability efficiency (raw): what the tools can do when selected (58% average token reduction and 91% average call reduction).
  • Behavior-adjusted efficiency: capability gains multiplied by a tool-routing adherence proxy.

Current harness outputs:

| View | Token Reduction | Call Reduction |
|------|-----------------|----------------|
| Raw capability | 58.3% | 91.4% |
| Behavioral (replay-adjusted) | 43.7% | 68.5% |
| Quality-adjusted behavioral | 39.4% | 61.7% |

Behavioral and quality-adjusted values come from replay traces (observed tool-call sequences + quality parity checks), not just phrase-level intent matching.
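The two views are consistent with a single adherence proxy of roughly 0.75 applied to both raw numbers. A quick back-of-envelope check against the table above (a reading of the published figures, not the harness's exact formula):

```python
# Raw capability numbers from the table.
raw_token_reduction = 58.3
raw_call_reduction = 91.4

# Behavior-adjusted numbers from the same table.
behavioral_token_reduction = 43.7
behavioral_call_reduction = 68.5

# Implied adherence proxy: behavioral / raw, per column.
token_adherence = behavioral_token_reduction / raw_token_reduction
call_adherence = behavioral_call_reduction / raw_call_reduction

print(round(token_adherence, 2), round(call_adherence, 2))  # 0.75 0.75
```

Both columns imply the same ~0.75 routing-adherence factor, which is what "capability gains multiplied by a tool-routing adherence proxy" means in practice.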

Every tool call your agent doesn't make is money you don't spend on API tokens or subscription overages. And because the governance framework catches flawed assumptions and missing edge cases before implementation, you also spend less time fixing bugs that should never have been written.

What It Does

AgentScaffold combines two capabilities that don't exist together in any other tool:

1. Agent Governance Framework

A structured development workflow that teaches your AI agent to follow a plan lifecycle with quality gates:

  • Plan lifecycle: Draft -> Review -> Ready -> In Progress -> Complete
  • Adversarial reviews: Devil's advocate, expansion analysis, domain-specific reviews -- all run before a single line of code is written
  • Interface contracts: Formal declarations of module boundaries, versioned and tracked
  • Retrospectives: Post-execution learning that feeds back into the process
  • Session tracking: State files that persist context across chat sessions
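The lifecycle above amounts to a small state machine where each gate blocks a skip-ahead transition. A minimal sketch of that idea (the transition map and function names are illustrative, not AgentScaffold's internal implementation):

```python
# Allowed transitions in the plan lifecycle; Review can send a plan back to Draft.
TRANSITIONS = {
    "Draft": {"Review"},
    "Review": {"Ready", "Draft"},
    "Ready": {"In Progress"},
    "In Progress": {"Complete"},
    "Complete": set(),
}

def advance(state: str, next_state: str) -> str:
    """Move a plan to next_state, refusing transitions that skip a gate."""
    if next_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition: {state} -> {next_state}")
    return next_state

state = "Draft"
for step in ["Review", "Ready", "In Progress", "Complete"]:
    state = advance(state, step)
print(state)  # Complete
```

Trying `advance("Draft", "Complete")` raises, which is the point of the quality gates: no plan reaches implementation without passing review.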

Think of it as a virtual sprint team. Most AI agents work alone -- they take instructions and start coding. AgentScaffold puts your agent on a team. Before it writes a single line of code, the plan faces a devil's advocate who asks "what if this breaks?", an expansion reviewer who asks "what did you miss?", and a domain expert -- a quant architect, a UX designer, a security engineer -- who pressure-tests the approach through the lens of your specific domain. These adversarial reviews catch flawed assumptions, missing edge cases, and architectural blind spots before they become bugs in production.

After implementation, the sprint continues. A post-implementation review verifies what was built against what was planned. A retrospective captures what worked, what didn't, and what to do differently. Those findings flow into the learnings tracker, which feeds back into the agent's rules and templates -- so the next sprint starts sharper than the last. This is the same continuous improvement loop that makes experienced engineering teams get better over time, applied to your AI agent.

The result: tighter plans that survive expert scrutiny, more robust implementations with edge cases identified up front, and a codebase that accumulates institutional knowledge rather than losing it between sessions.

2. Persistent Knowledge Graph

A KuzuDB-backed graph that indexes your codebase once and serves it to agents instantly:

  • Code structure: Functions, classes, methods, interfaces, import chains, call graphs -- across Python, TypeScript, Go, Rust, Java, C, and C++
  • Governance artifacts: Plans, contracts, learnings, review findings linked to the code they reference
  • Community detection: Leiden algorithm clustering identifies tightly coupled modules
  • Semantic search: Hybrid search combining structural graph queries with vector embeddings
  • Incremental indexing: SHA-256 content hashing means only changed files are re-processed
  • Contract drift detection: Automatically surfaces methods declared in contracts but missing from code

The graph is exposed via MCP tools that any compatible agent can call, or through the CLI for direct use.
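The incremental-indexing bullet describes standard content-addressed change detection: hash each file's content, compare against the stored hash, and re-process only on mismatch. A minimal sketch of the technique (function and variable names are illustrative, not AgentScaffold's API):

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """Content hash used to detect changed files."""
    return hashlib.sha256(data).hexdigest()

def files_to_reindex(files: dict[str, bytes], stored: dict[str, str]) -> list[str]:
    """Return paths whose content hash differs from the stored index."""
    return [path for path, content in files.items()
            if stored.get(path) != sha256_bytes(content)]

stored = {"a.py": sha256_bytes(b"def f(): pass\n")}
current = {
    "a.py": b"def f(): pass\n",      # unchanged -> skipped
    "b.py": b"def g(): return 1\n",  # new file -> re-indexed
}
print(files_to_reindex(current, stored))  # ['b.py']
```

Because the hash is over content rather than modification time, renames and touch-without-change events do not trigger re-indexing.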

Quick Start

pip install agentscaffold
cd my-project
scaffold init
scaffold index          # Build the knowledge graph

The init command scaffolds your project with:

  • docs/ai/ -- templates, prompts, standards, state files
  • AGENTS.md -- rules your AI agent follows automatically
  • .cursor/rules.md -- Cursor-specific rules
  • scaffold.yaml -- your project's framework configuration
  • justfile + Makefile -- task runner shortcuts
  • .github/workflows/ -- CI with security scanning

The index command builds the knowledge graph at .scaffold/graph.db, enabling search, reviews, impact analysis, and session memory.
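For orientation, a scaffold.yaml might look something like the following. Every key here is an illustrative assumption about the schema based on the features described in this README, not documented configuration:

```yaml
# Hypothetical scaffold.yaml -- all field names are guesses for illustration.
project: my-project
rigor: standard          # minimal | standard | strict
domains:
  - webapp
graph:
  path: .scaffold/graph.db
```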

Install with language support

pip install agentscaffold[graph]              # Python, JS, TS
pip install agentscaffold[graph-all-languages] # + Go, Rust, Java, C, C++
pip install agentscaffold[all]                # Everything

How Agents Use It

MCP Tools (for AI agents)

When you run scaffold mcp, these tools become available to your agent.

You don't need to memorize tool names. AgentScaffold ships with intent descriptions and an MCP-first routing policy -- natural language trigger phrases plus fallback rules that push the agent to use MCP tools first, then allow direct reads/search if output is insufficient. Say "let's review plan 42" and the agent calls scaffold_prepare_review. Say "where did we leave off?" and it calls scaffold_orient. Run scaffold agents cursor (or windsurf, claude) to generate platform-specific rules that wire this up for your IDE.
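The trigger-phrase idea can be pictured as a small phrase-to-tool routing table with a fallback to direct reads. A toy sketch (the phrases, matching logic, and fallback name are illustrative, not the shipped routing policy):

```python
# Map natural-language trigger phrases to MCP tool names (illustrative).
ROUTES = {
    "review plan": "scaffold_prepare_review",
    "where did we leave off": "scaffold_orient",
    "blast radius": "scaffold_impact",
}

def route(utterance: str) -> str:
    """Pick the first tool whose trigger phrase appears in the utterance."""
    text = utterance.lower()
    for phrase, tool in ROUTES.items():
        if phrase in text:
            return tool
    return "direct_read"  # fallback rule: allow raw file reads/search

print(route("Let's review plan 42"))     # scaffold_prepare_review
print(route("Where did we leave off?"))  # scaffold_orient
```

The MCP-first policy is exactly this shape: tool routes are tried first, and direct file access is the fallback when no route matches or the tool output is insufficient.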

Composite tools -- single calls that replace entire multi-step workflows:

| Tool | What It Replaces |
|------|------------------|
| scaffold_prepare_review | Reading plan, contracts, learnings, and source to prepare a full adversarial review |
| scaffold_prepare_implementation | Tracing dependencies, checking contracts, and verifying readiness before coding |
| scaffold_orient | Reading 38+ files to understand project state, blockers, and next steps |
| scaffold_decision_context | Tracing the full decision chain (ADRs, spikes, studies) behind a plan |
| scaffold_staleness_check | Manually comparing plan dates, file changes, and overlapping completed work |
| scaffold_compare_plans | Reading two plans and their file impacts to identify conflicts |
| scaffold_prepare_retro | Gathering verification results, study outcomes, and retro insights |
| scaffold_find_studies | Searching study files by topic, tags, or outcome |
| scaffold_find_adrs | Searching architecture decision records by topic or status |

Granular tools -- building blocks for custom queries:

| Tool | What It Replaces |
|------|------------------|
| scaffold_context | Reading 12+ files to understand a symbol, its callers, and its layer |
| scaffold_impact | Manually tracing imports and grep-searching for consumers |
| scaffold_search | Multiple grep passes to find code by concept |
| scaffold_review_context | Reading plan files, contracts, and source to prepare a single review type |
| scaffold_stats | Scanning the entire directory tree to understand codebase shape |
| scaffold_validate | Running separate staleness checks and contract verification |
| scaffold_query | Writing ad-hoc Cypher queries against the knowledge graph |
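Conceptually, scaffold_impact's blast radius is reverse reachability over the import graph: start from a changed module and walk "who depends on me" edges transitively. A sketch of that idea (the graph here is toy data, not the KuzuDB schema):

```python
from collections import deque

# imports[x] = modules that x imports; invert it to get dependents.
imports = {
    "app": ["services", "models"],
    "services": ["models", "utils"],
    "models": ["utils"],
    "utils": [],
}
dependents: dict[str, set[str]] = {m: set() for m in imports}
for mod, deps in imports.items():
    for dep in deps:
        dependents[dep].add(mod)

def blast_radius(changed: str) -> set[str]:
    """All modules transitively affected by a change to `changed` (BFS)."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        mod = queue.popleft()
        for parent in dependents[mod]:
            if parent not in seen:
                seen.add(parent)
                queue.append(parent)
    return seen

print(sorted(blast_radius("utils")))  # ['app', 'models', 'services']
```

Doing this by hand is the "manually tracing imports and grep-searching for consumers" work the table refers to; with the graph prebuilt it collapses to one query.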

CLI (for humans)

scaffold plan create my-feature        # Create a plan from template
scaffold plan lint --plan 001          # Validate plan structure
scaffold plan status                   # Dashboard of all plans
scaffold validate                      # Run all enforcement checks
scaffold retro check                   # Find missing retrospectives
scaffold agents generate               # Regenerate AGENTS.md
scaffold agents cursor                 # Regenerate .cursor/rules.md
scaffold import chat.json --format chatgpt  # Import conversation
scaffold ci setup                      # Generate CI workflows
scaffold metrics                       # Plan analytics
scaffold graph search "data routing"   # Hybrid search
scaffold graph verify                  # Graph accuracy check
scaffold review brief 42               # Pre-review brief for plan 42
scaffold review challenges 42          # Adversarial challenges with evidence
scaffold session start --plan 42       # Start a tracked coding session

Execution Profiles

Interactive (default): Human + AI agent in an IDE conversation. The agent follows AGENTS.md and asks questions when uncertain.

Semi-Autonomous (opt-in): Agent invoked from CLI/CI without a human present. Adds session tracking, safety boundaries, notification hooks, structured PR output, and cautious execution rules.

Both profiles coexist in the same AGENTS.md. The agent self-selects based on invocation context.

Rigor Levels

  • Minimal: Lightweight gates for prototypes and small projects
  • Standard: Full plan lifecycle with reviews, contracts, and retrospectives
  • Strict: All gates enforced, all plans require approval

Domain Packs

The governance framework is domain-aware. Domain packs teach the adversarial reviewers to think like specialists in your field -- a trading pack adds a quant architect who challenges risk assumptions and position sizing logic, a webapp pack adds a UX reviewer who flags accessibility gaps and performance regressions. Each pack includes tailored review prompts, implementation standards, and approval gates specific to the domain:

| Pack | Focus |
|------|-------|
| trading | Quantitative finance, RL, traceability |
| webapp | UX/UI, accessibility, performance budgets |
| mlops | Model lifecycle, experiment tracking, drift detection |
| data-engineering | Pipeline quality, schema evolution, SLAs |
| api-services | API design, backward compatibility, contract testing |
| infrastructure | IaC, deployment safety, cost analysis |
| mobile | Platform guidelines, offline-first, app store compliance |
| game-dev | Game loops, ECS, frame budgets |
| embedded | Memory constraints, real-time deadlines, OTA safety |
| research | Reproducibility, statistical rigor, experiment protocol |

scaffold domain add trading
scaffold domain add webapp

Documentation

Full documentation is in docs/.

License

MIT
