
Autonomous idea-to-product pipeline — one idea in, working project out


Summon


When I have an idea, I want a working project — not "almost works".

You shouldn't need to write a spec, plan an architecture, scaffold files, wire up tests, and package a release just to validate an idea. Summon collapses the entire software development lifecycle into a single command.

summon run "CLI tool that converts markdown to PDF" -o ./md2pdf

What comes out isn't scaffolding or boilerplate. It's a working project with real implementations, tests that pass, and packaging ready to go.

The problem

Turning an idea into a working project requires a long chain of tasks that are individually straightforward but collectively exhausting:

  • Clarify what you actually mean (resolve ambiguities in your own idea)
  • Write a spec, then an architecture doc, then a design
  • Implement each component, review the code, fix issues
  • Wire everything together, validate imports, write tests, fix failures
  • Package it, write docs, set up CI

Each step depends on the last. Skip one and the rest fall apart. Do them all manually and you've burned hours before writing a line of real logic.

Summon's job: take a plain-English description and do all of that — automatically, with quality gates and fix loops at every stage — so you get a working project back.

How it works

Your idea flows through six stages. Each stage has built-in feedback loops that catch and fix problems before moving on.

flowchart TD
    A["Your idea (plain English)"] --> B

    B["1. Ideate"]:::stage --> C
    B -.- B1["Detect ambiguities → Self-clarify\n→ Write spec → Validate"]

    C["2. Plan"]:::stage --> D
    C -.- C1["Write PRD → Write SDD → Critic review"]
    C1 -- "rejected (up to 3x)" --> C1

    D["3. Design"]:::stage --> E
    D -.- D1["Architect HLD → Split into components"]

    E["4. Build"]:::stage --> F
    E -.- E1["Per component, in parallel:\nLLD → Code → Review"]
    E1 -- "rejected (up to 2x)" --> E1

    F["5. Test"]:::stage --> G
    F -.- F1["Integrate → Degeneracy check\n→ Import validation → Unit tests\n→ Acceptance tests"]
    F1 -- "failing (auto-fix loops)" --> F1

    G["6. Release"]:::stage --> H
    G -.- G1["Package → Docs → GitHub config"]

    H["Working project"]:::output

    classDef stage fill:#2d333b,stroke:#58a6ff,color:#e6edf3
    classDef output fill:#1a7f37,stroke:#3fb950,color:#fff

If tests fail, a bug-fixer agent reads the errors and patches the code. If the critic rejects the plan, the planner revises. If imports break, an import-fixer rewires them. If generated code is degenerate (all stubs, repetitive, or truncated), it gets regenerated from scratch. Every fix loop has a retry cap so runs always terminate.
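Every one of those fix loops shares the same shape: check, fix on failure, stop at a cap. A minimal sketch of that retry-capped loop (`check` and `fix` are stand-ins for Summon's gate and fixer agents):

```python
def fix_loop(artifact, check, fix, max_retries=3):
    """Run `check`; on failure, apply `fix` at most `max_retries` times.

    Returns (artifact, ok). The cap guarantees termination even when the
    fixer never converges, mirroring Summon's bounded fix loops.
    """
    for attempt in range(max_retries + 1):
        ok, errors = check(artifact)
        if ok:
            return artifact, True
        if attempt < max_retries:
            artifact = fix(artifact, errors)
    return artifact, False  # retries exhausted; surface the failure
```

The test stage's "failing (auto-fix loops)" edge, for example, would plug the test runner in as `check` and the bug-fixer agent in as `fix`.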

Quickstart

# Install
git clone https://github.com/npow/summon && cd summon
uv sync --all-extras

# Set your API key
export ANTHROPIC_API_KEY=sk-...

# Build something
summon run "youtube transcriber that takes a URL and returns the transcript as text" -o ./yt-transcriber

Stepped workflow

Run stages individually to inspect and edit between steps:

summon ideate "your idea"              # idea → spec.json
summon plan my-tool.spec.json          # spec → plan.json
summon design my-tool.plan.json        # plan → design.json
summon build my-tool.design.json -o .  # design → working project

Each stage outputs a JSON file you can read, modify, and feed into the next stage.
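Because the intermediate artifacts are plain JSON, edits between stages can be scripted. A sketch, assuming invented field names (inspect your actual spec JSON for the real schema):

```python
import json
from pathlib import Path

spec_path = Path("my-tool.spec.json")

# Stand-in for the spec that `summon ideate` would write; the field
# names here are assumptions for illustration, not Summon's schema.
spec_path.write_text(json.dumps({"name": "my-tool", "summary": "..."}))

# Load, tweak, and rewrite the spec before handing it to `summon plan`.
spec = json.loads(spec_path.read_text())
spec["name"] = "md2pdf"
spec_path.write_text(json.dumps(spec, indent=2))
```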

Configuration

Works with Claude (default) or OpenAI models:

models:
  supervisor: "claude-sonnet-4-20250514"   # or "gpt-4o"
  coder: "claude-sonnet-4-20250514"
  test_writer: "gpt-4o-mini"               # cheaper models for simpler tasks

summon run "your idea" -c summon-openai.yaml   # full OpenAI config
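Per-role model routing like this typically resolves to a lookup with a fallback default. A sketch (the role names and default model here are illustrative, not Summon's internals):

```python
DEFAULT_MODEL = "claude-sonnet-4-20250514"

def model_for(role, config):
    # Look up the model configured for a role; fall back to the default
    # so a partial config (e.g. only overriding test_writer) still works.
    return config.get("models", {}).get(role, DEFAULT_MODEL)

# A config that overrides only the cheap role and inherits the rest.
config = {"models": {"test_writer": "gpt-4o-mini"}}
```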

Options

-c, --config PATH    Config file (default: summon.yaml)
-o, --output PATH    Output directory
-v, --verbose        Show what's happening
--skip-gates         Skip quality gates
--dry-run            Skip GitHub/publishing

Limitations

  • Python-only (for now). The full pipeline — degeneracy detection, import validation, acceptance tests, and packaging — is built for Python projects. TypeScript and Go have basic dep-install and test-run support but no quality gates or fix loops.
  • Single-process CLI tools and libraries. Summon works best for self-contained projects: CLI tools, libraries, data scripts. It doesn't generate infrastructure, databases, frontends, or multi-service architectures.
  • LLM cost. A full run makes many LLM calls across six stages. Simple ideas may cost a few dollars; complex ones with multiple retry loops will cost more.
  • No interactive clarification. Ambiguities in your idea are resolved by the LLM, not by asking you. If the LLM guesses wrong, edit the spec JSON and re-run from that stage.

Requirements

Contributing

git clone https://github.com/npow/summon && cd summon
uv sync --all-extras
uv run pytest tests/ -v

License

MIT

Project details


Download files


Source Distribution

summon_ai-0.1.0.tar.gz (45.6 kB)


Built Distribution


summon_ai-0.1.0-py3-none-any.whl (63.0 kB)


File details

Details for the file summon_ai-0.1.0.tar.gz.

File metadata

  • Download URL: summon_ai-0.1.0.tar.gz
  • Upload date:
  • Size: 45.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for summon_ai-0.1.0.tar.gz:

  • SHA256: 8f5ba2bcbd270a569f19c9593b94790f0ef364b140a2f1b0c7e493f5738260ac
  • MD5: 16821fcbcfac1b9cc12c1f5f23c89ccd
  • BLAKE2b-256: 6e1b7d93e3f3f757fe07b35c2027bed0c0bc743324cec49a0ae5e664ee1ae0fd


Provenance

The following attestation bundles were made for summon_ai-0.1.0.tar.gz:

Publisher: publish.yml on npow/summon

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file summon_ai-0.1.0-py3-none-any.whl.

File metadata

  • Download URL: summon_ai-0.1.0-py3-none-any.whl
  • Upload date:
  • Size: 63.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for summon_ai-0.1.0-py3-none-any.whl:

  • SHA256: bc3ff5ccd6e2b9a466b9652489c77fad503568687742b6f83e2fd1e851eaf559
  • MD5: bf47fa1d0e8ca0e8057843b5a3bb7c78
  • BLAKE2b-256: 41ddb75990245838cd1c53140e6bf5228a1ee1e0c80b981adc1ad5c3cabd7149


Provenance

The following attestation bundles were made for summon_ai-0.1.0-py3-none-any.whl:

Publisher: publish.yml on npow/summon

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
