
Spec-driven, build-passing code.


👋 Welcome to Boot!

Your AI-powered code generator. Write a 5-line spec. Get a working code base that passes tests and builds.

Boot uses AI to generate production-ready code from simple specifications. No more boilerplate. No more setup hassle. Just describe what you want.


Install (30 seconds)

You'll need Python 3.11 or newer.

Step 1: Install the CLI

pip install boot-code

Step 2: Set your API keys

Boot needs an API key from OpenAI, Google Gemini, or both.

The simplest way is to create a .env file in the project directory where you plan to run the command.

# Copy the example .env file
cp .env.example .env

# Put your API keys inside .env
OPENAI_API_KEY="sk-..."
GEMINI_API_KEY="AIza..."
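Boot picks up whichever of these keys is present in the environment. As a rough illustration of that lookup (the variable names are the ones shown above; the helper itself is hypothetical, not part of Boot's API):

```python
import os

def pick_provider() -> str:
    """Return which provider a key is configured for.

    Checks the same environment variable names used in the .env
    example above. Illustrative only; Boot's actual selection logic
    may differ.
    """
    if os.environ.get("OPENAI_API_KEY"):
        return "openai"
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini"
    raise RuntimeError("Set OPENAI_API_KEY or GEMINI_API_KEY before running boot")

os.environ["OPENAI_API_KEEP" if False else "OPENAI_API_KEY"] = "sk-test"  # simulate a configured key
print(pick_provider())  # openai
```

If both keys are set, a tool like this would simply prefer the first one it checks; the --provider flag described under Advanced Usage exists precisely to make that choice explicit.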

Use (2 minutes)

Run the `boot` CLI and follow the prompts:

# Generate a new pipeline from your spec
boot generate path/to/your/spec.toml

# See all available commands and options
boot --help

That's it. Answer a few questions, and watch your pipeline appear.
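If you would rather script a run than type it interactively, the CLI can be driven from Python with the standard library's `subprocess`. A minimal sketch, assuming only the `boot generate` form shown above and that `boot` is on your PATH:

```python
import subprocess

def generate_cmd(spec_path: str) -> list[str]:
    """Build the argv for a `boot generate` run; returned for inspection."""
    return ["boot", "generate", spec_path]

cmd = generate_cmd("path/to/your/spec.toml")
print(" ".join(cmd))  # boot generate path/to/your/spec.toml

# To actually run it (requires boot-code installed and API keys configured):
# subprocess.run(cmd, check=True)
```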


What You Get

  • Complete project structure with all the files you need.
  • Working code that's ready to run on your data.
  • Unit tests to ensure quality and reliability.
  • Visualizations to help you see your results.
  • Documentation so your team understands the pipeline.

Why Boot?

Instead of spending hours writing boilerplate, Boot generates:

  • Production-ready code following best practices
  • Complete test suites with 90%+ coverage
  • Interactive visualizations for immediate insights
  • Professional documentation your team will love
  • Modern tooling (Streamlit, Black, pytest)

Performance Comparison

Metric          Manual Coding   With Boot
Time to MVP     4-8 hours       2 minutes
Lines of code   200-500         Generated
Test coverage   ~60%            90%+
Documentation   Minimal         Complete

Advanced Usage

Below is an example of more advanced usage, combining several flags at once:

boot generate examples/consumer_tech/spec.toml \
  --provider gemini \
  --model gemini-2.5-pro \
  --api-key "AIzaSy..." \
  --two-pass \
  --temperature 0.1 \
  --timeout 180 \
  --output-dir Desktop/ecommerce_gemini_pro

The --build-pass flag additionally verifies that the generated project builds (shown here run from a source checkout via Poetry):

poetry run boot generate examples/rust/my_rust_spec.toml --build-pass

Where:

  • --provider: Explicitly selects the LLM provider, overriding any default or .env setting.
  • --model: Specifies a particular model to use for the generation, rather than the default.
  • --api-key: Provides the API key directly on the command line, which takes precedence over any key in an .env file or other environment settings.
  • --two-pass: Enables the secondary review pass, where the initial code is sent back to the LLM for refinement and improvement.
  • --temperature: Sets the generation temperature to a very low value, making the output more deterministic and less random.
  • --timeout: Sets the API request timeout, which is useful for complex specifications that may take the model longer to process.
  • --output-dir: Specifies a custom directory for the generated project files, overriding the default generated_jobs location.
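When many of these flags are combined, it can help to assemble the command line programmatically. The helper below is illustrative (the flag names are exactly those documented above; the helper itself is not part of Boot):

```python
def build_command(spec: str, **opts) -> list[str]:
    """Assemble a `boot generate` argv from keyword options.

    Boolean options (e.g. two_pass=True) become bare flags; other
    values become `--name value` pairs. Underscores in option names
    map to hyphens in flags.
    """
    argv = ["boot", "generate", spec]
    for name, value in opts.items():
        flag = "--" + name.replace("_", "-")
        if value is True:
            argv.append(flag)
        elif value is not None and value is not False:
            argv += [flag, str(value)]
    return argv

print(build_command(
    "examples/consumer_tech/spec.toml",
    provider="gemini",
    model="gemini-2.5-pro",
    two_pass=True,
    temperature=0.1,
))
```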

Examples

Explore real-world use cases in the examples/ directory:

  • E-commerce - Top selling products analysis (SQL)
  • Healthcare - Patient length of stay analysis (SQL)
  • Finance - Stock volatility calculation (Python)
  • Energy - Renewable energy production analysis (Python)
  • Consumer Tech - Ad attribution pipeline (PySpark)

Supported Languages

Language   Framework   Use Case
Python     pandas      Data analysis, reporting
PySpark    Spark       Big data, distributed computing
SQL        dbt-style   Data warehousing, analytics

Development

See the DEVELOPER_GUIDE.md.


Support

  • 📖 Documentation: Check the docs/ directory.
  • 🐛 Issues: Report bugs on GitHub Issues.
