
Spec-driven, build-passing code.


👋 Welcome to Boot!

Your AI-powered code generator. Write a 5-line spec. Get a working code base that passes tests and builds.

Boot uses AI to generate production-ready code from simple specifications. No more boilerplate. No more setup hassle. Just describe what you want.


Install (30 seconds)

You'll need Python 3.11 or newer.

Step 1: Install the CLI

pip install boot-code

Step 2: Set your API keys

Boot needs an API key for OpenAI, Google Gemini, or both.

The simplest way is to create a .env file in the project directory where you plan to run the command.

# Copy the example file to .env
cp .env.example .env

# Put your API keys inside .env
OPENAI_API_KEY="sk-..."
GEMINI_API_KEY="AIza..."
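Boot picks these values up at startup. If you want the same behavior in your own scripts, here is a minimal stdlib-only sketch of a `.env`-style parser (illustrative only; Boot itself presumably uses a proper dotenv library, and `parse_dotenv` is a hypothetical helper):

```python
def parse_dotenv(text: str) -> dict[str, str]:
    """Parse KEY=value lines from .env-style text.

    Skips blank lines and '#' comments, and strips surrounding quotes.
    A minimal illustration, not the parser Boot actually uses.
    """
    env: dict[str, str] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = 'OPENAI_API_KEY="sk-..."\nGEMINI_API_KEY="AIza..."\n# a comment\n'
print(parse_dotenv(sample))
```

Real dotenv libraries also handle exports, multiline values, and interpolation; this sketch covers only the simple `KEY="value"` format shown above.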

Use (2 minutes)

Run the CLI and follow the prompts:

# Generate a new pipeline from your spec
boot generate path/to/your/spec.toml

# See all available commands and options
boot --help

That's it. Answer a few questions, and watch your pipeline appear.


What You Get

  • Complete project structure with all the files you need.
  • Working code that's ready to run on your data.
  • Unit tests to ensure quality and reliability.
  • Visualizations to help you see your results.
  • Documentation so your team understands the pipeline.

Why Boot?

Instead of spending hours writing boilerplate, Boot generates:

  • Production-ready code following best practices.
  • Complete test suites with 90%+ coverage.
  • Interactive visualizations for immediate insights.
  • Professional documentation your team will love.
  • Modern tooling (Streamlit, Black, pytest).

Performance Comparison

Metric          Manual Coding   With Boot
Time to MVP     4-8 hours       2 minutes
Lines of code   200-500         Generated
Test coverage   ~60%            90%+
Documentation   Minimal         Complete

Advanced Usage

Below is a more advanced invocation that uses most of the available flags:

boot generate examples/consumer_tech/spec.toml \
  --provider gemini \
  --model gemini-2.5-pro \
  --api-key "AIzaSy..." \
  --two-pass \
  --temperature 0.1 \
  --timeout 180 \
  --output-dir Desktop/ecommerce_gemini_pro

From a source checkout, the same command can be run through Poetry:

poetry run boot generate examples/rust/my_rust_spec.toml --build-pass

Where:

  • --provider: Explicitly selects the LLM provider, overriding any default or .env setting.
  • --model: Specifies a particular model to use for the generation, rather than the default.
  • --api-key: Provides the API key directly on the command line, which takes precedence over any key in an .env file or other environment settings.
  • --two-pass: Enables the secondary review pass, where the initial code is sent back to the LLM for refinement.
  • --temperature: Sets the sampling temperature; a low value such as 0.1 makes the output more deterministic.
  • --timeout: Sets the API request timeout, which is useful for complex specifications that may take the model longer to process.
  • --output-dir: Specifies a custom directory for the generated project files, overriding the default generated_jobs location.

Examples

Explore real-world use cases in the examples/ directory:

  • E-commerce - Top selling products analysis (SQL)
  • Healthcare - Patient length of stay analysis (SQL)
  • Finance - Stock volatility calculation (Python)
  • Energy - Renewable energy production analysis (Python)
  • Consumer Tech - Ad attribution pipeline (PySpark)

Supported Languages

Language   Framework   Use Case
Python     pandas      Data analysis, reporting
PySpark    Spark       Big data, distributed computing
SQL        dbt-style   Data warehousing, analytics
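To give a feel for the kind of analysis these pipelines perform, here is a hand-written sketch of the e-commerce "top selling products" example in plain stdlib Python (the code Boot actually generates uses pandas and is more complete; this is not Boot output):

```python
from collections import Counter

def top_selling_products(orders, n=3):
    """Return the n best-selling product names by total quantity ordered.

    orders: iterable of (product_name, quantity) pairs.
    """
    totals = Counter()
    for product, quantity in orders:
        totals[product] += quantity
    return [product for product, _ in totals.most_common(n)]

orders = [("widget", 5), ("gadget", 3), ("widget", 2), ("doohickey", 1)]
print(top_selling_products(orders, n=2))  # → ['widget', 'gadget']
```

A generated pandas version would typically express the same logic as a `groupby("product")["quantity"].sum()` followed by a sort, plus tests and a chart of the results.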

Development

See DEVELOPER_GUIDE.md.


Support

  • 📖 Documentation: Check the docs/ directory.
  • 🐛 Issues: Report bugs on GitHub Issues.


Download files


Source Distribution

boot_code-0.1.2.tar.gz (33.3 kB)

Built Distribution


boot_code-0.1.2-py3-none-any.whl (43.4 kB)

File details

Details for the file boot_code-0.1.2.tar.gz.

File metadata

  • Download URL: boot_code-0.1.2.tar.gz
  • Size: 33.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.2 Darwin/24.2.0

File hashes

Hashes for boot_code-0.1.2.tar.gz
Algorithm Hash digest
SHA256 06fb0411b8f0c2711ede074026ea041805cb607c15802725d2d2ebcaf55989b5
MD5 6fa08c6d95a752cef457dbe53a76828a
BLAKE2b-256 b5fe93a0f5b0f66c0ba27799fdffb9cddd6bbba21e3e6235e5f69c37ab47ec3b


File details

Details for the file boot_code-0.1.2-py3-none-any.whl.

File metadata

  • Download URL: boot_code-0.1.2-py3-none-any.whl
  • Size: 43.4 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/2.1.3 CPython/3.12.2 Darwin/24.2.0

File hashes

Hashes for boot_code-0.1.2-py3-none-any.whl
Algorithm Hash digest
SHA256 13d5dc1eff34c36abc497143417d29c787b0b858a0b9a4c8321f344cd97a2429
MD5 ed4e70b9ee24f5202ba378f8a312358a
BLAKE2b-256 5c21c82d8e95487fd536665be0ebd613d1bbfdd6619df7ac842f71f0a1618532

