AgentSite

AI-powered website builder using Prompture agent orchestration
PyPI · MIT License · Python 3.10+ · Docker · Built with Prompture

Deploy on: Railway · Heroku · Render

An AI-powered website builder that uses multi-agent orchestration to generate complete, production-ready websites from a single text prompt. Nine specialized agents — four core and five specialists — collaborate to plan, design, build, and review your site.

PyPI Package: pypi.org/project/agentsite


Why This Tool?

Most AI website builders give you a single LLM call that dumps out a generic template. The result is usually a wall of code with no real structure, inconsistent styling, and no quality checks. You end up spending more time fixing the output than you saved by generating it.

AgentSite takes a different approach: nine specialized AI agents collaborate in a pipeline, each handling what they're best at. A PM agent plans the site structure and selects the build strategy. A Designer agent defines the visual system. In monolithic mode, a single Developer agent writes all the code; in specialist mode, dedicated Markup, Style, Script, and Image agents work in parallel for faster builds. A Reviewer agent evaluates quality and can send work back for revision — just like a real team would.

The entire pipeline is model-agnostic. You can use OpenAI, Claude, Google, Groq, Ollama, LM Studio, or any provider supported by Prompture. Swap models without changing anything else.

You get two ways to work: a full Web UI with live preview, chat input, and real-time progress tracking — or a CLI for generating sites directly from the terminal. Both produce the same output: clean, semantic HTML with proper accessibility baked in.

Under the hood, the pipeline enforces quality gates. The Reviewer agent scores every page against criteria like accessibility, semantic markup, and visual consistency. If the score is too low, the Developer gets feedback and iterates — up to two revision loops — before the site is finalized.


Quick Start

# 1. Install from PyPI
pip install agentsite

# 2. Set up your API keys
cp .env.copy .env
# Edit .env with your provider keys (OPENAI_API_KEY, CLAUDE_API_KEY, etc.)

# 3. Generate a website
agentsite generate "A portfolio website for a photographer"

That's it! A complete multi-page website will be generated in your output directory.

Prefer a UI? Launch the web interface instead:

agentsite serve
# Open http://127.0.0.1:6391

How It Works

AgentSite supports two build modes, chosen automatically by the PM agent based on site complexity:

Monolithic mode — a single Developer agent handles all code:

Prompt --> PM --> Designer --> Developer <--> Reviewer --> Website

Specialist mode — dedicated agents work in parallel for faster builds:

Prompt --> PM --> Designer --> Image -----> Reviewer --> Website
                              Markup --/
                              Style --/
                              Script -/
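
The PM agent's choice between the two modes could be sketched with a simple heuristic. This is purely illustrative — `select_mode` and the page-count threshold are assumptions, not AgentSite's actual logic (the real PM agent decides via an LLM):

```python
def select_mode(page_count: int, needs_images: bool) -> str:
    """Hypothetical mode-selection heuristic, for illustration only.

    AgentSite's real PM agent makes this call with an LLM, not a
    hard-coded rule; the threshold here is an assumption.
    """
    # Larger or asset-heavy sites benefit from parallel specialists
    if page_count > 3 or needs_images:
        return "specialist"
    return "monolithic"

print(select_mode(1, False))  # -> "monolithic"
print(select_mode(5, True))   # -> "specialist"
```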

Core Agents

Agent Role Output
PM Analyzes the prompt, plans site structure, selects build strategy and agents SitePlan
Designer Defines colors, typography, spacing, and the visual system StyleSpec
Developer Writes semantic HTML, CSS, and vanilla JS for each page (monolithic mode) PageOutput
Reviewer Evaluates quality, accessibility, and correctness (score >= 7 = approved) ReviewFeedback

Specialist Agents

Agent Role Output
Markup Writes HTML/JSX markup with semantic structure and ARIA labels MarkupOutput
Style Writes CSS or SCSS stylesheets with custom properties and responsive design StyleOutput
Script Writes vanilla JavaScript for interactivity and animations ScriptOutput
Image Generates images and manages the asset library ImageOutput

The Reviewer can trigger revision loops, sending feedback back to the Developer or specialists until quality meets the approval threshold. This runs up to two iterations per page.
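
The revision loop described above can be sketched in a few lines. This is a minimal sketch: `build_page` and `review_page` are hypothetical stand-ins for the Developer and Reviewer agents, not AgentSite's API:

```python
MAX_REVISIONS = 2    # matches the two-iteration cap described above
APPROVAL_SCORE = 7   # Reviewer threshold from the Core Agents table

def generate_with_review(build_page, review_page, plan):
    """Build a page, then let the reviewer send it back up to twice.

    build_page(plan, feedback) and review_page(page) are hypothetical
    callables standing in for the Developer and Reviewer agents.
    """
    page = build_page(plan, feedback=None)
    for _ in range(MAX_REVISIONS):
        score, feedback = review_page(page)
        if score >= APPROVAL_SCORE:
            break  # approved; stop revising
        page = build_page(plan, feedback=feedback)
    return page
```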


Features

Multi-Agent Pipeline

Nine agents with distinct personas coordinate through Prompture groups. Four core agents handle planning, design, development, and QA. Five specialist agents (Markup, Style, SCSS, Script, Image) can run in parallel for faster builds. Each agent has a focused role and structured output — no single monolithic prompt trying to do everything.

Real-Time Progress

WebSocket-based live updates during generation. Watch each agent work in real time through the Web UI with per-agent status, token usage, and timing.

Multi-Provider LLM Support

Use any model from any provider: OpenAI, Claude, Google, Groq, Grok, Ollama, LM Studio, OpenRouter, and more. Switch models per-generation without changing configuration.

Accessible Output

Agents enforce WCAG AA contrast, semantic HTML, ARIA labels, and keyboard navigation. Accessibility is built into the generation pipeline, not bolted on after.
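
The WCAG AA contrast check is a well-defined formula, so it can be shown concretely. The snippet below implements the WCAG 2.x contrast ratio and AA thresholds; it is independent of AgentSite's code:

```python
def _linear(channel: int) -> float:
    """Convert an sRGB channel (0-255) to linear light, per WCAG 2.x."""
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    """WCAG relative luminance of an (R, G, B) color."""
    r, g, b = (_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    """WCAG contrast ratio, from 1.0 (identical) to 21.0 (black on white)."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

def passes_aa(fg: tuple, bg: tuple, large_text: bool = False) -> bool:
    """WCAG AA: 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(contrast_ratio((0, 0, 0), (255, 255, 255)))  # -> 21.0
```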

Export

Download generated sites as ZIP archives or browse them directly through the built-in preview server.


CLI Reference

agentsite generate <prompt>       # Generate a website from a text prompt
  -m, --model <provider/model>    # LLM model to use (default: openai/gpt-4o)
  -o, --output <dir>              # Output directory
  -n, --name <name>               # Project name

agentsite serve                   # Start the web UI server
  --host <host>                   # Server host (default: 127.0.0.1)
  --port <port>                   # Server port (default: 6391)
  --reload                        # Enable auto-reload for development

agentsite models                  # List available LLM models
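
For example, combining the documented flags above (the prompt and paths are placeholders):

```shell
# Generate with an explicit model, output directory, and project name
agentsite generate "A landing page for a coffee roaster" \
  -m openai/gpt-4o -o ./coffee-site -n coffee

# Serve the web UI on a different port
agentsite serve --port 8080
```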

Web UI

Launch the browser-based interface for a full visual experience:

agentsite serve

The Web UI includes:

  • Dashboard — manage projects, create new sites
  • Page Builder — chat-based generation with live preview
  • Agent Monitoring — see each agent's status, metrics, and activity
  • Analytics — token usage, cost breakdown, and generation history

For development, run the backend and frontend separately with hot-reload:

# Terminal 1: Backend
agentsite serve --reload

# Terminal 2: Frontend (Vite dev server)
cd frontend && npm run dev

Configuration

Variable Description Default
AGENTSITE_DEFAULT_MODEL LLM model for all agents openai/gpt-4o
AGENTSITE_DATA_DIR Project storage directory ~/.agentsite
AGENTSITE_HOST Server bind address 127.0.0.1
AGENTSITE_PORT Server port 6391

Provider API keys (OPENAI_API_KEY, CLAUDE_API_KEY, GOOGLE_API_KEY, etc.) are inherited from Prompture's configuration.
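
A minimal `.env` might look like this (key values are placeholders; the AgentSite settings are optional and default to the values in the table above):

```shell
# Provider credentials (read via Prompture)
OPENAI_API_KEY=sk-...
CLAUDE_API_KEY=sk-ant-...

# AgentSite settings (optional)
AGENTSITE_DEFAULT_MODEL=openai/gpt-4o
AGENTSITE_DATA_DIR=~/.agentsite
AGENTSITE_PORT=6391
```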


Project Structure

agentsite/
  agents/            # Agent factories, Prompture personas, orchestration
    personas.py      # All agent persona definitions (core + specialists)
    orchestrator.py  # Pipeline wiring, dynamic mode selection, parallel groups
    registry.py      # Centralized agent registry with auto-discovery
    specialists/     # Specialist agents (markup, style, script, image)
  api/               # FastAPI application
    routes/          # REST endpoints (projects, generate, models, assets, preview)
    websocket.py     # WebSocket manager for real-time progress
  engine/            # Core generation logic
    pipeline.py      # Orchestrates agents, handles file output and events
  storage/           # Persistence layer
    database.py      # Async SQLite via aiosqlite
    repository.py    # CRUD operations for projects and generations
  cli.py             # Click CLI entry point
  config.py          # Pydantic-settings (env vars, defaults)
  models.py          # Domain models (SitePlan, StyleSpec, PageOutput, etc.)
frontend/            # React 19 + Vite 6 + Tailwind CSS 4 SPA
tests/               # pytest test suite
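
As an illustration of the structured hand-offs between agents, a SitePlan-like record could be modeled roughly as below. The field names are assumptions for illustration only; the real definitions live in `agentsite/models.py`:

```python
from dataclasses import dataclass, field

@dataclass
class SitePlan:
    """Illustrative sketch only — field names are assumptions,
    not AgentSite's actual SitePlan schema."""
    site_name: str
    pages: list = field(default_factory=list)  # page slugs to generate
    build_mode: str = "monolithic"             # or "specialist"

# The PM agent would emit something shaped like this for downstream agents
plan = SitePlan(site_name="portfolio", pages=["index", "gallery"])
```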

Tech Stack

Layer Technology
Agent orchestration Prompture
API server FastAPI + Uvicorn
Database SQLite via aiosqlite
CLI Click
Config Pydantic Settings
Frontend React 19 + Vite 6 + Tailwind CSS 4
Linting Ruff

Development

# Install with dev + test extras
pip install -e ".[dev]"

# Run tests
pytest

# Lint
ruff check .

# Format
ruff format .

# Build frontend
cd frontend && npm install && npm run build

Troubleshooting

Common Issues

Generation fails immediately?

  • Check that your .env has valid API keys for the provider you're using
  • Run agentsite models to verify your provider is reachable

Empty or broken output?

  • Try a different model — some smaller models struggle with structured output
  • Check the Reviewer feedback in the Web UI for specific issues

Frontend not loading?

  • Make sure you've built the frontend: cd frontend && npm run build
  • For development, run npm run dev separately on port 5173

WebSocket disconnects?

  • The generation is still running server-side — refresh the page to reconnect
  • Check the terminal output for any backend errors

Contributing

Contributions welcome! Here's how:

  1. Report bugs — open an issue on GitHub Issues
  2. Improve docs — PRs for documentation improvements
  3. Submit PRs — Bug fixes and features
  4. Add providers — Extend LLM provider support via Prompture

License

This project is licensed under the MIT License. See the LICENSE file for full details.


Built by Juan Denis
