Skip to main content

Enterprise-grade prompt generator for ChatGPT and Claude Code with team collaboration features

Project description

Prompt Generator

CI/CD Pipeline codecov License: MIT

Enterprise-grade prompt generator for ChatGPT and Claude Code with web interface and CLI.

Generate structured, high-quality prompts with context, constraints, and deliverables for AI-powered development workflows.

Features

Core Functionality

  • ๐ŸŽฏ Two Target AI Models: ChatGPT and Claude Code optimized prompts
  • ๐ŸŒ Web Interface: Beautiful, responsive UI with dark/light theme
  • ๐Ÿ’ป CLI Tool: Command-line interface with interactive console mode
  • ๐Ÿ“ Structured Prompts: Organize prompts with task, context, constraints, deliverables, and tone

NEW: Advanced Features ๐Ÿš€

  • ๐ŸŽฎ Prompt Playground: Test prompts with real AI models (Anthropic, OpenAI, OpenRouter)
  • ๐Ÿ“š Prompt Library: Save, organize, and reuse your best prompts
  • ๐Ÿ”‘ Bring Your Own Key (BYOK): Securely store and manage API keys
  • ๐Ÿ”Œ Detached Mode: Generate prompts without API integration
  • ๐Ÿ” Encrypted Storage: API keys encrypted at rest with Fernet
  • ๐Ÿท๏ธ Tag System: Organize prompts with custom tags
  • โญ Favorites: Mark and filter your most-used prompts
  • ๐Ÿ“Š Usage Metrics: Track tokens used and response times

Enterprise Features

  • ๐Ÿ”’ Production-Ready Security: HTTPS, CSP, CORS, rate limiting, input validation
  • ๐Ÿ“Š Monitoring & Metrics: Prometheus metrics, structured logging, health checks
  • ๐Ÿš€ High Performance: Redis caching, Gunicorn workers, auto-scaling ready
  • ๐Ÿณ Containerized: Docker and Docker Compose for easy deployment
  • โ˜ธ๏ธ Kubernetes Ready: Complete K8s manifests with HPA, ingress, and StatefulSets
  • ๐Ÿ”„ CI/CD Pipeline: Automated testing, linting, security scanning, and deployment
  • ๐Ÿ“ˆ Scalable Architecture: Load balancing, connection pooling, horizontal scaling
  • ๐Ÿ›ก๏ธ Security Hardened: Non-root containers, read-only filesystem, security scanning
  • ๐Ÿ“š Comprehensive Tests: Unit, integration, and API tests with >80% coverage
  • ๐Ÿ“– API Documentation: OpenAPI 3.0 specification
  • ๐Ÿ”ง Operational Runbook: Complete deployment and troubleshooting guide

Quick Start

Web Interface (Recommended)

# Using Docker Compose (includes Redis, Nginx)
docker-compose up

# Access at http://localhost

CLI Usage

# Install
pip install -r requirements.txt

# Generate a prompt
python -m prompt_gen --target chatgpt --task "Summarize this article" --context "..."

# Interactive console mode
python -m prompt_gen --console

Installation

Prerequisites

  • Python 3.10+
  • Docker (for containerized deployment)
  • Redis (optional, for caching)

Development Setup

# Clone the repository
git clone https://github.com/yourorg/prompt-generator.git
cd prompt-generator

# Create virtual environment
python3 -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Run development server
python app.py

Access the web interface at http://127.0.0.1:5000

Production Setup

# Set required environment variables
export SECRET_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")
export FLASK_ENV=production
export REDIS_URL=redis://localhost:6379/0

# Run with Gunicorn
gunicorn -c gunicorn.conf.py "app:create_app()"

See docs/RUNBOOK.md for complete production deployment guide.

Usage

Web Interface

  1. Navigate to http://localhost (or your deployment URL)
  2. Select your target AI model (ChatGPT or Claude Code)
  3. Fill in the required fields:
    • Task: What you want the AI to do (required)
    • Context: Background information, codebase details
    • Constraints: Restrictions, requirements, tech stack
    • Deliverables: Expected outputs
    • Tone: Desired communication style
  4. Click "Generate" to create your prompt
  5. Copy the generated prompt to use with your AI model

CLI Examples

# Basic usage
prompt-gen --task "Add error handling" --target claude_code

# With context and constraints
prompt-gen \
  --task "Implement user authentication" \
  --context "Flask API with PostgreSQL" \
  --constraints "Use JWT tokens, OAuth2" \
  --deliverables "Auth endpoints, tests, migration" \
  --tone "concise"

# Output to file
prompt-gen --task "Refactor database layer" --out prompt.txt

# JSON output (for scripting)
prompt-gen --task "Fix bug in checkout" --json

# Interactive console
prompt-gen --console

Console Commands

In console mode (prompt-gen --console):

prompt-gen> help
Available commands:
  new              Reset all fields
  fields           List available fields
  set <field> <value>    Set a field value
  target [name]    Get/set target AI model
  show             Display current configuration
  generate         Generate the prompt
  copy             Copy generated prompt to clipboard
  save <path>      Save generated prompt to file
  export <path>    Export configuration to JSON
  load <path>      Load configuration from JSON
  exit/quit        Exit console

Prompt Playground ๐ŸŽฎ

Test your prompts with real AI models directly in the browser:

  1. Navigate to Playground tab

  2. Select Provider:

    • Detached Mode: Generate prompts without API calls (default)
    • Anthropic Claude: Claude 3.5 Sonnet, Haiku, Opus
    • OpenAI: GPT-4o, GPT-4o-mini, GPT-4 Turbo
    • OpenRouter: Access to multiple models
  3. Add API Key (for non-detached mode):

    • Go to "API Keys" tab and add your keys
    • Or enter key inline (not saved)
  4. Test Your Prompt:

    • Enter prompt text
    • Click "Execute"
    • View response with token usage and timing

Example Use Cases:

  • Test different prompt variations
  • Compare model responses
  • Verify prompt effectiveness before production use
  • Debug prompt issues in real-time

Prompt Library ๐Ÿ“š

Save and organize your prompts for reuse:

Saving Prompts:

  1. Generate a prompt in the Generator tab
  2. Click "Save to Library"
  3. Prompt is automatically saved with metadata

Managing Library:

  • Search: Find prompts by name, description, or content
  • Filter: Show only favorites
  • Tags: Organize with custom tags
  • View Details: Click any prompt to see full content
  • Copy: One-click copy to clipboard
  • Delete: Remove prompts you no longer need

Example Workflow:

1. Create prompt: "Add authentication to Flask API"
2. Save to library with tags: ["authentication", "flask", "api"]
3. Mark as favorite for quick access
4. Later: Search "auth" โ†’ Find prompt โ†’ Copy โ†’ Use

API Key Management ๐Ÿ”‘

Securely store API keys for use in Playground:

Adding Keys:

  1. Go to "API Keys" tab
  2. Select provider (Anthropic, OpenAI, OpenRouter)
  3. Enter key name (e.g., "My Claude Key")
  4. Paste API key
  5. Click "Add Key"

Security:

  • Keys encrypted at rest using Fernet (AES)
  • Keys never logged or exposed in API responses
  • Only you can access your keys
  • Delete keys anytime

Using Stored Keys:

  • In Playground, select from "Stored API Key" dropdown
  • No need to re-enter keys each time
  • Switch between multiple keys easily

Environment Variables:

# Set encryption key (important!)
export ENCRYPTION_KEY=$(python -c "import secrets; print(secrets.token_urlsafe(32))")

# Database location
export DATABASE_URL=sqlite:///./prompt_generator.db

API Documentation

RESTful API for integration with other tools.

Endpoints

POST /api/v1/generate

Generate a prompt from JSON input.

Request:

{
  "target": "chatgpt",
  "task": "Summarize this article",
  "context": "Technical blog post about distributed systems",
  "constraints": "Keep it under 3 paragraphs",
  "deliverables": "Key points and action items",
  "tone": "concise"
}

Response:

{
  "prompt": "You are an expert assistant...",
  "cached": false
}

GET/POST /api/v1/library/prompts

List or create saved prompts.

Query Parameters (GET):

  • search: Search by name/description/task
  • tag: Filter by tag
  • favorites: Show only favorites (true/false)
  • limit: Max results (default: 50)
  • offset: Pagination offset

Request (POST):

{
  "name": "Auth Implementation",
  "task": "Add JWT authentication",
  "target": "claude_code",
  "tags": ["auth", "security"],
  "is_favorite": false
}

GET/PUT/DELETE /api/v1/library/prompts/:id

Get, update, or delete a specific prompt.

POST /api/v1/library/prompts/:id/favorite

Toggle favorite status of a prompt.

GET /api/v1/library/tags

Get all unique tags from saved prompts.

GET/POST /api/v1/keys

List or create API keys.

Request (POST):

{
  "provider": "anthropic",
  "key_name": "My Claude Key",
  "api_key": "sk-ant-..."
}

DELETE /api/v1/keys/:id

Delete an API key.

GET /api/v1/playground/providers

List available AI providers and their models.

POST /api/v1/playground/execute

Execute a prompt with an AI provider.

Request:

{
  "provider": "anthropic",
  "model": "claude-3-5-sonnet-20241022",
  "prompt": "Explain quantum computing",
  "key_id": 1  // or "api_key": "sk-..."
}

Response:

{
  "success": true,
  "response": "Quantum computing...",
  "model": "claude-3-5-sonnet-20241022",
  "tokens_used": 250,
  "duration_ms": 1523
}

GET /api/v1/playground/history

Get playground execution history.

GET /health

Health check endpoint for load balancers.

GET /ready

Readiness check with dependency validation.

GET /metrics

Prometheus metrics (restrict in production).

Full API specification: openapi.json

Architecture

โ”Œโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
โ”‚   Load Balancer โ”‚
โ”‚   (Nginx/ALB)   โ”‚
โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚
    โ”Œโ”€โ”€โ”€โ”€โ–ผโ”€โ”€โ”€โ”€โ”
    โ”‚  Nginx  โ”‚ (Reverse Proxy, Rate Limiting)
    โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”˜
         โ”‚
    โ”Œโ”€โ”€โ”€โ”€โ–ผโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”
    โ”‚   Gunicorn  โ”‚ (WSGI Server, 4 workers)
    โ”‚   + Flask   โ”‚
    โ””โ”€โ”€โ”€โ”€โ”ฌโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
         โ”‚
    โ”Œโ”€โ”€โ”€โ”€โ–ผโ”€โ”€โ”€โ”€โ”
    โ”‚  Redis  โ”‚ (Caching, Rate Limiting)
    โ””โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜

Key Components

  • Flask: Lightweight WSGI web framework
  • Gunicorn: Production WSGI server with gthread workers
  • Redis: In-memory cache and rate limit store
  • Nginx: Reverse proxy, SSL termination, rate limiting
  • Prometheus: Metrics collection and alerting
  • Structlog: Structured JSON logging

Configuration

All configuration via environment variables:

Variable Required Default Description
SECRET_KEY Yes* - Flask secret key (required in production)
FLASK_ENV No production Environment: production, development, testing
REDIS_URL No redis://localhost:6379/0 Redis connection URL
CACHE_ENABLED No true Enable response caching
RATE_LIMIT_ENABLED No true Enable rate limiting
RATE_LIMIT_PER_MINUTE No 60 Requests per minute per IP
LOG_LEVEL No INFO Log level: DEBUG, INFO, WARNING, ERROR
LOG_FORMAT No json Log format: json or text

Generate SECRET_KEY:

python -c "import secrets; print(secrets.token_urlsafe(32))"

See config.py for full configuration reference.

Development

Running Tests

# Install dev dependencies
pip install -r requirements.txt

# Run all tests with coverage
pytest --cov=prompt_gen --cov=. --cov-report=html

# Run specific test file
pytest tests/unit/test_core.py

# Run with verbose output
pytest -v

# Open coverage report
open htmlcov/index.html

Code Quality

# Lint with ruff
ruff check .

# Format code
ruff format .

# Type check
mypy app.py prompt_gen/

# Security scan
safety check --file requirements.txt

Local Development with Docker

# Build and run all services
docker-compose up

# Rebuild after code changes
docker-compose up --build

# Run in background
docker-compose up -d

# View logs
docker-compose logs -f web

# Stop all services
docker-compose down

Deployment

Docker

# Build image
docker build -t prompt-generator:v1.0.0 .

# Run container
docker run -d \
  -p 5000:5000 \
  -e SECRET_KEY=your-secret-key \
  -e REDIS_URL=redis://redis:6379/0 \
  --name prompt-generator \
  prompt-generator:v1.0.0

Kubernetes

# Deploy to Kubernetes
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/redis.yaml

# Check rollout status
kubectl rollout status deployment/prompt-generator -n prompt-generator

# View logs
kubectl logs -f deployment/prompt-generator -n prompt-generator

# Scale deployment
kubectl scale deployment/prompt-generator --replicas=5 -n prompt-generator

Cloud Providers

See platform-specific guides:

Monitoring

Health Checks

# Health endpoint (fast, no dependencies)
curl http://localhost:5000/health

# Readiness endpoint (checks dependencies)
curl http://localhost:5000/ready

Metrics

Prometheus metrics available at /metrics:

curl http://localhost:5000/metrics

Key metrics:

  • flask_http_request_total - Total requests
  • flask_http_request_duration_seconds - Latency
  • flask_http_request_exceptions_total - Errors
  • process_resident_memory_bytes - Memory usage

Logging

Structured JSON logs for easy parsing:

{
  "event": "request_completed",
  "method": "POST",
  "path": "/api/v1/generate",
  "status_code": 200,
  "request_id": "abc123",
  "timestamp": "2026-01-10T12:00:00Z"
}

Security

Security Features

  • โœ… HTTPS enforced with HSTS
  • โœ… Content Security Policy (CSP)
  • โœ… CORS protection
  • โœ… Rate limiting (60 req/min per IP)
  • โœ… Input validation and sanitization
  • โœ… Request size limits (1MB max)
  • โœ… Security headers (X-Frame-Options, X-Content-Type-Options)
  • โœ… Non-root container execution
  • โœ… Read-only root filesystem
  • โœ… Regular dependency vulnerability scans

Security Best Practices

  1. Never run with FLASK_DEBUG=true in production
  2. Generate a strong SECRET_KEY (32+ random bytes)
  3. Keep dependencies updated with pip-audit or safety
  4. Restrict /metrics endpoint to internal networks
  5. Use HTTPS with valid TLS certificates
  6. Monitor logs for suspicious activity
  7. Scan containers with Trivy or similar tools

Troubleshooting

Common Issues

Problem: High latency (>200ms)

  • Solution: Check Redis connection, enable caching, scale replicas

Problem: Rate limit errors (429)

  • Solution: Adjust RATE_LIMIT_PER_MINUTE, implement authentication

Problem: Memory usage growing

  • Solution: Check for leaks, adjust worker max_requests, restart pods

Problem: Cache misses

  • Solution: Verify Redis connectivity, check REDIS_URL configuration

See docs/RUNBOOK.md for complete troubleshooting guide.

Performance

Benchmarks

Tested on AWS t3.medium (2 vCPU, 4GB RAM):

Metric Value
p50 latency 35ms
p95 latency 78ms
p99 latency 145ms
Throughput 1,200 req/sec
Cache hit rate 85%
Memory per worker 80MB
CPU utilization 45% @ 1000 req/sec

Scaling Guidelines

  • < 100 req/sec: 2-3 replicas
  • 100-500 req/sec: 3-5 replicas
  • 500-1000 req/sec: 5-8 replicas
  • > 1000 req/sec: 8+ replicas with Redis cluster

Auto-scaling configured for CPU >70% and Memory >80%.

Contributing

We welcome contributions! Please see CONTRIBUTING.md for guidelines.

Development Workflow

  1. Fork the repository
  2. Create a feature branch: git checkout -b feature/amazing-feature
  3. Make your changes
  4. Run tests: pytest
  5. Run linters: ruff check . && ruff format .
  6. Commit: git commit -m "Add amazing feature"
  7. Push: git push origin feature/amazing-feature
  8. Open a Pull Request

License

This project is licensed under the MIT License - see LICENSE file for details.

Support

Changelog

v1.0.0 (2026-01-10)

Major Improvements:

  • โœจ Complete rewrite with enterprise-grade architecture
  • ๐Ÿ”’ Production-ready security (HTTPS, CSP, CORS, rate limiting)
  • ๐Ÿ“Š Monitoring and metrics (Prometheus, structured logging)
  • ๐Ÿš€ High performance (Redis caching, Gunicorn, auto-scaling)
  • ๐Ÿณ Docker and Kubernetes deployment
  • ๐Ÿ”„ CI/CD pipeline with automated testing and security scans
  • ๐Ÿ“š Comprehensive tests (unit, integration, API)
  • ๐Ÿ“– OpenAPI documentation
  • ๐Ÿ”ง Operational runbook

Breaking Changes:

  • API endpoints now versioned: /api/generate โ†’ /api/v1/generate
  • Configuration now via environment variables (no more hardcoded values)
  • Requires Redis for full feature set (optional, graceful degradation)

Migration Guide:

  • Update API calls to use /api/v1/generate
  • Set SECRET_KEY environment variable
  • Configure REDIS_URL for caching and rate limiting

Acknowledgments


Made with โค๏ธ for developers who love high-quality prompts

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

afterdark_prompt_generator-1.0.0.tar.gz (13.4 kB view details)

Uploaded Source

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

afterdark_prompt_generator-1.0.0-py3-none-any.whl (12.9 kB view details)

Uploaded Python 3

File details

Details for the file afterdark_prompt_generator-1.0.0.tar.gz.

File metadata

File hashes

Hashes for afterdark_prompt_generator-1.0.0.tar.gz
Algorithm Hash digest
SHA256 f564a78f824dae97ace35d3e983472b74055eaf043bc2bb7430bbb3c6139eeb8
MD5 e793e7f9184ea584b87d120f3dd29b58
BLAKE2b-256 812bd16829b266200424f4e3aefdb6001466212e550f5af6de6f39e5790e022a

See more details on using hashes here.

File details

Details for the file afterdark_prompt_generator-1.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for afterdark_prompt_generator-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 66b14124c5e8a7cf460ef266e3c361929b5195f0cf74e4f84d2e0c130836e9cd
MD5 37726599eb4cbd5a84b0827b92254a30
BLAKE2b-256 bbc4795f87eef4da61df5364845eac9b0bad3983b658b54570fa206d1750600a

See more details on using hashes here.

Supported by

AWS Cloud computing and Security Sponsor Datadog Monitoring Depot Continuous Integration Fastly CDN Google Download Analytics Pingdom Monitoring Sentry Error logging StatusPage Status page