AI-powered image generation assistant supporting multiple models (Gemini, Seedream)

A CLI client and web server for the Nano Banana image editing model

Installation

poetry install

Setup

Configuration

The application is configured via environment variables, which you can set in either of two ways:

  1. .env file (recommended) - Copy .env.example to .env and customize
  2. Environment variables - Set directly in your shell (overrides .env values)
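
The precedence described above (shell variables win over .env entries) can be sketched with a minimal stdlib-only loader. This is illustrative: the project itself may rely on a library such as python-dotenv, and the function name here is hypothetical.

```python
import os

def load_dotenv(path=".env"):
    """Minimal .env loader sketch: values already present in the real
    environment take precedence over entries in the file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments, and lines without an assignment.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # setdefault: never overwrite variables set in the shell.
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```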

Option 1: Using .env File (Recommended)

# Copy the example file and edit it
cp .env.example .env

# Edit .env with your actual values
# At minimum, you need to set GEMINI_API_KEY

Option 2: Using Environment Variables

# Required: Gemini API key for image generation
export GEMINI_API_KEY="your-api-key-here"

# Optional: Flask debug mode (development only)
export FLASK_DEBUG="true"  # Enable debug mode in development
# export FLASK_DEBUG="false"  # Disable debug mode (default/production)

Security Notes:

  • FLASK_DEBUG is disabled by default for security reasons. Only enable it in development environments, never in production.
  • Never commit your .env file to version control - it contains sensitive information!
  • The .env file is already in .gitignore to prevent accidental commits.
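
A safe way to honor the FLASK_DEBUG variable is to treat only an explicit opt-in as true, so the default is always off. A sketch of that behaviour (the actual check in the app may differ):

```python
import os

def debug_enabled():
    """Return True only for an explicit opt-in value of FLASK_DEBUG,
    so the secure default is debug mode off."""
    return os.getenv("FLASK_DEBUG", "false").strip().lower() in ("1", "true", "yes")
```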

AWS S3 Configuration (Optional)

The application supports storing generated images in AWS S3 instead of the local filesystem. You can configure S3 either in your .env file or via environment variables:

Using .env file:

# Add these to your .env file
STORAGE_TYPE=s3
AWS_S3_BUCKET=your-s3-bucket-name
AWS_S3_REGION=us-east-1
AWS_PROFILE=your-aws-profile  # or use direct credentials

Using environment variables:

# S3 Storage Configuration
export STORAGE_TYPE="s3"                    # Use "s3" for AWS S3, "local" for filesystem (default)
export AWS_S3_BUCKET="your-s3-bucket-name" # S3 bucket name for image storage
export AWS_S3_REGION="us-east-1"           # AWS region where your bucket is located

# AWS Credentials (use one of the following methods)
# Method 1: Environment variables
export AWS_ACCESS_KEY_ID="your-access-key"
export AWS_SECRET_ACCESS_KEY="your-secret-key"

# Method 2: AWS CLI profiles (recommended)
# Configure with: aws configure --profile your-profile
export AWS_PROFILE="your-profile"

# Method 3: IAM roles (for EC2/Lambda deployment)
# No additional configuration needed if running on AWS with proper IAM roles

S3 Setup Requirements:

  1. Create an S3 bucket in your desired AWS region
  2. Ensure your AWS credentials have the following permissions for the bucket:
    • s3:PutObject - Upload generated images
    • s3:GetObject - Download images (if needed)
    • s3:DeleteObject - Clean up old images
    • s3:ListBucket - List bucket contents

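The permissions above can be granted with an IAM policy along these lines. The bucket name is a placeholder; note that the object-level actions target keys inside the bucket (`.../*`) while `s3:ListBucket` targets the bucket ARN itself.

```python
import json

# Hypothetical bucket name; substitute your own.
BUCKET = "your-s3-bucket-name"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Object-level actions: upload, download, and delete image objects.
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            # Bucket-level action: listing applies to the bucket ARN, not keys.
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(policy, indent=2))
```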
Usage

CLI

Basic usage

$ poetry run python nano_api/generate.py \
    --prompt "please make the women in the provided image look affectionately at each other" \
    --image samples/base.png

Advanced usage with all parameters

$ poetry run python nano_api/generate.py \
    --prompt "a futuristic cityscape with flying cars" \
    --image samples/base.png \
    --image samples/reference.png \
    --output custom_output.png \
    --output-dir ./generated \
    --project-id my-gcp-project \
    --location us-central1 \
    --scale 4

S3 storage examples

# Use S3 storage (requires S3 environment variables to be set)
$ poetry run python nano_api/generate.py \
    --prompt "a beautiful landscape" \
    --image samples/base.png \
    --storage-type s3 \
    --output-dir generated-images

# Force local storage (override S3 configuration)
$ poetry run python nano_api/generate.py \
    --prompt "a city at night" \
    --image samples/base.png \
    --storage-type local \
    --output-dir ./local-output

Command line parameters

  • --prompt: Text prompt for image generation (optional, defaults to sample prompt)
  • --image: Path to reference image(s), can be used multiple times
  • --output: Output filename (default: "generated_gemini_image.png")
  • --output-dir: Directory where generated files will be saved (default: current directory)
  • --project-id: Google Cloud Project ID (defaults to value in conf.py)
  • --location: Google Cloud region (defaults to value in conf.py)
  • --scale: Upscale factor, 2 or 4 (optional, enables automatic upscaling)
  • --storage-type: Storage backend - "local" for filesystem or "s3" for AWS S3 (overrides configuration)
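
The flag set above maps naturally onto an argparse parser. This is a sketch mirroring the documented interface, not the actual generate.py implementation; defaults shown are assumptions taken from the list above.

```python
import argparse

def build_parser():
    """Argparse sketch of the documented generate.py flags."""
    p = argparse.ArgumentParser(prog="generate.py")
    p.add_argument("--prompt", help="Text prompt for image generation")
    p.add_argument("--image", action="append", default=[],
                   help="Reference image path; repeat the flag for multiple images")
    p.add_argument("--output", default="generated_gemini_image.png")
    p.add_argument("--output-dir", default=".")
    p.add_argument("--project-id")
    p.add_argument("--location")
    p.add_argument("--scale", type=int, choices=[2, 4])
    p.add_argument("--storage-type", choices=["local", "s3"])
    return p
```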

Web server

Start the server

$ poetry run python nano_api/main.py

Make a request to the web API

# Basic request
$ curl -X POST \
    -F "prompt=please make the women in the provided image look affectionately at each other" \
    -F "images=@samples/base_2.png" \
    http://127.0.0.1:5000/generate

# Request with custom output directory
$ curl -X POST \
    -F "prompt=create a sunset landscape" \
    -F "images=@samples/base.png" \
    -F "output_dir=./api_generated" \
    http://127.0.0.1:5000/generate

# Multiple images
$ curl -X POST \
    -F "prompt=blend these images creatively" \
    -F "images=@samples/image1.png" \
    -F "images=@samples/image2.png" \
    -F "output_dir=./results" \
    http://127.0.0.1:5000/generate

# S3 storage examples
# Save to S3 (requires S3 environment variables to be set)
$ curl -X POST \
    -F "prompt=a mountain landscape" \
    -F "images=@samples/base.png" \
    -F "storage_type=s3" \
    -F "output_dir=generated-images" \
    http://127.0.0.1:5000/generate

# Force local storage (override S3 configuration)
$ curl -X POST \
    -F "prompt=a city skyline" \
    -F "images=@samples/base.png" \
    -F "storage_type=local" \
    -F "output_dir=./local-results" \
    http://127.0.0.1:5000/generate

API Parameters

  • prompt: Text prompt for image generation (required)
  • images: Image file(s) to upload (required, can be multiple)
  • output_dir: Directory where generated files will be saved (optional, default: ".")
  • storage_type: Storage backend - "local" for filesystem or "s3" for AWS S3 (optional, uses configuration default)
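
The curl requests above can also be reproduced from Python with only the standard library by building the multipart/form-data body by hand. A sketch (the helper name is ours, not part of the project):

```python
import io
import uuid

def build_multipart(fields, files):
    """Build a multipart/form-data request body.
    fields: {name: value} form fields (e.g. prompt, storage_type);
    files: [(field_name, filename, bytes)] uploads (e.g. images).
    Returns (content_type, body) ready for urllib.request."""
    boundary = uuid.uuid4().hex
    buf = io.BytesIO()
    for name, value in fields.items():
        buf.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n'.encode()
        )
    for name, filename, data in files:
        buf.write(
            f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"; filename="{filename}"\r\n'
            f'Content-Type: application/octet-stream\r\n\r\n'.encode()
        )
        buf.write(data)
        buf.write(b"\r\n")
    buf.write(f"--{boundary}--\r\n".encode())
    return f"multipart/form-data; boundary={boundary}", buf.getvalue()
```

To send it, pass the body and content type to urllib.request.Request with the /generate URL and method="POST".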

API Response

{
    "message": "Files uploaded successfully",
    "prompt": "your prompt text",
    "saved_files": ["/path/to/uploaded/file1.png", "/path/to/uploaded/file2.png"],
    "generated_file": "/path/to/generated_image.png",
    "output_dir": "/custom/output/directory"
}

Upscale generated images

Setup for upscaling

Run these commands once to set up Google Cloud authentication and enable the Vertex AI API:

$ gcloud init
$ gcloud auth login
$ gcloud auth application-default login
$ gcloud services enable aiplatform.googleapis.com

Upscale a specific image

$ poetry run python nano_api/upscale.py \
    --input generated_image.png \
    --scale 4 \
    --project-id my-gcp-project \
    --location us-central1

Upscale parameters

  • --input: Input image file to upscale (required)
  • --scale: Upscale factor, 2 or 4 (default: 2)
  • --project-id: Google Cloud Project ID (defaults to value in conf.py)
  • --location: Google Cloud region (defaults to value in conf.py)

Features

  • Image Generation: Generate images from text prompts using Gemini 2.5 Flash Image Preview
  • Multi-image Support: Use multiple reference images for generation
  • Automatic Upscaling: Optional 2x or 4x upscaling using Google Cloud Vertex AI
  • Flexible Output: Specify custom output directories and filenames
  • Error Handling: Comprehensive error logging and diagnostic information
  • Web API: RESTful API for integration with other applications
  • Command Line Interface: Full-featured CLI for batch processing and automation

Error Handling

The application provides detailed error logging when image generation fails:

  • Safety filter violations with specific categories and probability levels
  • API response diagnostics including token usage and finish reasons
  • File upload details with metadata (size, MIME type, expiration times)
  • Comprehensive error messages for troubleshooting

Testing

Run the comprehensive test suite:

# Run all tests
$ poetry run pytest tests/ -v

# Run specific test categories
$ poetry run pytest tests/unit/ -v
$ poetry run pytest tests/integration/ -v

Development

Code Quality Tools

This project includes the following code quality and security tools:

Linting and Code Style

# Check code style with flake8
$ poetry run flake8 nano_api tests

# Run pylint for comprehensive code analysis
$ poetry run pylint nano_api/ tests/

# Run mypy for static type checking
$ poetry run mypy nano_api/

Security Analysis

# Run security analysis with bandit
$ poetry run bandit -r nano_api/

# Bandit configuration excludes test files automatically
# See .bandit file for configuration details

Pre-commit Workflow

Before committing code, run all quality checks:

# 1. Run tests
$ poetry run pytest

# 2. Check code style
$ poetry run flake8 nano_api tests

# 3. Run pylint analysis
$ poetry run pylint nano_api/ tests/

# 4. Run type checking
$ poetry run mypy nano_api/

# 5. Run security analysis
$ poetry run bandit -r nano_api/

Configuration Files

  • .pylintrc - Pylint configuration for code quality standards
  • .flake8 - Flake8 configuration for PEP 8 compliance
  • .bandit - Bandit security scanner configuration
  • .gitlab-ci.yml - CI/CD pipeline configuration
  • CLAUDE.md - Development guidelines for AI assistance

CI/CD Pipeline

The GitLab CI/CD pipeline automatically runs on every push and includes:

  1. Setup: Project structure validation and dependency installation
  2. Tests: Complete test suite execution (56 tests)
  3. Code Quality:
    • Flake8 style checking
    • Pylint comprehensive analysis
    • MyPy static type checking
    • Bandit security scanning

All quality gates must pass for the pipeline to succeed, ensuring consistent code quality and security.

Security Best Practices

  • Flask Debug Mode: Controlled via FLASK_DEBUG environment variable (disabled by default)
  • Secret Management: API keys stored in environment variables, never hardcoded
  • Test Isolation: Security scanning excludes test files with mock credentials
  • Dependency Management: Regular dependency updates via Poetry
  • Automated Security Scanning: Bandit security analysis runs on every push via GitLab CI/CD
