A production-ready MCP server for AI-powered image generation using Gemini 3.1 Flash Image (Nano Banana 2, default), Gemini 3 Pro Image, and Gemini 2.5 Flash Image with intelligent model selection
Nano Banana MCP Server
A production-ready Model Context Protocol (MCP) server that provides AI-powered image generation capabilities through Google's Gemini models with intelligent model selection.
NEW: Nano Banana 2 - Gemini 3.1 Flash Image!
Nano Banana 2 (gemini-3.1-flash-image-preview) is now the default model, delivering Pro-level quality at Flash speed:
- Flash Speed + 4K Quality: Up to 3840px at Gemini 2.5 Flash latency
- Google Search Grounding: Real-world knowledge for factually accurate images
- Subject Consistency: Up to 5 characters and 14 objects per scene
- Precision Text Rendering: Crystal-clear text placement in images
- Gemini 3 Pro Image still available for maximum reasoning depth
Features
- Multi-Model AI Image Generation: Three Gemini models with intelligent automatic selection
  - Gemini 3.1 Flash Image (NB2): Default model; 4K resolution at Flash speed with grounding
  - Gemini 3 Pro Image: Maximum reasoning depth for the most complex compositions
  - Gemini 2.5 Flash Image: Legacy Flash model for high-volume rapid prototyping
- Smart Model Selection: Automatically routes to NB2 or Pro based on your prompt
- Aspect Ratio Control (NEW): Specify output dimensions (1:1, 16:9, 9:16, 21:9, and more)
- Smart Templates: Pre-built prompt templates for photography, design, and editing
- File Management: Upload and manage files via the Gemini Files API
- Resource Discovery: Browse templates and file metadata through MCP resources
- Production Ready: Comprehensive error handling, logging, and validation
- High Performance: Optimized architecture with intelligent caching
Quick Start
Prerequisites
- Google Gemini API Key - get a free key at Google AI Studio
- Python 3.11+ (for development only)
Installation
Option 1: From MCP Registry (Recommended)
This server is available in the Model Context Protocol Registry. Search for "nanobanana" or use the MCP name below with your MCP client.
mcp-name: io.github.zhongweili/nanobanana-mcp-server
Option 2: Using uvx
uvx nanobanana-mcp-server@latest
Option 3: Using pip
pip install nanobanana-mcp-server
Configuration
Authentication Methods
Nano Banana supports two authentication methods via NANOBANANA_AUTH_METHOD:
- API Key (api_key): Uses GEMINI_API_KEY. Best for local development and simple deployments.
- Vertex AI ADC (vertex_ai): Uses Google Cloud Application Default Credentials. Best for production on Google Cloud (Cloud Run, GKE, GCE).
- Automatic (auto): Defaults to API Key if present, otherwise tries Vertex AI.
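The auto fallback described above can be sketched in a few lines (hypothetical helper name; not the server's actual code):

```python
import os

def resolve_auth_method() -> str:
    """Mirror the documented rule: explicit setting wins; 'auto' prefers
    an API key when GEMINI_API_KEY is present, else falls back to Vertex AI."""
    method = os.environ.get("NANOBANANA_AUTH_METHOD", "auto")
    if method == "auto":
        return "api_key" if os.environ.get("GEMINI_API_KEY") else "vertex_ai"
    return method
```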
1. API Key Authentication (Default)
Set GEMINI_API_KEY environment variable.
2. Vertex AI Authentication (Google Cloud)
Required environment variables:
- NANOBANANA_AUTH_METHOD=vertex_ai (or auto)
- GCP_PROJECT_ID=your-project-id
- GCP_REGION=us-central1 (default)
Prerequisites:
- Enable Vertex AI API: gcloud services enable aiplatform.googleapis.com
- Grant IAM Role: roles/aiplatform.user to the service account.
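The prerequisites above can be run as a short gcloud session. Project ID and service-account address are placeholders; substitute your own:

```shell
# Enable the Vertex AI API for your project (placeholder project ID)
gcloud services enable aiplatform.googleapis.com --project=your-project-id

# Grant the Vertex AI user role to the service account the server runs as
gcloud projects add-iam-policy-binding your-project-id \
  --member="serviceAccount:your-sa@your-project-id.iam.gserviceaccount.com" \
  --role="roles/aiplatform.user"

# For local testing, set up Application Default Credentials
gcloud auth application-default login
```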
Claude Desktop
Option 1: Using Published Server (Recommended)
Add to your claude_desktop_config.json:
{
"mcpServers": {
"nanobanana": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Option 2: Using Local Source (Development)
If you are running from source code, point to your local installation:
{
"mcpServers": {
"nanobanana-local": {
"command": "uv",
"args": ["run", "python", "-m", "nanobanana_mcp_server.server"],
"cwd": "/absolute/path/to/nanobanana-mcp-server",
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Option 3: Using Vertex AI (ADC)
To authenticate with Google Cloud Application Default Credentials (instead of an API Key):
{
"mcpServers": {
"nanobanana-adc": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"NANOBANANA_AUTH_METHOD": "vertex_ai",
"GCP_PROJECT_ID": "your-project-id",
"GCP_REGION": "us-central1"
}
}
}
}
Configuration file locations:
- macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Windows: %APPDATA%\Claude\claude_desktop_config.json
Claude Code (VS Code Extension)
Install and configure in VS Code:
- Install the Claude Code extension
- Open Command Palette (Cmd/Ctrl + Shift + P)
- Run "Claude Code: Add MCP Server"
- Configure:
{
  "name": "nanobanana",
  "command": "uvx",
  "args": ["nanobanana-mcp-server@latest"],
  "env": {
    "GEMINI_API_KEY": "your-gemini-api-key-here"
  }
}
Cursor
Add to Cursor's MCP configuration:
{
"mcpServers": {
"nanobanana": {
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Continue.dev (VS Code/JetBrains)
Add to your config.json:
{
"mcpServers": [
{
"name": "nanobanana",
"command": "uvx",
"args": ["nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
]
}
Open WebUI
Configure in Open WebUI settings:
{
"mcp_servers": {
"nanobanana": {
"command": ["uvx", "nanobanana-mcp-server@latest"],
"env": {
"GEMINI_API_KEY": "your-gemini-api-key-here"
}
}
}
}
Gemini CLI / Generic MCP Client
# Set environment variable
export GEMINI_API_KEY="your-gemini-api-key-here"
# Run server in stdio mode
uvx nanobanana-mcp-server@latest
# Or with pip installation
python -m nanobanana_mcp_server.server
Model Selection
Nano Banana supports three Gemini models with intelligent automatic selection:
NB2: Nano Banana 2 (Gemini 3.1 Flash Image) - DEFAULT
Flash speed with Pro-level quality: the best of both worlds
- Quality: Production-ready 4K output
- Resolution: Up to 4K (3840px)
- Speed: ~2-4 seconds per image (Flash-class latency)
- Special Features:
  - Google Search Grounding: Real-world knowledge for factually accurate images
  - Subject Consistency: Up to 5 characters and 14 objects per scene
  - Precision Text Rendering: Clear, well-placed text in images
- Best for: Almost everything (production assets, marketing, photography, text overlays)
- model_tier: "nb2" (or "auto"; NB2 is the auto default)
Pro Model: Nano Banana Pro (Gemini 3 Pro Image)
Maximum reasoning depth for the most demanding compositions
- Quality: Highest available
- Resolution: Up to 4K (3840px)
- Speed: ~5-8 seconds per image
- Special Features:
  - Advanced Reasoning: Configurable thinking levels (LOW/HIGH)
  - Google Search Grounding: Real-world knowledge integration
  - Media Resolution Control: Fine-tune vision processing detail
- Best for: Complex narrative scenes, intricate compositions, maximum reasoning required
- model_tier: "pro"
Flash Model (Gemini 2.5 Flash Image)
Legacy model for high-volume rapid iteration
- Speed: Very fast (2-3 seconds)
- Resolution: Up to 1024px
- Best for: High-volume generation, quick drafts where 4K is not needed
- model_tier: "flash"
Automatic Selection (Recommended)
By default, the server uses AUTO mode, which routes to NB2 unless Pro's deeper reasoning is clearly needed:
Pro Model Selected When:
- Strong quality keywords: "4K", "professional", "production", "high-res", "HD"
- High thinking level requested: thinking_level="HIGH"
- Multi-image conditioning with multiple input images
NB2 Model Selected When (default):
- Standard requests, everyday image generation
- Speed keywords: "quick", "draft", "sketch", "rapid"
- High-volume batch generation (n > 2)
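A simplified sketch of how such routing could be implemented (the keyword list and function name here are illustrative, not the server's actual heuristics):

```python
# Hypothetical quality keywords that push a prompt toward the Pro model
QUALITY_KEYWORDS = {"4k", "professional", "production", "high-res", "hd"}

def select_model_tier(prompt: str, thinking_level: str = "LOW",
                      n: int = 1, input_images: int = 0) -> str:
    """Route to 'pro' only when deep reasoning is clearly needed;
    everything else defaults to 'nb2' (the AUTO default)."""
    if thinking_level == "HIGH":
        return "pro"                      # explicit request for deep reasoning
    if input_images > 1:
        return "pro"                      # multi-image conditioning
    text = prompt.lower()
    if any(kw in text for kw in QUALITY_KEYWORDS):
        return "pro"                      # strong quality keywords
    return "nb2"                          # default: Flash speed, 4K-capable
```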
Usage Examples
# Automatic selection (recommended): routes to NB2 by default
"A cat sitting on a windowsill" # -> NB2 (default)
"Quick sketch of a cat" # -> NB2 (speed keyword, NB2 is fast enough)
"Professional 4K product photo" # -> Pro (strong quality keywords)
# Explicit NB2 selection
generate_image(
prompt="Product photo on white background",
model_tier="nb2", # Nano Banana 2 (Flash speed + 4K)
resolution="4k",
enable_grounding=True
)
# Leverage Nano Banana Pro for complex reasoning
generate_image(
prompt="Cinematic scene: three characters in a tense standoff at dusk",
model_tier="pro", # Pro for deep reasoning
resolution="4k",
thinking_level="HIGH", # Enhanced reasoning
enable_grounding=True
)
# Legacy Flash for high-volume drafts
generate_image(
prompt="Simple icon",
model_tier="flash" # Fast 1024px generation
)
# Control aspect ratio for different formats (NEW)
generate_image(
prompt="Cinematic landscape at sunset",
aspect_ratio="21:9" # Ultra-wide cinematic format
)
generate_image(
prompt="Instagram post about coffee",
aspect_ratio="1:1" # Square format for social media
)
generate_image(
prompt="YouTube thumbnail design",
aspect_ratio="16:9" # Standard video format
)
generate_image(
prompt="Mobile wallpaper of mountain vista",
aspect_ratio="9:16" # Portrait format for phones
)
Aspect Ratio Control
Control the output image dimensions with the aspect_ratio parameter:
Supported Aspect Ratios:
- 1:1 - Square (Instagram, profile pictures)
- 4:3 - Classic photo format
- 3:4 - Portrait orientation
- 16:9 - Widescreen (YouTube thumbnails, presentations)
- 9:16 - Mobile portrait (phone wallpapers, stories)
- 21:9 - Ultra-wide cinematic
- 2:3, 3:2, 4:5, 5:4 - Various photo formats
# Examples for different use cases
generate_image(
prompt="Product showcase for e-commerce",
aspect_ratio="3:4", # Portrait format, good for product pages
model_tier="pro"
)
generate_image(
prompt="Social media banner for Facebook",
aspect_ratio="16:9" # Landscape banner format
)
Note: Aspect ratio works with both Flash and Pro models. For best results with specific aspect ratios at high resolution, use the Pro model with resolution="4k".
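If you want to fail fast before making an API call, a client-side check against the list above might look like this (hypothetical helper; the server performs its own validation):

```python
# Supported aspect ratios from the list above
SUPPORTED_RATIOS = {"1:1", "4:3", "3:4", "16:9", "9:16", "21:9",
                    "2:3", "3:2", "4:5", "5:4"}

def validate_aspect_ratio(ratio: str) -> str:
    """Return the ratio unchanged if supported, else raise before any API call."""
    if ratio not in SUPPORTED_RATIOS:
        raise ValueError(f"Unsupported aspect_ratio: {ratio!r}")
    return ratio
```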
Output Path Control (NEW)
Control where generated images are saved with the output_path parameter:
Three modes of operation:
- Specific file path - Save to an exact file location:
generate_image(
prompt="A beautiful sunset",
output_path="/path/to/sunset.png" # Exact file location
)
- Directory path - Use auto-generated filename in a specific directory:
generate_image(
prompt="Product photo",
output_path="/path/to/products/" # Trailing slash indicates directory
)
- Default location - Uses IMAGE_OUTPUT_DIR or ~/nanobanana-images:
generate_image(
prompt="Random image"
# output_path defaults to None
)
Multiple images (n > 1): When generating multiple images with a file path, images are automatically numbered:
- First image: /path/to/image.png
- Second image: /path/to/image_2.png
- Third image: /path/to/image_3.png
Precedence Rules:
1. output_path parameter (if provided): highest priority
2. IMAGE_OUTPUT_DIR environment variable
3. ~/nanobanana-images (default fallback)
# Save to specific location with Pro model
generate_image(
prompt="Professional headshot",
model_tier="pro",
output_path="/Users/me/photos/headshot.png"
)
# Save multiple images to a directory
generate_image(
prompt="Product variations",
n=4,
output_path="/path/to/products/" # Each gets unique filename
)
Environment Variables
Configuration options:
# Authentication (Required)
# Method 1: API Key
GEMINI_API_KEY=your-gemini-api-key-here
# Method 2: Vertex AI (Google Cloud)
NANOBANANA_AUTH_METHOD=vertex_ai
GCP_PROJECT_ID=your-project-id
GCP_REGION=us-central1
# Model Selection (optional)
NANOBANANA_MODEL=auto # Options: flash, nb2, pro, auto (default: auto -> nb2)
# Optional
IMAGE_OUTPUT_DIR=/path/to/image/directory # Default: ~/nanobanana-images
LOG_LEVEL=INFO # DEBUG, INFO, WARNING, ERROR
LOG_FORMAT=standard # standard, json, detailed
Troubleshooting
Common Issues
"GEMINI_API_KEY not set"
- Add your API key to the MCP server configuration in your client
- Get a free API key at Google AI Studio
"Server failed to start"
- Ensure you're using the latest version: uvx nanobanana-mcp-server@latest
- Check that your client supports MCP (Claude Desktop 0.10.0+)
"Permission denied" errors
- The server creates images in ~/nanobanana-images by default
- Ensure write permissions to your home directory
Development Setup
For local development:
# Clone repository
git clone https://github.com/zhongweili/nanobanana-mcp-server.git
cd nanobanana-mcp-server
# Install with uv
uv sync
# Set environment
export GEMINI_API_KEY=your-api-key-here
# Run locally
uv run python -m nanobanana_mcp_server.server
License
MIT License - see LICENSE for details.
Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions