# Gemini Workflow Bridge MCP Server

MCP server that bridges Claude Code to Gemini CLI for workflow tasks such as codebase analysis, specification creation, and code review, leveraging Gemini's 2M token context window and cost-effectiveness for read-heavy operations.
## Overview

This MCP server extends Claude Code's capabilities with tools that delegate specific workflow tasks to Google's Gemini 2.0 Flash model. It optimizes your development workflow by playing to each model's strengths:

- **Gemini**: heavy context loading, codebase analysis, spec generation (2M token window)
- **Claude Code**: precise code editing, implementation, and orchestration
## Why CLI-Based?

**CLI-Only Design (v1.0.0+):** This server uses the Gemini CLI instead of direct API calls. Key benefits:

- **Zero API costs**: uses your existing Gemini Code Assist subscription
- **Simple auth**: reuses your CLI credentials; no API key management
- **No extra setup**: if you have the Gemini CLI installed, you're ready to go
- **Same power**: access to all Gemini models, including 2.0 Flash

Perfect for developers who already have Gemini Code Assist!
## Features

### Tools

- `analyze_codebase_with_gemini` - Analyze large codebases using Gemini's 2M token context
- `create_specification_with_gemini` - Generate detailed technical specifications
- `review_code_with_gemini` - Comprehensive code review with multiple focus areas
- `generate_documentation_with_gemini` - Create documentation with full codebase context
- `ask_gemini` - General-purpose queries with optional codebase context
### Resources

- `workflow://specs/{name}` - Access saved specifications
- `workflow://reviews/{name}` - Access saved code reviews
- `workflow://context/{name}` - Access cached codebase analysis
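These resource URIs follow a simple `workflow://{kind}/{name}` shape, so a client can split one with a few lines of standard-library Python. This is an illustrative sketch, not the server's internal code:

```python
from urllib.parse import urlparse

def parse_workflow_uri(uri: str) -> tuple[str, str]:
    """Split a workflow:// URI into (kind, name).

    e.g. "workflow://specs/my-feature" -> ("specs", "my-feature")
    """
    parsed = urlparse(uri)
    if parsed.scheme != "workflow":
        raise ValueError(f"not a workflow URI: {uri}")
    # For a "workflow://specs/my-feature" URI, urlparse puts "specs"
    # in netloc and "/my-feature" in path.
    return parsed.netloc, parsed.path.lstrip("/")
```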
## Installation

### Prerequisites

- Python 3.11+
- Gemini CLI installed and authenticated

### Step 1: Install Gemini CLI

```bash
npm install -g @google/gemini-cli
```
### Step 2: Authenticate Gemini CLI

```bash
gemini
# Follow the authentication prompts
# Your credentials will be cached automatically
```
### Step 3: Install the MCP Server

Via uvx (recommended):

```bash
uvx hitoshura25-gemini-workflow-bridge
```

Or via pip:

```bash
pip install hitoshura25-gemini-workflow-bridge
```
## Configuration

### Verify Gemini CLI is Ready

```bash
# Check the CLI is installed
gemini --version
# Should show: 0.13.0 or higher

# Test the CLI works
echo "What is 2+2?" | gemini
# Should return a response from Gemini
```
### Optional: Configure Model and Context Cache

Create a `.env` file (optional):

```bash
# NO API KEY NEEDED!

# Use "auto" to let the CLI choose the best model automatically:
# Pro models for complex tasks, Flash for simple/fast tasks
GEMINI_MODEL=auto

# Or pin a specific model:
# GEMINI_MODEL=gemini-2.0-flash
# GEMINI_MODEL=gemini-1.5-pro

# How long to cache codebase context before reloading (in minutes)
# Default: 30
CONTEXT_CACHE_TTL_MINUTES=30

# Output directories
DEFAULT_SPEC_DIR=./specs
DEFAULT_REVIEW_DIR=./reviews
DEFAULT_CONTEXT_DIR=./.workflow-context
```

See `.env.example` for all available options.
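Settings like these are typically read from the environment with fallbacks to the documented defaults. A hypothetical sketch of that pattern (the server's actual config loading may differ):

```python
import os

def load_settings() -> dict:
    """Read the optional environment settings above, falling back to the documented defaults."""
    return {
        "model": os.environ.get("GEMINI_MODEL", "auto"),
        "cache_ttl_minutes": int(os.environ.get("CONTEXT_CACHE_TTL_MINUTES", "30")),
        "spec_dir": os.environ.get("DEFAULT_SPEC_DIR", "./specs"),
        "review_dir": os.environ.get("DEFAULT_REVIEW_DIR", "./reviews"),
        "context_dir": os.environ.get("DEFAULT_CONTEXT_DIR", "./.workflow-context"),
    }
```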
### Configure Claude Code

Add the server to your Claude Code MCP configuration (`~/.claude/config.json` or a workspace `.claude/config.json`).

Using uvx (recommended):
```json
{
  "mcpServers": {
    "gemini-workflow": {
      "command": "uvx",
      "args": ["hitoshura25-gemini-workflow-bridge"]
    }
  }
}
```
Or using pip:

```json
{
  "mcpServers": {
    "gemini-workflow": {
      "command": "python",
      "args": ["-m", "hitoshura25_gemini_workflow_bridge.server"]
    }
  }
}
```

**Note:** No API key is needed in the config; the MCP server uses your Gemini CLI credentials automatically.
## Usage Examples

### How It Works: Automatic Context Reuse

**Key features:**

- ✅ **Automatic context loading**: the first tool call automatically loads and analyzes your codebase
- ✅ **Automatic context reuse**: subsequent calls within 30 minutes reuse the same context (configurable via `CONTEXT_CACHE_TTL_MINUTES`)
- ✅ **Zero manual management**: no `context_id` parameters to track or pass around
- ✅ **Smart cache expiration**: context expires after the TTL and automatically reloads when needed
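The TTL behavior described above amounts to a timestamped single-entry cache. A minimal illustrative sketch (not the server's actual implementation):

```python
import time

class ContextCache:
    """Single-entry cache that expires after a TTL, mirroring the behavior described above."""

    def __init__(self, ttl_minutes: float = 30):
        self.ttl_seconds = ttl_minutes * 60
        self._entry = None  # (loaded_at, context) or None

    def put(self, context) -> None:
        """Store freshly loaded codebase context with the current timestamp."""
        self._entry = (time.monotonic(), context)

    def get(self):
        """Return the cached context, or None if missing or expired (forcing a reload)."""
        if self._entry is None:
            return None
        loaded_at, context = self._entry
        if time.monotonic() - loaded_at > self.ttl_seconds:
            self._entry = None  # expired: the next call must reload
            return None
        return context
```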
**Workflow example:**

```text
User: "Create a spec for adding 2FA authentication"

Claude Code:
[Calls: create_specification_with_gemini({
  feature_description: "2FA authentication"
})]

Behind the scenes:
1. ✅ Automatically loads the codebase (*.py, *.js, *.ts, etc.)
2. ✅ Caches context for 30 minutes (default TTL)
3. ✅ Generates a context-aware specification

User: "Now review my auth.py file"

Claude Code:
[Calls: review_code_with_gemini({
  files: ["auth.py", "middleware.py"]
})]

Behind the scenes:
1. ✅ Automatically reuses the cached context from the previous call (fast!)
2. ✅ No reload needed - same codebase understanding
3. ✅ Performs a context-aware code review
```

**Result:** a fast, seamless workflow with automatic context management.
**Configure cache TTL (optional):**

```bash
# .env file (default: 30 minutes)
CONTEXT_CACHE_TTL_MINUTES=60  # Keep context for 1 hour
```
### Example 1: Analyze Codebase

Claude Code can use this to analyze your codebase before implementing a feature:

```text
User: "I want to add Redis caching to the product catalog API"

Claude Code (internally):
[Calls: analyze_codebase_with_gemini({
  focus_description: "product catalog API structure and caching opportunities",
  file_patterns: ["*.py", "*.js"],
  directories: ["src/api", "src/services"]
})]
```

Response:

```json
{
  "analysis": "The product catalog API is implemented in...",
  "architecture_summary": "Microservices architecture with...",
  "relevant_files": ["src/api/catalog.py", "src/services/product_service.py"],
  "cached_context_id": "ctx_abc123"
}
```
### Example 2: Generate Specification

```text
User: "Create a detailed spec for the Redis caching feature"

Claude Code (internally):
[Calls: create_specification_with_gemini({
  feature_description: "Redis caching for product catalog API",
  spec_template: "standard"
})]
// The cached analysis from Example 1 is reused automatically
```

Response:

```json
{
  "spec_path": "./specs/redis-caching-for-product-catalog-api-spec.md",
  "implementation_tasks": [
    {"task": "Install redis-py dependency", "order": 1},
    {"task": "Create cache middleware", "order": 2},
    ...
  ],
  "estimated_complexity": "medium"
}
```
### Example 3: Code Review

```text
User: "Review my changes before I commit"

Claude Code (internally):
[Calls: review_code_with_gemini({
  review_focus: ["security", "performance"],
  spec_path: "./specs/redis-caching-spec.md"
})]
```

Response:

```json
{
  "review_path": "./reviews/2025-01-10-123456-review.md",
  "has_blocking_issues": false,
  "summary": "Code looks good overall. Consider adding connection pooling."
}
```
### Example 4: Generate Documentation

```text
User: "Generate API documentation for the catalog service"

Claude Code (internally):
[Calls: generate_documentation_with_gemini({
  documentation_type: "api",
  scope: "product catalog service",
  include_examples: true
})]
```

Response:

```json
{
  "doc_path": "./docs/api-documentation.md",
  "word_count": 2500
}
```
### Example 5: Ask Gemini

```text
User: "Ask Gemini about the best caching strategy for this codebase"

Claude Code (internally):
[Calls: ask_gemini({
  prompt: "What's the best caching strategy for this product catalog API?",
  include_codebase_context: true,
  temperature: 0.7
})]
```

Response:

```json
{
  "response": "Based on your codebase architecture, I recommend...",
  "context_used": true
}
```
## Tool Reference

### analyze_codebase_with_gemini

Analyze a codebase using Gemini's 2M token context window.

**Parameters:**

- `focus_description` (string, required): What to focus on in the analysis
- `directories` (array, optional): Directories to analyze
- `file_patterns` (array, optional): File patterns to include (default: `["*.py", "*.js", "*.ts", "*.java", "*.go"]`)
- `exclude_patterns` (array, optional): Patterns to exclude (default: `["node_modules/", "dist/", "build/"]`)

**Returns:**

```json
{
  "analysis": "Detailed analysis text",
  "architecture_summary": "High-level overview",
  "relevant_files": ["file1.py", "file2.js"],
  "patterns_identified": ["pattern1", "pattern2"],
  "integration_points": ["point1", "point2"],
  "cached_context_id": "ctx_abc123"
}
```
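The `file_patterns`/`exclude_patterns` defaults behave like ordinary glob filtering over a directory tree. A rough sketch of how such selection could work (illustrative only; the server's actual file walk may differ):

```python
from fnmatch import fnmatch
from pathlib import Path

def select_files(root: str,
                 file_patterns=("*.py", "*.js", "*.ts", "*.java", "*.go"),
                 exclude_patterns=("node_modules/", "dist/", "build/")) -> list[Path]:
    """Collect files under root matching the include patterns, skipping excluded directories."""
    excluded_dirs = {p.rstrip("/") for p in exclude_patterns}
    selected = []
    for path in sorted(Path(root).rglob("*")):
        if not path.is_file():
            continue
        if any(part in excluded_dirs for part in path.parts):
            continue  # inside an excluded directory such as node_modules/
        if any(fnmatch(path.name, pattern) for pattern in file_patterns):
            selected.append(path)
    return selected
```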
### create_specification_with_gemini

Generate a detailed technical specification with automatic codebase context.

**Parameters:**

- `feature_description` (string, required): What feature to specify
- `spec_template` (string, optional): Template to use (`"standard"`, `"minimal"`, `"comprehensive"`)
- `output_path` (string, optional): Where to save the spec

**Returns:**

```json
{
  "spec_path": "./specs/feature-spec.md",
  "spec_content": "Full markdown content",
  "implementation_tasks": [{"task": "...", "order": 1}],
  "estimated_complexity": "medium",
  "files_to_modify": ["file1.py"],
  "files_to_create": ["file2.py"]
}
```
Note: Context is automatically cached for reuse in subsequent tool calls within the session.
### review_code_with_gemini

Comprehensive code review with automatic codebase context.

**Parameters:**

- `files` (array, optional): Files to review (default: the current git diff)
- `review_focus` (array, optional): Focus areas (default: `["security", "performance", "best-practices", "testing"]`)
- `spec_path` (string, optional): Specification to review against
- `output_path` (string, optional): Where to save the review

**Returns:**

```json
{
  "review_path": "./reviews/2025-01-10-review.md",
  "review_content": "Full markdown review",
  "issues_found": [{
    "severity": "high",
    "category": "security",
    "file": "auth.py",
    "line": 42,
    "issue": "Potential SQL injection",
    "suggestion": "Use parameterized queries"
  }],
  "has_blocking_issues": true,
  "summary": "Review summary",
  "recommendations": ["Add input validation", "Use ORM"]
}
```
Note: Context is automatically cached for reuse in subsequent tool calls within the session.
### generate_documentation_with_gemini

Generate comprehensive documentation with automatic codebase context.

**Parameters:**

- `documentation_type` (string, required): Type (`"api"`, `"architecture"`, `"user-guide"`, `"readme"`, `"contributing"`)
- `scope` (string, required): What to document
- `output_path` (string, optional): Where to save the documentation
- `include_examples` (boolean, optional): Include code examples (default: `true`)

**Returns:**

```json
{
  "doc_path": "./docs/api-documentation.md",
  "doc_content": "Full markdown documentation",
  "sections": ["overview", "endpoints", "examples"],
  "word_count": 2500
}
```
Note: Context is automatically cached for reuse in subsequent tool calls within the session.
### ask_gemini

General-purpose Gemini query with optional codebase context.

**Parameters:**

- `prompt` (string, required): Question or task
- `include_codebase_context` (boolean, optional): Load the full codebase (default: `false`). If `true`, automatically loads or reuses cached context.
- `temperature` (number, optional): Generation temperature, 0.0-1.0 (default: `0.7`)

**Returns:**

```json
{
  "response": "Gemini's response",
  "context_used": true,
  "token_count": 150000
}
```
Note: Context is automatically cached for reuse in subsequent tool calls within the session.
## Architecture

```text
┌─────────────────────────────────────────────────────┐
│                 Claude Code CLI                     │
│       (Orchestrator - makes all decisions)          │
│                                                     │
│  "Let me analyze the codebase with Gemini..."       │
│  [Invokes: analyze_codebase_with_gemini]            │
└──────────────────────┬──────────────────────────────┘
                       │ MCP Protocol
                       ↓
┌─────────────────────────────────────────────────────┐
│        MCP Server: gemini-workflow-bridge           │
│                                                     │
│  Tools:                                             │
│   • analyze_codebase_with_gemini                    │
│   • create_specification_with_gemini                │
│   • review_code_with_gemini                         │
│   • generate_documentation_with_gemini              │
│   • ask_gemini                                      │
│                                                     │
│  Resources:                                         │
│   • workflow://specs/{feature-name}                 │
│   • workflow://reviews/{review-id}                  │
│   • workflow://context/{project-name}               │
└──────────────────────┬──────────────────────────────┘
                       │ Gemini CLI
                       ↓
┌─────────────────────────────────────────────────────┐
│              Google Gemini 2.0 Flash                │
│      (2M token context, fast, cost-effective)       │
└─────────────────────────────────────────────────────┘
```
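Conceptually, the middle layer's job is just to pipe a prompt to the CLI over stdin, as in the `echo "What is 2+2?" | gemini` check earlier. A hedged sketch of that core call (the server's actual invocation and any CLI flags may differ):

```python
import subprocess

def run_cli(prompt: str, cli: str = "gemini") -> str:
    """Send a prompt to a CLI over stdin and return its stdout, like `echo "..." | gemini`."""
    result = subprocess.run([cli], input=prompt, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{cli} exited with {result.returncode}: {result.stderr.strip()}")
    return result.stdout.strip()
```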
## Development

### Setup Development Environment

```bash
# Clone the repository
git clone https://github.com/hitoshura25/gemini-workflow-bridge-mcp
cd gemini-workflow-bridge-mcp

# Create a virtual environment
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows

# Install in development mode
pip install -e ".[dev]"

# Copy the environment template (optional)
cp .env.example .env
# Edit .env to customize the model or directories if needed
```
### Run Tests

```bash
pytest
```

### Run Linting

```bash
ruff check .
mypy .
```
## License

Apache-2.0 License - see LICENSE for details.

## Credits

- Built with MCP
- Powered by Google Gemini 2.0 Flash
- Generated with mcp-server-generator

## Support

- Issues: GitHub Issues