
Gemini Workflow Bridge MCP Server

MCP server that bridges Claude Code to Gemini CLI for workflow tasks like codebase analysis, specification creation, and code review, leveraging Gemini's massive 2M token context window and cost-effectiveness for read-heavy operations.

Overview

This MCP server extends Claude Code's capabilities by providing tools that delegate specific workflow tasks to Google's Gemini models (such as Gemini 2.0 Flash) via the Gemini CLI. It's designed to optimize your development workflow by playing to each model's strengths:

  • Gemini: Heavy context loading, codebase analysis, spec generation (2M token window)
  • Claude Code: Precise code editing, implementation, and orchestration

Why CLI-Based?

CLI-Only Design (v1.0.0+): This server uses the Gemini CLI instead of API calls. Key benefits:

  • Zero API Costs: Uses your existing Gemini Code Assist subscription
  • Simple Auth: Reuses your CLI credentials, no API key management
  • No Extra Setup: If you have Gemini CLI installed, you're ready to go
  • Same Power: Access to all Gemini models including 2.0 Flash

Perfect for developers who already have Gemini Code Assist!
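Under the hood, CLI delegation can be as simple as piping a prompt to the `gemini` binary on stdin, the same pattern as the `echo "..." | gemini` verification step shown below. A minimal sketch of that plumbing; the `command` parameter is an illustrative knob for testing, not a real server option:

```python
import subprocess

def ask_cli(prompt: str, command: str = "gemini", timeout: int = 120) -> str:
    """Pipe a prompt to a CLI tool on stdin and return its stdout.

    Mirrors the shell pattern `echo "prompt" | gemini`; no API key is
    involved because the CLI reuses its own cached credentials.
    """
    result = subprocess.run(
        [command],
        input=prompt,
        capture_output=True,
        text=True,
        timeout=timeout,
        check=True,
    )
    return result.stdout.strip()
```

Swapping `command` for something like `cat` is a quick way to exercise the plumbing without calling Gemini at all.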

Features

Tools

  1. analyze_codebase_with_gemini - Analyze large codebases using Gemini's 2M token context
  2. create_specification_with_gemini - Generate detailed technical specifications
  3. review_code_with_gemini - Comprehensive code review with multiple focus areas
  4. generate_documentation_with_gemini - Create documentation with full codebase context
  5. ask_gemini - General-purpose queries with optional codebase context

Resources

  • workflow://specs/{name} - Access saved specifications
  • workflow://reviews/{name} - Access saved code reviews
  • workflow://context/{name} - Access cached codebase analysis
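These URIs follow a simple template: the segment after `workflow://` names a storage directory and `{name}` a file within it. A hypothetical resolver sketch, where the directory mapping mirrors the `DEFAULT_*_DIR` defaults shown later but the exact resolution scheme is an assumption:

```python
from pathlib import Path

# Assumed mapping from URI prefix to on-disk directory; the real server
# may resolve these differently.
RESOURCE_DIRS = {
    "specs": Path("./specs"),
    "reviews": Path("./reviews"),
    "context": Path("./.workflow-context"),
}

def resolve_resource(uri: str) -> Path:
    """Map a URI like workflow://specs/my-feature to a file path."""
    prefix = "workflow://"
    if not uri.startswith(prefix):
        raise ValueError(f"not a workflow URI: {uri}")
    kind, _, name = uri[len(prefix):].partition("/")
    if kind not in RESOURCE_DIRS or not name:
        raise ValueError(f"unknown resource: {uri}")
    return RESOURCE_DIRS[kind] / f"{name}.md"
```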

Installation

Prerequisites

  • Python 3.11+
  • Gemini CLI installed and authenticated

Step 1: Install Gemini CLI

npm install -g @google/gemini-cli

Step 2: Authenticate Gemini CLI

gemini
# Follow the authentication prompts
# Your credentials will be cached automatically

Step 3: Install the MCP Server

Via uvx (recommended):

uvx hitoshura25-gemini-workflow-bridge

Or via pip:

pip install hitoshura25-gemini-workflow-bridge

Configuration

Verify Gemini CLI is Ready

# Check CLI is installed
gemini --version
# Should show: 0.13.0 or higher

# Test CLI works
echo "What is 2+2?" | gemini
# Should return a response from Gemini

Optional: Configure Model

Create a .env file (optional):

# NO API KEY NEEDED!
# Use "auto" to let the CLI choose the best model automatically
# Pro models for complex tasks, Flash for simple/fast tasks
GEMINI_MODEL=auto

# Or specify a specific model:
# GEMINI_MODEL=gemini-2.0-flash
# GEMINI_MODEL=gemini-1.5-pro

DEFAULT_SPEC_DIR=./specs
DEFAULT_REVIEW_DIR=./reviews
DEFAULT_CONTEXT_DIR=./.workflow-context

See .env.example for all available options.
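Because every setting has a default, configuration can be read with plain environment lookups. A sketch of that pattern, using the variable names from the `.env` above; the `Settings` dataclass itself is illustrative, not the server's actual structure:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    model: str
    spec_dir: str
    review_dir: str
    context_dir: str

def load_settings(env=os.environ) -> Settings:
    """Read configuration from the environment, falling back to defaults."""
    return Settings(
        model=env.get("GEMINI_MODEL", "auto"),
        spec_dir=env.get("DEFAULT_SPEC_DIR", "./specs"),
        review_dir=env.get("DEFAULT_REVIEW_DIR", "./reviews"),
        context_dir=env.get("DEFAULT_CONTEXT_DIR", "./.workflow-context"),
    )
```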

Configure Claude Code

Add the server to your Claude Code MCP configuration (~/.claude/config.json or workspace .claude/config.json):

Using uvx (recommended):

{
  "mcpServers": {
    "gemini-workflow": {
      "command": "uvx",
      "args": ["hitoshura25-gemini-workflow-bridge"]
    }
  }
}

Or using pip:

{
  "mcpServers": {
    "gemini-workflow": {
      "command": "python",
      "args": ["-m", "hitoshura25_gemini_workflow_bridge.server"]
    }
  }
}

Note: No API key needed in config! The MCP server uses your Gemini CLI credentials automatically.

Usage Examples

Example 1: Analyze Codebase

Claude Code can use this to analyze your codebase before implementing a feature:

User: "I want to add Redis caching to the product catalog API"

Claude Code (internally):
[Calls: analyze_codebase_with_gemini({
  focus_description: "product catalog API structure and caching opportunities",
  file_patterns: ["*.py", "*.js"],
  directories: ["src/api", "src/services"]
})]

Response:
{
  "analysis": "The product catalog API is implemented in...",
  "architecture_summary": "Microservices architecture with...",
  "relevant_files": ["src/api/catalog.py", "src/services/product_service.py"],
  "cached_context_id": "ctx_abc123"
}

Example 2: Generate Specification

User: "Create a detailed spec for the Redis caching feature"

Claude Code (internally):
[Calls: create_specification_with_gemini({
  feature_description: "Redis caching for product catalog API",
  context_id: "ctx_abc123",  // Reuse cached analysis
  spec_template: "standard"
})]

Response:
{
  "spec_path": "./specs/redis-caching-for-product-catalog-api-spec.md",
  "implementation_tasks": [
    {"task": "Install redis-py dependency", "order": 1},
    {"task": "Create cache middleware", "order": 2},
    ...
  ],
  "estimated_complexity": "medium"
}

Example 3: Code Review

User: "Review my changes before I commit"

Claude Code (internally):
[Calls: review_code_with_gemini({
  review_focus: ["security", "performance"],
  spec_path: "./specs/redis-caching-spec.md"
})]

Response:
{
  "review_path": "./reviews/2025-01-10-123456-review.md",
  "has_blocking_issues": false,
  "summary": "Code looks good overall. Consider adding connection pooling."
}

Example 4: Generate Documentation

User: "Generate API documentation for the catalog service"

Claude Code (internally):
[Calls: generate_documentation_with_gemini({
  documentation_type: "api",
  scope: "product catalog service",
  include_examples: true
})]

Response:
{
  "doc_path": "./docs/api-documentation.md",
  "word_count": 2500
}

Example 5: Ask Gemini

User: "Ask Gemini about the best caching strategy for this codebase"

Claude Code (internally):
[Calls: ask_gemini({
  prompt: "What's the best caching strategy for this product catalog API?",
  include_codebase_context: true,
  temperature: 0.7
})]

Response:
{
  "response": "Based on your codebase architecture, I recommend...",
  "context_used": true
}

Tool Reference

analyze_codebase_with_gemini

Analyze codebase using Gemini's 2M token context window.

Parameters:

  • focus_description (string, required): What to focus on in the analysis
  • directories (array, optional): Directories to analyze
  • file_patterns (array, optional): File patterns to include (default: ["*.py", "*.js", "*.ts", "*.java", "*.go"])
  • exclude_patterns (array, optional): Patterns to exclude (default: ["node_modules/", "dist/", "build/"])

Returns:

{
  "analysis": "Detailed analysis text",
  "architecture_summary": "High-level overview",
  "relevant_files": ["file1.py", "file2.js"],
  "patterns_identified": ["pattern1", "pattern2"],
  "integration_points": ["point1", "point2"],
  "cached_context_id": "ctx_abc123"
}

create_specification_with_gemini

Generate detailed technical specification.

Parameters:

  • feature_description (string, required): What feature to specify
  • context_id (string, optional): Context ID from previous analysis
  • spec_template (string, optional): Template to use ("standard", "minimal", "comprehensive")
  • output_path (string, optional): Where to save the spec

Returns:

{
  "spec_path": "./specs/feature-spec.md",
  "spec_content": "Full markdown content",
  "implementation_tasks": [{"task": "...", "order": 1}],
  "estimated_complexity": "medium",
  "files_to_modify": ["file1.py"],
  "files_to_create": ["file2.py"]
}

review_code_with_gemini

Comprehensive code review.

Parameters:

  • files (array, optional): Files to review (default: git diff)
  • review_focus (array, optional): Focus areas (default: ["security", "performance", "best-practices", "testing"])
  • spec_path (string, optional): Specification to review against
  • output_path (string, optional): Where to save review

Returns:

{
  "review_path": "./reviews/2025-01-10-review.md",
  "review_content": "Full markdown review",
  "issues_found": [{
    "severity": "high",
    "category": "security",
    "file": "auth.py",
    "line": 42,
    "issue": "Potential SQL injection",
    "suggestion": "Use parameterized queries"
  }],
  "has_blocking_issues": true,
  "summary": "Review summary"
}

generate_documentation_with_gemini

Generate comprehensive documentation.

Parameters:

  • documentation_type (string, required): Type ("api", "architecture", "user-guide", "readme", "contributing")
  • scope (string, required): What to document
  • output_path (string, optional): Where to save documentation
  • include_examples (boolean, optional): Include code examples (default: true)

Returns:

{
  "doc_path": "./docs/api-documentation.md",
  "doc_content": "Full markdown documentation",
  "sections": ["overview", "endpoints", "examples"],
  "word_count": 2500
}

ask_gemini

General-purpose Gemini query.

Parameters:

  • prompt (string, required): Question or task
  • include_codebase_context (boolean, optional): Load full codebase (default: false)
  • context_id (string, optional): Reuse cached context
  • temperature (number, optional): Generation temperature 0.0-1.0 (default: 0.7)

Returns:

{
  "response": "Gemini's response",
  "context_used": true,
  "token_count": 150000
}

Architecture

┌─────────────────────────────────────────────────────┐
│                  Claude Code CLI                     │
│  (Orchestrator - makes all decisions)                │
│                                                      │
│  "Let me analyze the codebase with Gemini..."       │
│  [Invokes: analyze_codebase_with_gemini]            │
└──────────────────────┬──────────────────────────────┘
                       │ MCP Protocol
                       ↓
┌─────────────────────────────────────────────────────┐
│         MCP Server: gemini-workflow-bridge          │
│                                                      │
│  Tools:                                             │
│  • analyze_codebase_with_gemini                     │
│  • create_specification_with_gemini                 │
│  • review_code_with_gemini                          │
│  • generate_documentation_with_gemini               │
│  • ask_gemini                                       │
│                                                      │
│  Resources:                                         │
│  • workflow://specs/{feature-name}                  │
│  • workflow://reviews/{review-id}                   │
│  • workflow://context/{project-name}                │
└──────────────────────┬──────────────────────────────┘
                       │ Gemini CLI
                       ↓
┌─────────────────────────────────────────────────────┐
│              Google Gemini 2.0 Flash                │
│         (2M token context, fast, cost-effective)    │
└─────────────────────────────────────────────────────┘

Development

Setup Development Environment

# Clone the repository
git clone https://github.com/hitoshura25/gemini-workflow-bridge-mcp
cd gemini-workflow-bridge-mcp

# Create virtual environment
python -m venv venv
source venv/bin/activate  # or venv\Scripts\activate on Windows

# Install in development mode
pip install -e ".[dev]"

# Copy environment template (optional)
cp .env.example .env
# Edit .env to customize model or directories if needed

Run Tests

pytest

Run Linting

ruff check .
mypy .

License

Apache-2.0 License - see LICENSE for details.
