MCP server that bridges Claude Code to Gemini CLI for workflow tasks like codebase analysis, specification creation, and code review

Gemini Workflow Bridge MCP

Gemini as Context Compression Engine + Claude as Reasoning Engine = A-Grade Results

Overview

This MCP server acts as a context compression layer between Claude Code and Gemini, playing to each model's strengths: Gemini reads large amounts of code on its free tier, and Claude reasons over the compressed facts it returns.

Key Features

  • Quality: A-grade specifications (Gemini provides facts, Claude does reasoning)
  • Cost: 47-61% reduction in Claude tokens (expensive operations move to free Gemini tier)
  • Compression: up to 174:1 token compression (e.g., ~50K tokens reduced to ~300-token summaries)
  • DX: Auto-generated workflows and slash commands for common tasks

Architecture

┌─────────────────────────────────────────┐
│   Claude Code (Reasoning Engine)        │
│   - Superior planning & specifications  │
│   - Precise code editing                │
│   - A-grade output quality              │
└──────────────┬──────────────────────────┘
               │ MCP Protocol
               ↓
┌─────────────────────────────────────────┐
│   MCP Server (Compression Layer)        │
│   - 50K tokens → 300 token summaries    │
│   - Fact extraction only                │
│   - Validation & consistency checks     │
└──────────────┬──────────────────────────┘
               │ Gemini CLI
               ↓
┌─────────────────────────────────────────┐
│   Gemini (Context Engine)               │
│   - 2M token window (free tier)         │
│   - Factual extraction only             │
│   - No opinions or planning             │
└─────────────────────────────────────────┘
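The compression layer above can be sketched roughly as follows. This is an illustrative sketch, not the package's actual API: the function names are hypothetical, and it assumes the Gemini CLI accepts a one-shot prompt via -p.

```python
# Hypothetical sketch of the compression layer: wrap a question in a
# strict fact-extraction prompt, hand it to the Gemini CLI, and cap the
# answer at a token budget. Names here are illustrative only.
import subprocess

MAX_TOKENS_PER_ANSWER = 300

def build_fact_prompt(question: str, scope: str) -> str:
    """Constrain Gemini to factual extraction with file:line references."""
    return (
        f"Answer strictly from the code under {scope}. "
        f"Report facts with file:line references only; no opinions, "
        f"no planning, at most {MAX_TOKENS_PER_ANSWER} tokens.\n"
        f"Question: {question}"
    )

def query_gemini(question: str, scope: str = "src/") -> str:
    # Non-interactive invocation; assumes the CLI supports `gemini -p <prompt>`.
    result = subprocess.run(
        ["gemini", "-p", build_fact_prompt(question, scope)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()
```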

Installation

Prerequisites

  1. Gemini CLI - Install and authenticate:

    npm install -g @google/gemini-cli
    gemini  # Follow authentication prompts
    
  2. Python 3.11+ with pip

Install the MCP Server

# Clone the repository
git clone https://github.com/hitoshura25/gemini-workflow-bridge-mcp
cd gemini-workflow-bridge-mcp

# Install dependencies
pip install -e .

Configure Claude Code

Add to your Claude Code MCP settings (typically claude_desktop_config.json):

{
  "mcpServers": {
    "gemini-workflow-bridge": {
      "command": "python",
      "args": ["-m", "hitoshura25_gemini_workflow_bridge"],
      "env": {
        "CONTEXT_CACHE_TTL_MINUTES": "30",
        "MAX_TOKENS_PER_ANSWER": "300",
        "TARGET_COMPRESSION_RATIO": "100"
      }
    }
  }
}

Quick Start

# 1. Extract facts about your codebase
query_codebase_tool(
    questions=["How is authentication implemented?"],
    scope="src/"
)
# Returns: Compressed facts with file:line references

# 2. Create specification using those facts (Claude does this)
# [Your reasoning creates A-grade spec here]

# 3. Validate specification
validate_against_codebase_tool(
    spec_content="...",
    validation_checks=["missing_files", "undefined_dependencies"]
)
# Returns: Completeness score, issues, suggestions

Documentation

Tools Overview

🔍 Tier 1: Fact Extraction

Tool                         Purpose                  Key Feature
query_codebase_tool()        Multi-question analysis  174:1 compression ratio
find_code_by_intent_tool()   Semantic search          Returns summaries, not full code
trace_feature_tool()         Follow execution flow    Step-by-step with data flow
list_error_patterns_tool()   Extract patterns         Filtering at the edge

✅ Tier 2: Validation

Tool                              Purpose
validate_against_codebase_tool()  Validate specs for completeness
check_consistency_tool()          Verify pattern alignment
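The validation tier reports a completeness score alongside issues. A minimal sketch of how such a score could be derived (illustrative only, not the server's actual scoring logic):

```python
# Illustrative scoring: each named check either passes or yields issues;
# the score is the fraction of checks that found nothing wrong.
def completeness_score(results: dict[str, list[str]]) -> float:
    """results maps check name -> list of issues found (empty list = pass)."""
    if not results:
        return 1.0
    passed = sum(1 for issues in results.values() if not issues)
    return passed / len(results)

score = completeness_score({
    "missing_files": [],
    "undefined_dependencies": ["redis client not declared"],
})
print(f"{score:.0%}")  # prints "50%"
```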

🚀 Tier 3: Workflow Automation

Tool                              Purpose
generate_feature_workflow_tool()  Generate executable workflows
generate_slash_command_tool()     Create custom slash commands

Example: Complete Feature Implementation

User: "Add Redis caching to product API"

# Step 1: Extract facts
→ query_codebase_tool(questions=[...])
← 52K tokens → 387 tokens (134:1 compression)

# Step 2: Claude creates A-grade spec using facts
→ [Your superior reasoning]
← High-quality specification

# Step 3: Validate spec
→ validate_against_codebase_tool(spec=...)
← Completeness: 92%, 1 minor issue

# Step 4: Implement
→ [Your precise code editing]

Result: ✅ A-grade spec, 61% token savings, 3.5 minutes
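The compression figure quoted in step 1 can be checked directly; this is just arithmetic on the numbers above:

```python
# Verify the ratio from step 1: 52K tokens in, 387 tokens out.
tokens_before = 52_000
tokens_after = 387
ratio = tokens_before / tokens_after
print(f"{ratio:.0f}:1 compression")  # prints "134:1 compression"
```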

How It Works

Aspect         Description
Spec Creation  Claude generates A-grade specifications
Token Usage    3,100 Claude tokens (61% reduction vs traditional approaches)
Gemini Role    Provides facts only
Claude Role    Creates from scratch with facts
Quality        A-grade
Workflows      Auto-generated

Configuration

Key environment variables (see .env.example for all):

CONTEXT_CACHE_TTL_MINUTES=30     # Cache duration
MAX_TOKENS_PER_ANSWER=300        # Compression target
TARGET_COMPRESSION_RATIO=100     # Aim for 100:1
GEMINI_MODEL=auto                # or specific model
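One way the server might read these variables, falling back to the documented defaults when unset (a sketch; the package's actual config loading may differ):

```python
import os

# Fall back to the documented defaults when a variable is not set.
cache_ttl_minutes = int(os.getenv("CONTEXT_CACHE_TTL_MINUTES", "30"))
max_tokens_per_answer = int(os.getenv("MAX_TOKENS_PER_ANSWER", "300"))
target_compression_ratio = int(os.getenv("TARGET_COMPRESSION_RATIO", "100"))
gemini_model = os.getenv("GEMINI_MODEL", "auto")  # "auto" lets the CLI choose
```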

Usage Example

# 1. Get facts
facts = query_codebase_tool(questions=[...])

# 2. Create spec (Claude does this with superior reasoning)
spec = create_your_a_grade_spec(facts)

# 3. Validate
validate_against_codebase_tool(spec=spec)

Troubleshooting

"Gemini CLI not found"

npm install -g @google/gemini-cli

"Empty response from Gemini"

gemini --version  # Check installation
gemini            # Re-authenticate if needed

Development

# Install dev dependencies
pip install -e ".[dev]"

# Run tests
pytest

# Run with debug logging
DEBUG_MODE=true python -m hitoshura25_gemini_workflow_bridge

Project Structure

hitoshura25_gemini_workflow_bridge/
├── tools/           # 8 tools (Tier 1, 2, 3)
├── prompts/         # Strict fact extraction prompts
├── workflows/       # Workflow templates
├── utils/           # Token counting, prompt loading
├── server.py        # MCP server
└── generator.py     # Legacy implementations

Success Metrics

  • 61% cost reduction in Claude tokens
  • Up to 174:1 compression (e.g., ~50K tokens → ~300-token summaries)
  • A-grade quality specifications
  • Progressive disclosure with workflows

Contributing

Contributions welcome! Please read:

  1. Implementation Plan
  2. Architecture Overview

Then submit a PR with tests.

License

MIT License - see LICENSE

Status: ✅ Production Ready · Last Updated: November 15, 2025

🌟 Star us on GitHub if you find this useful!
