MCP server integrating GEPA (Genetic-Evolutionary Prompt Architecture) for automatic prompt optimization
Project description
I'm not quite sure this works as is 😂 You may need to ask the model you're working with to clean it up. Try a prompt like:
The GEPA MCP server isn't working. Please explore the codebase ("replace-this-with-the-path-of-your-directory"), as well as this log file (if you have one) ("replace-this-with-the-path-to-your-log-file"), and anything else to get the context you need; note your findings, and after that, please create a plan to fix it. Let me know when you're ready!
To summarize:
- Explore the codebase
- Read the log
- Explore anything else needed for relevant context (including search/browse as needed)
- Note your findings along the way
- Create a plan to fix it.
- Then [share your plan] or [go ahead and fix it]
- Note: I'm not sure that 'fix' prompt will work as written; treat it as an example and adapt it.
GEPA MCP Server
- Thank you to the brilliant researchers who created this system.
- Check out the original research here: https://arxiv.org/abs/2507.19457
- As well as their repository for the official implementation of the algorithm: https://github.com/gepa-ai/gepa
Genetic-Evolutionary Prompt Architecture for Claude Desktop (or any MCP client)
Research-backed automatic prompt optimization
A Model Context Protocol (MCP) server implementing the core GEPA (Genetic-Evolutionary Prompt Architecture) algorithm for automatic prompt optimization in Claude Desktop.
Key Research Benefits:
- 10-20% better prompts compared to reinforcement learning approaches
- 35x more efficient than traditional optimization methods
- Genetic-evolutionary approach using natural language reflection
🚀 Quick Installation
Prerequisites
- Python 3.10+
- Claude Desktop
- Gemini API key (free)
One-Command Setup
git clone https://github.com/developzir/gepa-mcp.git
cd gepa-mcp
./install.sh
The installer will:
- ✅ Install all dependencies automatically
- ✅ Safely merge with your existing Claude Desktop config
- ✅ Prompt for your Gemini API key
- ✅ Test the installation
🛠️ Three Core Tools
1. optimize_prompt - Core GEPA Algorithm
The original research implementation - Full genetic-evolutionary optimization
{
  "tool": "optimize_prompt",
  "seed_prompt": "Write a product description",
  "training_examples": [
    {
      "input": "wireless headphones",
      "expected_keywords": ["battery", "sound quality", "comfort", "features"]
    },
    {
      "input": "smartphone",
      "expected_keywords": ["performance", "camera", "display", "battery"]
    }
  ],
  "budget": 15
}
When to use: Complex prompts that need deep optimization with specific training data.
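The `expected_keywords` in each training example suggest a simple fitness signal: how much of the expected vocabulary shows up in the model's output. A minimal sketch of such a scorer (the function name and scoring rule are illustrative assumptions, not this server's actual code):

```python
# Hypothetical keyword-coverage scorer for a training example's
# expected_keywords (not the server's actual fitness function).
def keyword_coverage(output: str, expected_keywords: list[str]) -> float:
    """Fraction of expected keywords appearing in the output (case-insensitive)."""
    text = output.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords) if expected_keywords else 0.0

score = keyword_coverage(
    "These wireless headphones offer great sound quality, all-day comfort, "
    "and a long-lasting battery.",
    ["battery", "sound quality", "comfort", "features"],
)
print(score)  # 3 of 4 keywords present -> 0.75
```

Averaging this score over all training examples gives one plausible per-prompt fitness value for the evolutionary loop to rank candidates by.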
2. quick_prompt_improve - Fast Enhancement
GEPA-powered quick improvements - Single optimization cycle
{
  "tool": "quick_prompt_improve",
  "prompt": "Explain quantum computing",
  "context": "For a high school student with basic physics knowledge",
  "task_type": "educational"
}
When to use: Fast improvements when you don't have training data or need immediate results.
3. conversational_optimize - Context-Aware
Smart conversation-based optimization - Adapts to chat context
{
  "tool": "conversational_optimize",
  "prompt": "Help me debug this function",
  "conversation_history": "User struggling with Python loops, prefers simple examples",
  "user_satisfaction_signals": "Liked step-by-step explanations"
}
When to use: Mid-conversation prompt improvements based on what's working well.
🧬 How GEPA Works
The genetic-evolutionary approach:
1. Population Creation - Generates prompt variations
2. Fitness Testing - Evaluates against your training data
3. Selection - Keeps the best-performing prompts
4. Evolution - Creates new variations through crossover/mutation
5. Convergence - Returns the optimized prompt
Unlike traditional methods, GEPA uses natural language reflection to understand what makes prompts effective, leading to more human-aligned improvements.
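The loop above can be sketched in a few lines of Python. This is a toy select-and-mutate illustration, not the server's implementation: real GEPA scores candidates with an LLM and mutates them via natural-language reflection, whereas here the score and mutation functions are trivial stand-ins.

```python
import random

def evolve_prompts(seed, score_fn, mutate_fn, budget=10, pop_size=4):
    """Toy genetic loop: score the population, keep the best half,
    refill with mutated copies, repeat for `budget` generations."""
    population = [seed] + [mutate_fn(seed) for _ in range(pop_size - 1)]
    for _ in range(budget):
        ranked = sorted(population, key=score_fn, reverse=True)
        survivors = ranked[: pop_size // 2]           # selection
        children = [mutate_fn(p) for p in survivors]  # mutation
        population = survivors + children
    return max(population, key=score_fn)

# Demo with stand-in operators: "fitness" is just length, and
# "mutation" appends a random instruction fragment.
random.seed(0)
details = [" Be specific.", " Use examples.", " Mention key features."]
best = evolve_prompts(
    "Write a product description",
    score_fn=len,
    mutate_fn=lambda p: p + random.choice(details),
    budget=3,
)
print(best)
```

Swapping `score_fn` for an LLM-based evaluator over training examples, and `mutate_fn` for reflective rewriting, recovers the shape of the actual algorithm.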
📖 Usage Examples
Research Paper Summarization
# In Claude Desktop:
Use optimize_prompt with:
- seed_prompt: "Summarize this research paper"
- training_examples: [{"input": "ML paper on transformers", "expected_keywords": ["key findings", "methodology", "implications", "technical accuracy"]}]
- budget: 12
Code Explanation
# In Claude Desktop:
Use quick_prompt_improve with:
- prompt: "Explain this code"
- context: "For junior developers learning React"
- task_type: "educational"
Conversation Tuning
# In Claude Desktop:
Use conversational_optimize with:
- prompt: "Help me solve this problem"
- conversation_history: "User prefers concrete examples, gets confused by abstract explanations"
🔧 Configuration
Environment Setup (.env)
# Required
GEMINI_API_KEY=your_api_key_here
# Optional Tuning
GEMINI_MODEL=gemini-1.5-flash # or gemini-1.5-pro for higher quality
TEMPERATURE=0.7 # 0.1-1.0, lower = more focused
DEFAULT_BUDGET=10 # Default optimization rollouts
Best Practices
Training Data Tips:
- Use 3-5 diverse, realistic examples
- Focus on specific, measurable keywords
- Include variety in scenarios and contexts
Budget Guidelines:
- Budget 5-8: Quick testing and basic improvements
- Budget 10-15: Standard optimization (recommended)
- Budget 20+: Deep optimization for critical prompts
🔍 Troubleshooting
Tools not showing in Claude Desktop?
# Check config file (varies by OS):
# macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
# Linux: ~/.config/claude-desktop/claude_desktop_config.json
# Restart Claude Desktop completely
API errors?
# Verify your .env file:
cat .env # Should show: GEMINI_API_KEY=your_actual_key
# Test API access:
curl -H "x-goog-api-key: YOUR_KEY" https://generativelanguage.googleapis.com/v1/models
Installation issues?
# Reinstall from scratch:
rm .env && ./install.sh
📊 Performance
- Quality: 10-20% better prompts on average
- Speed: 30-120 seconds for full optimization
- Efficiency: 35x fewer API calls vs traditional methods
- Success Rate: 95%+ meaningful improvements
🫂 References & Citations
- Thank you to the brilliant minds who did this research and shared their work with everyone.

@misc{agrawal2025gepareflectivepromptevolution,
  title={GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning},
  author={Lakshya A Agrawal and Shangyin Tan and Dilara Soylu and Noah Ziems and Rishi Khare and Krista Opsahl-Ong and Arnav Singhvi and Herumb Shandilya and Michael J Ryan and Meng Jiang and Christopher Potts and Koushik Sen and Alexandros G. Dimakis and Ion Stoica and Dan Klein and Matei Zaharia and Omar Khattab},
  year={2025},
  eprint={2507.19457},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2507.19457},
}
🤝 Contributing
We welcome contributions to the core GEPA implementation:
- Performance optimizations
- Bug fixes and stability improvements
- Documentation enhancements
- Testing and validation
Extended Features: Experimental tools are preserved in the extended-features branch for future development.
📄 License
MIT License - Free for commercial and personal use.
🔬 Research
Based on "GEPA: Reflective Prompt Evolution Can Outperform Reinforcement Learning" - research demonstrating that natural language reflection provides richer optimization signals than traditional policy gradients alone.
Built With:
- Model Context Protocol (MCP) - Claude Desktop integration
- Google Gemini AI - Optimization engine
- uv - Python package management
🎯 Ready to optimize your prompts with research-backed evolution?
Run ./install.sh and start using GEPA in Claude Desktop!
Download files
File details
Details for the file iflow_mcp_developzir_gepa_mcp-0.1.0.tar.gz.
File metadata
- Download URL: iflow_mcp_developzir_gepa_mcp-0.1.0.tar.gz
- Upload date:
- Size: 78.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.0 (Debian GNU/Linux 13 "trixie")
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | ba44c0eb176d51131275b7cac54950b256f2b3c1d7a2d81052ed948916f217d2 |
| MD5 | b019dd9370ccb7a2d98df35bdb568ac6 |
| BLAKE2b-256 | 1e2bbe7330ef0dd1f4e85cd191a5caf0aceba4a7494205b5a95323ec820a634b |
File details
Details for the file iflow_mcp_developzir_gepa_mcp-0.1.0-py3-none-any.whl.
File metadata
- Download URL: iflow_mcp_developzir_gepa_mcp-0.1.0-py3-none-any.whl
- Upload date:
- Size: 29.6 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.10.0 (Debian GNU/Linux 13 "trixie")
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 7dc9a8e1ad7f68a323e48748fbc22270861dd032064fce0bd2e8bc7ea6b0a7a0 |
| MD5 | 36f9efaeb19490b39c3a8758a4d42cde |
| BLAKE2b-256 | 4e36bad622455e50f0daebedd070cf66b7336777fdbade5a8ca396644c87c7ed |