An opinionated Python library for building MCP (Model Context Protocol) servers with presets, agents, and tools.
# mcp_arena
mcp_arena is a production-ready Python library for building MCP (Model Context Protocol) servers with intelligent agent orchestration and domain-specific presets.
## Features

- Ready-to-use MCP servers for popular platforms (GitHub, Slack, Notion, AWS, and more)
- Intelligent agents with reflection, planning, and routing capabilities
- Zero-configuration setup for common use cases
- Extensible architecture built on SOLID principles
- Modular design: use only what you need
## Quick Start

### Installation

```bash
# Core library
pip install mcp-arena

# With specific presets
pip install "mcp-arena[github,slack,notion]"

# All presets
pip install "mcp-arena[all]"
```
### Basic Usage

```python
from mcp_arena.presents.github import GithubMCPServer

# Zero-config GitHub MCP server
mcp_server = GithubMCPServer(token="your_github_token")
mcp_server.run()
```
### Using Tools Directly

```python
from mcp_arena.presents.github import GithubMCPServer
from mcp_arena.tools.github import GithubTools

# Create the GitHub MCP server first
mcp_server = GithubMCPServer(token="your_token")

# Create the tools wrapper
tool = GithubTools(server=mcp_server)
tools = tool.get_list_of_tools()

# Add a simple tool
@mcp_server.tool()
def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

# Add a dynamic greeting resource
@mcp_server.resource("greeting://{name}")
def get_greeting(name: str) -> str:
    """Get a personalized greeting."""
    return f"Hello, {name}!"

# Add a prompt
@mcp_server.prompt()
def greet_user(name: str, style: str = "friendly") -> str:
    """Generate a greeting prompt."""
    styles = {
        "friendly": "Please write a warm, friendly greeting",
        "formal": "Please write a formal, professional greeting",
        "casual": "Please write a casual, relaxed greeting",
    }
    return f"{styles.get(style, styles['friendly'])} for someone named {name}."
```
### Advanced Usage: Icons

```python
from mcp.server.fastmcp import Icon

from mcp_arena.presents.github import GithubMCPServer

# Create an icon from a file path or URL
icon = Icon(
    src="icon.png",
    mimeType="image/png",
    sizes="64x64",
)

# Add icons to the server
mcp = GithubMCPServer(
    "My Server",
    website_url="https://example.com",
    token="*******",
    icons=[icon],
)

# Add icons to tools, resources, and prompts
@mcp.tool(icons=[icon])
def my_tool():
    """Tool with an icon."""
    return "result"

@mcp.resource("demo://resource", icons=[icon])
def my_resource():
    """Resource with an icon."""
    return "content"
```
### With Agent Orchestration

```python
from mcp_arena.agent.react_agent import ReactAgent
from mcp_arena.presents.github import GithubMCPServer

# Create the MCP server
mcp_server = GithubMCPServer(token="your_token")

# Create an agent separately
agent = ReactAgent(llm=None, memory_type="conversation")

# Run the server
mcp_server.run()
```
## LangChain Integration

### Using the MCP Arena Wrapper

```python
from mcp_arena.presents.github import GithubMCPServer
from mcp_arena.wrapper.langchain_wrapper import MCPLangChainWrapper

# Create the MCP server
github_server = GithubMCPServer(token="your_token")

# Wrap it for LangChain
wrapper = MCPLangChainWrapper(
    servers={"github": github_server},
    auto_start=True,
)

# Connect and create an agent (inside an async context)
await wrapper.connect()
agent = wrapper.create_agent(
    llm="gpt-4-turbo",
    system_prompt="You are a GitHub assistant",
)
```
### Direct langchain_mcp_adapters Usage

```python
from langchain.agents import create_agent
from langchain_mcp_adapters.client import MultiServerMCPClient

# The stdio entry below launches the GitHub server script as a subprocess,
# so there is no need to call GithubMCPServer.run() in this process.
client = MultiServerMCPClient(
    {
        "github": {
            "transport": "stdio",
            "command": "python",
            "args": ["/path/to/github_server_script.py"],
        },
        "math": {
            "transport": "http",
            "url": "http://localhost:8001/mcp",
        },
    }
)

tools = await client.get_tools()
agent = create_agent(
    "claude-sonnet-4-5-20250929",
    tools,
)

# Use the agent
github_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "List my GitHub repositories"}]}
)
math_response = await agent.ainvoke(
    {"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]}
)
```
## Available Presets

### Development Platforms

- GitHub - Repositories, issues, PRs, workflows
- GitLab - Projects, CI/CD, issues
- Bitbucket - Repositories and pipelines

### Data & Storage

- PostgreSQL - Database operations
- MongoDB - Document operations
- Redis - Cache and data structures
- VectorDB - Vector database operations

### Communication

- Slack - Channels, messages, workflows
- WhatsApp - Messaging via Twilio API
- Gmail - Email management and sending
- Outlook - Microsoft 365 email and calendar
- Discord - Servers and channels
- Teams - Microsoft Teams integration

### Productivity

- Notion - Databases, pages, blocks
- Confluence - Spaces and pages
- Jira - Projects, issues, workflows

### Cloud Services

- AWS S3 - Storage operations
- Azure Blob - Azure storage
- Google Cloud Storage - GCP storage

### System Operations

- Local Operations - File system and system ops
- Docker - Container management
- Kubernetes - Cluster operations
## Agent Types

### Reflection Agent

A self-improving agent that iteratively critiques and refines its own responses.

```python
from mcp_arena.agent.reflection_agent import ReflectionAgent

agent = ReflectionAgent(
    llm=None,
    memory_type="conversation",
)
```
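Conceptually, a reflection agent runs a generate–critique–revise loop until its critic is satisfied or a round limit is hit. The sketch below is only a toy model of that loop, with plain functions standing in for LLM calls; it is not the actual `ReflectionAgent` internals.

```python
def reflect_loop(task, generate, critique, max_rounds=3):
    """Generate a draft, then revise it until the critic approves or rounds run out."""
    draft = generate(task, feedback=None)
    for _ in range(max_rounds):
        feedback = critique(task, draft)
        if feedback is None:  # critic is satisfied
            return draft
        draft = generate(task, feedback=feedback)
    return draft

# Stub "LLM" calls for illustration only
def generate(task, feedback=None):
    return f"answer({task})" if feedback is None else f"revised({task})"

def critique(task, draft):
    # Approve anything that has already been revised once
    return None if draft.startswith("revised") else "be more specific"

print(reflect_loop("summarize repo", generate, critique))  # revised(summarize repo)
```

In a real agent, `generate` and `critique` would each be prompts to the underlying LLM, and the feedback string would be folded into the next generation prompt.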
### ReAct Agent

A systematic reasoning-and-acting cycle for complex problem-solving.

```python
from mcp_arena.agent.react_agent import ReactAgent

agent = ReactAgent(
    llm=None,
    memory_type="conversation",
)
```
### Planning Agent

Goal decomposition and step-by-step execution for complex tasks.

```python
from mcp_arena.agent.planning_agent import PlanningAgent

agent = PlanningAgent(
    llm=None,
    memory_type="conversation",
)
```
### Router Agent

Dynamic agent selection based on task requirements.

```python
from mcp_arena.agent.router import AgentRouter

router = AgentRouter()

# Add routing rules (your_llm is your LLM instance)
router.add_route(
    condition=lambda input_text: "github" in input_text.lower(),
    agent_type="react",
    config={"llm": your_llm},
)
router.add_route(
    condition=lambda input_text: "reflect" in input_text.lower(),
    agent_type="reflection",
    config={"llm": your_llm},
)
```
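Under the hood, this style of routing is first-match dispatch over (condition, handler) pairs: rules are checked in registration order and the first matching condition wins. A minimal, self-contained sketch of the idea in plain Python (illustrative only, not the `AgentRouter` implementation):

```python
class SimpleRouter:
    """First-match routing: each rule pairs a predicate with a handler."""

    def __init__(self, default=None):
        self.routes = []
        self.default = default

    def add_route(self, condition, handler):
        self.routes.append((condition, handler))

    def dispatch(self, text):
        # Rules are tried in registration order; first match wins
        for condition, handler in self.routes:
            if condition(text):
                return handler(text)
        if self.default is not None:
            return self.default(text)
        raise ValueError(f"no route matched: {text!r}")

router = SimpleRouter(default=lambda t: "general")
router.add_route(lambda t: "github" in t.lower(), lambda t: "react")
router.add_route(lambda t: "reflect" in t.lower(), lambda t: "reflection")

print(router.dispatch("List my GitHub repos"))   # react
print(router.dispatch("Reflect on this essay"))  # reflection
print(router.dispatch("What's 2 + 2?"))          # general
```

Because order matters, register the most specific conditions first and keep a default route for anything that falls through.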
## Custom Tools

Extend any preset with custom tools:

```python
from mcp_arena.presents.github import GithubMCPServer
from mcp_arena.tools.base import tool

@tool(description="Custom repository analyzer")
def analyze_repo(repo: str) -> str:
    return f"Analysis for {repo}"

server = GithubMCPServer(
    token="your_token",
    extra_tools=[analyze_repo],
)
```
## Async LangChain Integration

Integrate mcp_arena MCP servers with LangChain agents for multi-service automation:

```python
from langchain_openai import ChatOpenAI

from mcp_arena.wrapper.langchain_integration import AsyncMCPLangChainIntegration

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")

# Create the integration with automatic setup
async with AsyncMCPLangChainIntegration(llm) as integration:
    # Add your MCP servers
    integration.add_github_server(token="your_github_token")
    integration.add_slack_server(bot_token="xoxb-your-slack-token")
    integration.add_gmail_server(
        credentials_path="path/to/credentials.json",
        token_path="path/to/token.json",
    )

    # Use the unified agent
    response = await integration.invoke(
        "Check my latest GitHub commits and summarize important emails"
    )
    print(response)
```
### Quick Setup Examples

GitHub agent:

```python
async with AsyncMCPLangChainIntegration(llm) as integration:
    integration.add_github_server(token="your_token")
    response = await integration.invoke("List my GitHub repositories")
```

Multi-service agent:

```python
async with AsyncMCPLangChainIntegration(llm) as integration:
    integration.add_github_server(token="github_token")
    integration.add_slack_server(bot_token="slack_token")
    response = await integration.invoke("Deploy latest code and notify in Slack")
```

Installation:

```bash
pip install langchain-openai langchain-mcp-adapters
pip install "mcp_arena[communication]"
```
## Custom MCP Server

Build from scratch for full control:

```python
from mcp_arena.mcp.server import BaseMCPServer
from mcp_arena.tools.base import tool

@tool(description="Search internal docs")
def search_docs(query: str) -> str:
    return f"Results for {query}"

class CustomMCPServer(BaseMCPServer):
    def _register_tools(self):
        self.add_tool(search_docs)

server = CustomMCPServer(
    name="custom-server",
    description="Custom MCP server",
)
server.run()
```
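Conceptually, the registry that `_register_tools()` populates is a name-to-callable map that the server then exposes over MCP. A toy sketch of that pattern as a mental model (not `BaseMCPServer`'s actual implementation):

```python
class ToyToolRegistry:
    """Minimal name -> callable registry, as a mental model for add_tool()."""

    def __init__(self):
        self._tools = {}

    def add_tool(self, fn, name=None):
        # Default the tool name to the function name, as decorators typically do
        self._tools[name or fn.__name__] = fn

    def list_tools(self):
        return sorted(self._tools)

    def call(self, name, *args, **kwargs):
        return self._tools[name](*args, **kwargs)

registry = ToyToolRegistry()

def search_docs(query: str) -> str:
    return f"Results for {query}"

registry.add_tool(search_docs)
print(registry.list_tools())                # ['search_docs']
print(registry.call("search_docs", "mcp"))  # Results for mcp
```

On a real MCP server, each registered callable's signature and docstring are also turned into a schema so clients can discover and invoke tools by name.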
## Documentation
- Installation Guide - Detailed installation instructions for all presets and communication services
- MCP Servers Guide - Comprehensive guide to all 17 available MCP servers
- Agent Guide - Using and configuring intelligent agents
- Tools Guide - Tool development and integration
- LangChain Integration - Integrate MCP servers with LangChain agents
- Quick Start - Get started in minutes
- Tutorial - Step-by-step tutorial
## Architecture

```
      MCP Client
          │
          ▼
┌───────────────────┐
│ MCP Server        │  ← Core Layer
│  - Protocol       │
│  - Auth           │
│  - Tool Registry  │
└───────────────────┘
          │
          ▼
┌───────────────────┐
│ Agent System      │  ← Intelligence Layer
│  - Reflection     │
│  - ReAct          │
│  - Planning       │
│  - Router         │
└───────────────────┘
          │
          ▼
┌───────────────────┐
│ Tool Ecosystem    │  ← Execution Layer
│  - Presets        │
│  - Custom Tools   │
│  - Orchestration  │
└───────────────────┘
```
## Installation Options

```bash
# Core only
pip install "mcp-arena[core]"

# Development platforms
pip install "mcp-arena[github,gitlab,bitbucket]"

# Data & storage
pip install "mcp-arena[postgres,mongodb,redis,vectordb]"

# Communication
pip install "mcp-arena[slack,whatsapp,gmail,outlook]"

# All communication services
pip install "mcp-arena[communication]"

# Productivity
pip install "mcp-arena[notion,confluence,jira]"

# Cloud services
pip install "mcp-arena[aws,docker,kubernetes]"

# System operations
pip install "mcp-arena[local_operation]"

# Agent framework
pip install "mcp-arena[agents]"

# All presets
pip install "mcp-arena[all]"

# Everything, including dev tools
pip install "mcp-arena[complete]"
```
## Contributing

We welcome contributions! Please see our Contributing Guide for details.

### Development Setup

```bash
# Clone the repository
git clone https://github.com/SatyamSingh8306/mcp_arena.git
cd mcp_arena

# Install in development mode
pip install -e ".[dev]"

# Run tests
pytest

# Run linting and type checks
black .
isort .
mypy .
```
### Priority Areas
- New preset implementations
- Agent pattern improvements
- Documentation and examples
- Bug fixes and performance
## Requirements
- Python 3.12+
- MCP client compatible with Model Context Protocol v1.0+
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Links
- Documentation - Complete documentation library
- Installation Guide - Installation instructions
- MCP Servers Guide - Server documentation
- LangChain Integration - LangChain integration guide
- Repository
- Issues
- PyPI
## Status

Version: 0.2.1 (production-ready)

Stable features:

- MCP server base classes
- 17 production-ready presets
- 4 agent types
- Tool registration system
- SOLID architecture
- Communication services (Gmail, Outlook, Slack, WhatsApp)

Evolving:

- Agent interfaces may change based on user feedback
- New preset additions
- Performance optimizations

Production readiness:

- Comprehensive documentation
- Active development
- Community support