MCP Kit Python
MCP tooling for developing and optimizing multi-agent AI systems.
A comprehensive toolkit for working with the Model Context Protocol (MCP), providing seamless integration between AI agents and various data sources, APIs, and services. Whether you're building, testing, or deploying multi-agent systems, MCP Kit simplifies tool orchestration and provides powerful mocking capabilities for development.
Features
Flexible Target System
- MCP Servers: Connect to existing MCP servers (hosted or from specifications)
- OpenAPI Integration: Automatically convert REST APIs to MCP tools using OpenAPI/Swagger specs
- Mock Responses: Generate realistic test data using LLM or random generators
- Multiplexing: Combine multiple targets into a unified interface
Framework Adapters
- OpenAI Agents SDK: Native integration with OpenAI's agent framework
- LangGraph: Seamless tool integration for LangGraph workflows
- Generic Client Sessions: Direct MCP protocol communication
- Official MCP Server: Standard MCP server wrapper
Configuration-Driven Architecture
- YAML/JSON Configuration: Declarative setup for complex workflows
- Factory Pattern: Clean, testable component creation
- Environment Variables: Secure credential management
Advanced Response Generation
- LLM-Powered Mocking: Generate contextually appropriate responses using LLMs
- Random Data Generation: Create test data for development and testing
- Custom Generators: Implement your own response generation logic
Quick Start
Installation
```bash
uv add mcp-kit
```
Basic Usage
First, write the proxy config:

```yaml
# proxy_config.yaml
# A mocked REST API target, built from an OpenAPI spec, with LLM-generated responses
target:
  type: mocked
  base_target:
    type: oas
    name: base-oas-server
    spec_url: https://petstore3.swagger.io/api/v3/openapi.json
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano
```

Don't forget to set up the LLM API key:

```bash
# .env
OPENAI_API_KEY="your_openai_key"
```
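Whether mcp-kit loads the `.env` file for you automatically is not shown in this README; if your setup doesn't, the key just needs to be present in the process environment before any LLM call is made. A minimal stdlib-only loader, as a sketch (in practice you would likely use `python-dotenv` instead):

```python
import os


def load_env_file(path: str = ".env") -> None:
    """Load KEY=value pairs from a .env file into os.environ (never overwrites)."""
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            # Skip blank lines, comments, and anything without a '=' separator
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```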
Then you can use it like any other MCP server:

```python
# main.py
import asyncio

from mcp_kit import ProxyMCP


async def main():
    # Create the proxy from configuration
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Use it through the MCP client session adapter
    async with proxy.client_session_adapter() as session:
        tools = await session.list_tools()
        result = await session.call_tool("get_pet", {"pet_id": "777"})
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```
Core Concepts
Targets
Targets are the core abstraction in MCP Kit, representing different types of tool providers:
MCP Target
Connect to existing MCP servers:
```yaml
target:
  type: mcp
  name: my-mcp-server
  url: http://localhost:8080/mcp
  headers:
    Authorization: Bearer token123
```
OpenAPI Target
Convert REST APIs to MCP tools:
```yaml
target:
  type: oas
  name: petstore-api
  spec_url: https://petstore3.swagger.io/api/v3/openapi.json
```
Mocked Target
Generate fake responses for testing:
```yaml
target:
  type: mocked
  base_target:
    type: mcp
    name: test-server
    url: http://localhost:9000/mcp
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano
```
Multiplex Target
Combine multiple targets:
```yaml
target:
  type: multiplex
  name: combined-services
  targets:
    - type: mcp
      name: weather-service
      url: http://localhost:8080/mcp
    - type: oas
      name: petstore
      spec_url: https://petstore3.swagger.io/api/v3/openapi.json
```
Adapters
Adapters provide framework-specific interfaces for your targets:
- Client Session Adapter: Direct MCP protocol communication
- OpenAI Agents Adapter: Integration with OpenAI's agent framework
- LangGraph Adapter: Tools for LangGraph workflows
- Official MCP Server: Standard MCP server wrapper
Response Generators
Response Generators create mock responses for testing and development:
- LLM Generator: Uses language models to generate contextually appropriate responses
- Random Generator: Creates random test data
- Custom Generators: Implement your own logic
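The mocked-target examples in this README pair the base target with the LLM generator, but the same schema accepts the random generator. A minimal config sketch (reusing the petstore spec URL from the Quick Start):

```yaml
target:
  type: mocked
  base_target:
    type: oas
    name: petstore-api
    spec_url: https://petstore3.swagger.io/api/v3/openapi.json
  response_generator:
    type: random
```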
Examples
OpenAI Agents SDK Integration
```python
import asyncio

from agents import Agent, Runner
from mcp_kit import ProxyMCP


async def openai_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.openai_agents_mcp_server() as mcp_server:
        # Use the proxy as an MCP server with the OpenAI Agents SDK
        agent = Agent(
            name="research_agent",
            instructions="You are a research assistant with access to various tools.",
            model="gpt-4.1-nano",
            mcp_servers=[mcp_server],
        )
        response = await Runner.run(
            agent,
            "What's the weather like in San Francisco?",
        )
        print(response.final_output)


if __name__ == "__main__":
    asyncio.run(openai_example())
```
LangGraph Workflow Integration
```python
import asyncio

from langgraph.prebuilt import create_react_agent
from mcp_kit import ProxyMCP


async def langgraph_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Get LangChain-compatible tools
    client = proxy.langgraph_multi_server_mcp_client()
    async with client.session("your_server_name") as _:
        # Get the MCP tools as LangChain tools
        tools = await client.get_tools(server_name="your_server_name")

        # Create a ReAct agent
        agent = create_react_agent(model="google_genai:gemini-2.0-flash", tools=tools)

        # Run the workflow
        response = await agent.ainvoke({
            "messages": [{"role": "user", "content": "Analyze Q1 expenses"}]
        })

        # Extract the result
        final_message = response["messages"][-1]
        print(final_message.content)


if __name__ == "__main__":
    asyncio.run(langgraph_example())
```
Testing with Mocked Responses
```python
import asyncio

from mcp_kit import ProxyMCP


async def testing_example():
    # Configuration with LLM-powered mocking
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.client_session_adapter() as session:
        # These calls return realistic mock data
        tools = await session.list_tools()
        expenses = await session.call_tool("get_expenses", {"period": "Q1"})
        revenues = await session.call_tool("get_revenues", {"period": "Q1"})

        print(f"Available tools: {[tool.name for tool in tools.tools]}")
        print(f"Mock expenses: {expenses}")
        print(f"Mock revenues: {revenues}")


if __name__ == "__main__":
    asyncio.run(testing_example())
```
Configuration Examples
Development with Mocking
```yaml
# dev_proxy_config.yaml
target:
  type: mocked
  base_target:
    type: oas
    name: accounting-api
    spec_url: https://api.company.com/accounting/openapi.json
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano
```
Production MCP Server
```yaml
# prod_proxy_config.yaml
target:
  type: mcp
  name: production-accounting
  url: https://mcp.company.com/accounting
  headers:
    Authorization: Bearer ${PROD_API_KEY}
    X-Client-Version: "1.0.0"
```
Multi-Service Architecture
```yaml
# multi_proxy_config.yaml
target:
  type: multiplex
  name: enterprise-tools
  targets:
    - type: mcp
      name: crm-service
      url: https://mcp.company.com/crm
    - type: oas
      name: analytics-api
      spec_url: https://api.company.com/analytics/openapi.json
    - type: mocked
      base_target:
        type: mcp
        name: experimental-service
        url: https://beta.company.com/mcp
      response_generator:
        type: random
```
Advanced Configuration
Environment Variables
```bash
# .env file
OPENAI_API_KEY=your-openai-key
WEATHER_API_KEY=your-weather-key
```
Custom Response Generators
```python
from typing import Any

# Tool and content types come from the MCP SDK; exact import paths may vary by version
from mcp.types import Content, TextContent, Tool

from mcp_kit import ProxyMCP
from mcp_kit.generators import ResponseGenerator
from mcp_kit.targets import McpTarget, MockConfig, MockedTarget


class CustomGenerator(ResponseGenerator):
    async def generate(
        self, target_name: str, tool: Tool, arguments: dict[str, Any] | None = None
    ) -> list[Content]:
        # Your custom logic here
        return [
            TextContent(type="text", text=f"Custom response for {tool.name} on {target_name}")
        ]


# Use it programmatically instead of via YAML configuration
proxy = ProxyMCP(
    target=MockedTarget(
        base_target=McpTarget("test", "http://localhost:8080"),
        mock_config=MockConfig(response_generator=CustomGenerator()),
    )
)
```
Project Structure
```
mcp-kit-python/
├── src/mcp_kit/
│   ├── adapters/        # Framework adapters
│   │   ├── client_session.py
│   │   ├── openai.py
│   │   └── langgraph.py
│   ├── generators/      # Response generators
│   │   ├── llm.py
│   │   └── random.py
│   ├── targets/         # Target implementations
│   │   ├── mcp.py
│   │   ├── oas.py
│   │   ├── mocked.py
│   │   └── multiplex.py
│   ├── factory.py       # Factory pattern implementation
│   └── proxy.py         # Main ProxyMCP class
├── examples/            # Usage examples
│   ├── openai_agents_sdk/
│   ├── langgraph/
│   ├── mcp_client_session/
│   └── proxy_configs/
└── tests/               # Test suite
```
Contributing
We welcome contributions! Please see our Contributing Guide for details.
Development Setup
```bash
git clone https://github.com/agentiqs/mcp-kit-python.git
cd mcp-kit-python
uv sync --dev
pre-commit install
```
Running Tests
```bash
uv run pytest tests/ -v
```
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Support
Built with ❤️ by Agentiqs