
MCP Kit Python

MCP tooling for developing and optimizing multi-agent AI systems


A comprehensive toolkit for working with the Model Context Protocol (MCP), providing seamless integration between AI agents and various data sources, APIs, and services. Whether you're building, testing, or deploying multi-agent systems, MCP Kit simplifies the complexity of tool orchestration and provides powerful mocking capabilities for development.

Features

Flexible Target System

  • MCP Servers: Connect to existing MCP servers (hosted or from specifications)
  • OpenAPI Integration: Automatically convert REST APIs to MCP tools using OpenAPI/Swagger specs
  • Mock Responses: Generate realistic test data using LLM or random generators
  • Multiplexing: Combine multiple targets into a unified interface

Framework Adapters

  • OpenAI Agents SDK: Native integration with OpenAI's agent framework
  • LangGraph: Seamless tool integration for LangGraph workflows
  • Generic Client Sessions: Direct MCP protocol communication
  • Official MCP Server: Standard MCP server wrapper

Configuration-Driven Architecture

  • YAML/JSON Configuration: Declarative setup for complex workflows
  • Factory Pattern: Clean, testable component creation
  • Environment Variables: Secure credential management
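To make the factory-pattern idea concrete, here is a toy sketch of how a config-driven factory can map a YAML `type` key to a target constructor. The classes and registry below are invented for illustration; they are not MCP Kit's actual implementation.

```python
# Illustrative sketch of a config-driven factory (not MCP Kit's real code):
# each "type" key from the YAML maps to a constructor in a registry.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class McpTarget:
    name: str
    url: str


@dataclass
class OasTarget:
    name: str
    spec_url: str


REGISTRY: dict[str, Callable[[dict], Any]] = {
    "mcp": lambda cfg: McpTarget(cfg["name"], cfg["url"]),
    "oas": lambda cfg: OasTarget(cfg["name"], cfg["spec_url"]),
}


def target_from_config(cfg: dict) -> Any:
    """Build a target from a parsed configuration dict."""
    return REGISTRY[cfg["type"]](cfg)


target = target_from_config(
    {"type": "oas", "name": "petstore", "spec_url": "https://petstore3.swagger.io/api/v3/openapi.json"}
)
print(type(target).__name__)  # OasTarget
```

This keeps component creation declarative and testable: adding a new target type is one new registry entry, with no changes to calling code.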

Advanced Response Generation

  • LLM-Powered Mocking: Generate contextually appropriate responses using LLMs
  • Random Data Generation: Create test data for development and testing
  • Custom Generators: Implement your own response generation logic

Quick Start

Installation

uv add mcp-kit

Basic Usage

First, write the proxy configuration:

# proxy_config.yaml
""" A mocked REST API target given the OpenAPI spec using LLM-generated responses
"""
target:
  type: mocked
  base_target:
    type: oas
    name: base-oas-server
    spec_url: https://petstore3.swagger.io/api/v3/openapi.json
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Don't forget to set up the LLM API key:

# .env
OPENAI_API_KEY="your_openai_key"

Then you can use it like any other MCP server:

# main.py
from mcp_kit import ProxyMCP


async def main():
    # Create proxy from configuration
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Use with MCP client session adapter
    async with proxy.client_session_adapter() as session:
        tools = await session.list_tools()
        result = await session.call_tool("get_pet", {"pet_id": "777"})
        print(result)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

Core Concepts

Targets

Targets are the core abstraction in MCP Kit, representing different types of tool providers:

MCP Target

Connect to existing MCP servers:

target:
  type: mcp
  name: my-mcp-server
  url: http://localhost:8080/mcp
  headers:
    Authorization: Bearer token123

OpenAPI Target

Convert REST APIs to MCP tools:

target:
  type: oas
  name: petstore-api
  spec_url: https://petstore3.swagger.io/api/v3/openapi.json
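Conceptually, each OpenAPI operation becomes one MCP tool. The sketch below shows the shape of that mapping; MCP Kit's real conversion also handles parameters, request bodies, and schemas.

```python
# Conceptual sketch only: mapping OpenAPI operations to tool descriptors.
# The spec fragment is hand-written for illustration.
spec = {
    "paths": {
        "/pet/{petId}": {
            "get": {"operationId": "getPetById", "summary": "Find pet by ID"},
        },
        "/pet": {
            "post": {"operationId": "addPet", "summary": "Add a new pet"},
        },
    }
}

# One tool per (path, method) pair, named after the operationId.
tools = [
    {"name": op["operationId"], "description": op.get("summary", "")}
    for methods in spec["paths"].values()
    for op in methods.values()
]
print(sorted(t["name"] for t in tools))  # ['addPet', 'getPetById']
```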

Mocked Target

Generate fake responses for testing:

target:
  type: mocked
  base_target:
    type: mcp
    name: test-server
    url: http://localhost:9000/mcp
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Multiplex Target

Combine multiple targets:

target:
  type: multiplex
  name: combined-services
  targets:
    - type: mcp
      name: weather-service
      url: http://localhost:8080/mcp
    - type: oas
      name: petstore
      spec_url: https://petstore3.swagger.io/api/v3/openapi.json
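The idea behind multiplexing can be sketched in a few lines: a multiplex target exposes the union of its sub-targets' tools behind a single interface. The stub classes below are invented for illustration and are not MCP Kit internals.

```python
# Toy sketch of multiplexing (invented classes, not MCP Kit's implementation).
class StubTarget:
    def __init__(self, name: str, tools: list[str]):
        self.name = name
        self.tools = tools

    def list_tools(self) -> list[str]:
        return self.tools


class MultiplexTarget:
    def __init__(self, targets: list[StubTarget]):
        self.targets = targets

    def list_tools(self) -> list[str]:
        # Flatten every sub-target's tool list into one unified interface
        return [tool for target in self.targets for tool in target.list_tools()]


combined = MultiplexTarget(
    [StubTarget("weather-service", ["get_forecast"]), StubTarget("petstore", ["get_pet"])]
)
print(combined.list_tools())  # ['get_forecast', 'get_pet']
```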

Adapters

Adapters provide framework-specific interfaces for your targets:

  • Client Session Adapter: Direct MCP protocol communication
  • OpenAI Agents Adapter: Integration with OpenAI's agent framework
  • LangGraph Adapter: Tools for LangGraph workflows
  • Official MCP Server: Standard MCP server wrapper

Response Generators

Response Generators create mock responses for testing and development:

  • LLM Generator: Uses language models to generate contextually appropriate responses
  • Random Generator: Creates random test data
  • Custom Generators: Implement your own logic
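As a rough illustration of random generation, the sketch below produces fake values keyed to JSON Schema types (MCP tool schemas are JSON Schema). This is not MCP Kit's actual generator, just the underlying idea.

```python
# Illustrative sketch of random test data driven by a JSON Schema fragment.
import random
import string


def random_value(schema: dict):
    kind = schema.get("type", "string")
    if kind == "integer":
        return random.randint(0, 100)
    if kind == "number":
        return random.uniform(0, 100)
    if kind == "boolean":
        return random.choice([True, False])
    if kind == "array":
        return [random_value(schema.get("items", {})) for _ in range(2)]
    if kind == "object":
        return {key: random_value(sub) for key, sub in schema.get("properties", {}).items()}
    # Fall back to a random lowercase string
    return "".join(random.choices(string.ascii_lowercase, k=8))


pet = random_value({
    "type": "object",
    "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
})
print(pet)
```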

Examples

OpenAI Agents SDK Integration

from mcp_kit import ProxyMCP
from agents import Agent, Runner, trace
import asyncio

async def openai_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.openai_agents_mcp_server() as mcp_server:
        # Use with OpenAI Agents SDK
        agent = Agent(
            name="research_agent",
            instructions="You are a research assistant with access to various tools.",
            model="gpt-4.1-nano",
            mcp_servers=[mcp_server]
        )

        response = await Runner.run(
            agent,
            "What's the weather like in San Francisco?"
        )
        print(response.final_output)

if __name__ == "__main__":
    asyncio.run(openai_example())

LangGraph Workflow Integration

from mcp_kit import ProxyMCP
from langgraph.prebuilt import create_react_agent
import asyncio

async def langgraph_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Get LangChain-compatible tools
    client = proxy.langgraph_multi_server_mcp_client()

    async with client.session("your_server_name") as _:
        # Get the MCP tools as LangChain tools
        tools = await client.get_tools(server_name="your_server_name")

        # Create ReAct agent
        agent = create_react_agent(model="google_genai:gemini-2.0-flash", tools=tools)

        # Run workflow
        response = await agent.ainvoke({
            "messages": [{"role": "user", "content": "Analyze Q1 expenses"}]
        })

        # Extract result
        final_message = response["messages"][-1]
        print(final_message.content)

if __name__ == "__main__":
    asyncio.run(langgraph_example())

Testing with Mocked Responses

from mcp_kit import ProxyMCP
import asyncio

async def testing_example():
    # Configuration with LLM-powered mocking
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.client_session_adapter() as session:
        # These calls will return realistic mock data
        tools = await session.list_tools()
        expenses = await session.call_tool("get_expenses", {"period": "Q1"})
        revenues = await session.call_tool("get_revenues", {"period": "Q1"})

        print(f"Available tools: {[tool.name for tool in tools.tools]}")
        print(f"Mock expenses: {expenses}")
        print(f"Mock revenues: {revenues}")

if __name__ == "__main__":
    asyncio.run(testing_example())

Configuration Examples

Development with Mocking

# dev_proxy_config.yaml
target:
  type: mocked
  base_target:
    type: oas
    name: accounting-api
    spec_url: https://api.company.com/accounting/openapi.json
  response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Production MCP Server

# prod_proxy_config.yaml
target:
  type: mcp
  name: production-accounting
  url: https://mcp.company.com/accounting
  headers:
    Authorization: Bearer ${PROD_API_KEY}
    X-Client-Version: "1.0.0"
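The `${PROD_API_KEY}` placeholder above suggests environment-variable substitution in config values. The exact behavior is assumed here; a minimal sketch of such expansion looks like:

```python
# Sketch of ${VAR} expansion in config values (assumed behavior; check the
# MCP Kit docs for the actual substitution rules).
import os
import re


def expand_env(value: str) -> str:
    # Replace each ${NAME} with the NAME environment variable (empty if unset)
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ.get(m.group(1), ""), value)


os.environ["PROD_API_KEY"] = "secret-token"
print(expand_env("Bearer ${PROD_API_KEY}"))  # Bearer secret-token
```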

Multi-Service Architecture

# multi_proxy_config.yaml
target:
  type: multiplex
  name: enterprise-tools
  targets:
    - type: mcp
      name: crm-service
      url: https://mcp.company.com/crm
    - type: oas
      name: analytics-api
      spec_url: https://api.company.com/analytics/openapi.json
    - type: mocked
      base_target:
        type: mcp
        name: experimental-service
        url: https://beta.company.com/mcp
      response_generator:
        type: random

Advanced Configuration

Environment Variables

# .env file
OPENAI_API_KEY=your-openai-key
WEATHER_API_KEY=your-weather-key
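Loading a `.env` file amounts to reading key=value pairs into the process environment. Real projects typically use python-dotenv; this stdlib-only sketch just shows what that entails.

```python
# Minimal .env loader sketch (stdlib only; python-dotenv is the usual choice).
import os


def load_env(path: str = ".env") -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables take precedence
            os.environ.setdefault(key.strip(), value.strip().strip('"'))
```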

Custom Response Generators

from typing import Any

# Tool and content types come from the MCP SDK; import paths may vary by version
from mcp.types import Content, TextContent, Tool

from mcp_kit import ProxyMCP
from mcp_kit.generators import ResponseGenerator

class CustomGenerator(ResponseGenerator):
    async def generate(self, target_name: str, tool: Tool, arguments: dict[str, Any] | None = None) -> list[Content]:
        # Your custom logic here
        return [TextContent(type="text", text=f"Custom response for {tool.name} on {target_name}")]

# Use programmatically (import paths for targets assumed from the package layout)
from mcp_kit.targets import McpTarget, MockedTarget, MockConfig

proxy = ProxyMCP(
    target=MockedTarget(
        base_target=McpTarget("test", "http://localhost:8080"),
        mock_config=MockConfig(response_generator=CustomGenerator())
    )
)

Project Structure

mcp-kit-python/
├── src/mcp_kit/
│   ├── adapters/          # Framework adapters
│   │   ├── client_session.py
│   │   ├── openai.py
│   │   └── langgraph.py
│   ├── generators/        # Response generators
│   │   ├── llm.py
│   │   └── random.py
│   ├── targets/          # Target implementations
│   │   ├── mcp.py
│   │   ├── oas.py
│   │   ├── mocked.py
│   │   └── multiplex.py
│   ├── factory.py        # Factory pattern implementation
│   └── proxy.py          # Main ProxyMCP class
├── examples/             # Usage examples
│   ├── openai_agents_sdk/
│   ├── langgraph/
│   ├── mcp_client_session/
│   └── proxy_configs/
└── tests/               # Test suite

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

git clone https://github.com/agentiqs/mcp-kit-python.git
cd mcp-kit-python
uv sync --dev
pre-commit install

Running Tests

uv run pytest tests/ -v

Documentation


License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Support


Built with ❤️ by Agentiqs
