
MCP Kit Python

MCP tooling for developing and optimizing multi-agent AI systems


A comprehensive toolkit for working with the Model Context Protocol (MCP), providing seamless integration between AI agents and various data sources, APIs, and services. Whether you're building, testing, or deploying multi-agent systems, MCP Kit simplifies the complexity of tool orchestration and provides powerful mocking capabilities for development.

Features

Flexible Target System

  • MCP Servers: Connect to existing MCP servers (hosted or from specifications)
  • OpenAPI Integration: Automatically convert REST APIs to MCP tools using OpenAPI/Swagger specs
  • Mock Responses: Generate realistic test data using LLM or random generators
  • Multiplexing: Combine multiple targets into a unified interface

Framework Adapters

  • OpenAI Agents SDK: Native integration with OpenAI's agent framework
  • LangGraph: Seamless tool integration for LangGraph workflows
  • Generic Client Sessions: Direct MCP protocol communication
  • Official MCP Server: Standard MCP server wrapper

Configuration-Driven Architecture

  • YAML/JSON Configuration: Declarative setup for complex workflows
  • Factory Pattern: Clean, testable component creation
  • Environment Variables: Secure credential management

Advanced Response Generation

  • LLM-Powered Mocking: Generate contextually appropriate responses using LLMs
  • Random Data Generation: Create test data for development and testing
  • Custom Generators: Implement your own response generation logic

Quick Start

Installation

uv add mcp-kit

Basic Usage

First, write the proxy config:

# proxy_config.yaml
# A mocked REST API target built from an OpenAPI spec, using LLM-generated responses
target:
  type: mocked
  base_target:
    type: oas
    name: base-oas-server
    spec_url: https://petstore3.swagger.io/api/v3/openapi.json
  tool_response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Don't forget to set up the LLM API key:

# .env
OPENAI_API_KEY="your_openai_key"

Then use the proxy like any other MCP server:

# main.py
from mcp_kit import ProxyMCP


async def main():
    # Create proxy from configuration
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Use with MCP client session adapter
    async with proxy.client_session_adapter() as session:
        tools = await session.list_tools()
        result = await session.call_tool("get_pet", {"pet_id": "777"})
        print(result)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())
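
Because configuration is declarative, the same proxy can also be described in JSON (a sketch, assuming `from_config` accepts JSON files as the YAML/JSON feature above suggests; the file name is arbitrary):

```python
import json

# JSON twin of the proxy_config.yaml example above (illustrative sketch)
config = {
    "target": {
        "type": "mocked",
        "base_target": {
            "type": "oas",
            "name": "base-oas-server",
            "spec_url": "https://petstore3.swagger.io/api/v3/openapi.json",
        },
        "tool_response_generator": {
            "type": "llm",
            "model": "openai/gpt-4.1-nano",
        },
    }
}

# Write the config so it can be passed to ProxyMCP.from_config("proxy_config.json")
with open("proxy_config.json", "w") as f:
    json.dump(config, f, indent=2)
```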

Core Concepts

Targets

Targets are the core abstraction in MCP Kit, representing different types of tool providers:

MCP Target

Connect to existing MCP servers:

target:
  type: mcp
  name: my-mcp-server
  url: http://localhost:8080/mcp
  headers:
    Authorization: Bearer token123

OpenAPI Target

Convert REST APIs to MCP tools:

target:
  type: oas
  name: petstore-api
  spec_url: https://petstore3.swagger.io/api/v3/openapi.json

Mocked Target

Generate fake responses for testing:

target:
  type: mocked
  base_target:
    type: mcp
    name: test-server
    url: http://localhost:9000/mcp
  tool_response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Multiplex Target

Combine multiple targets:

target:
  type: multiplex
  name: combined-services
  targets:
    - type: mcp
      name: weather-service
      url: http://localhost:8080/mcp
    - type: oas
      name: petstore
      spec_url: https://petstore3.swagger.io/api/v3/openapi.json

Adapters

Adapters provide framework-specific interfaces for your targets:

  • Client Session Adapter: Direct MCP protocol communication
  • OpenAI Agents Adapter: Integration with OpenAI's agent framework
  • LangGraph Adapter: Tools for LangGraph workflows
  • Official MCP Server: Standard MCP server wrapper

Response Generators

Response Generators create mock responses for testing and development:

  • LLM Generator: Uses language models to generate contextually appropriate responses
  • Random Generator: Creates random test data
  • Custom Generators: Implement your own logic
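
For instance, switching a mocked target from LLM-generated responses to random test data is a one-line change in the generator block (a config sketch reusing the petstore example above):

```yaml
target:
  type: mocked
  base_target:
    type: oas
    name: petstore-api
    spec_url: https://petstore3.swagger.io/api/v3/openapi.json
  tool_response_generator:
    type: random
```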

Examples

OpenAI Agents SDK Integration

from mcp_kit import ProxyMCP
from agents import Agent, Runner, trace
import asyncio

async def openai_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.openai_agents_mcp_server() as mcp_server:
        # Use with OpenAI Agents SDK
        agent = Agent(
            name="research_agent",
            instructions="You are a research assistant with access to various tools.",
            model="gpt-4.1-nano",
            mcp_servers=[mcp_server]
        )

        response = await Runner.run(
            agent,
            "What's the weather like in San Francisco?"
        )
        print(response.final_output)

if __name__ == "__main__":
    asyncio.run(openai_example())

LangGraph Workflow Integration

from mcp_kit import ProxyMCP
from langgraph.prebuilt import create_react_agent
import asyncio

async def langgraph_example():
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    # Get LangChain-compatible tools
    client = proxy.langgraph_multi_server_mcp_client()

    async with client.session("your_server_name") as _:
        # Get the MCP tools as LangChain tools
        tools = await client.get_tools(server_name="your_server_name")

        # Create ReAct agent
        agent = create_react_agent(model="google_genai:gemini-2.0-flash", tools=tools)

        # Run workflow
        response = await agent.ainvoke({
            "messages": [{"role": "user", "content": "Analyze Q1 expenses"}]
        })

        # Extract result
        final_message = response["messages"][-1]
        print(final_message.content)

if __name__ == "__main__":
    asyncio.run(langgraph_example())

Testing with Mocked Responses

from mcp_kit import ProxyMCP
import asyncio

async def testing_example():
    # Configuration with LLM-powered mocking
    proxy = ProxyMCP.from_config("proxy_config.yaml")

    async with proxy.client_session_adapter() as session:
        # These calls will return realistic mock data
        tools = await session.list_tools()
        expenses = await session.call_tool("get_expenses", {"period": "Q1"})
        revenues = await session.call_tool("get_revenues", {"period": "Q1"})

        print(f"Available tools: {[tool.name for tool in tools.tools]}")
        print(f"Mock expenses: {expenses}")
        print(f"Mock revenues: {revenues}")

if __name__ == "__main__":
    asyncio.run(testing_example())

Configuration Examples

Development with Mocking

# dev_proxy_config.yaml
target:
  type: mocked
  base_target:
    type: oas
    name: accounting-api
    spec_url: https://api.company.com/accounting/openapi.json
  tool_response_generator:
    type: llm
    model: openai/gpt-4.1-nano

Production MCP Server

# prod_proxy_config.yaml
target:
  type: mcp
  name: production-accounting
  url: https://mcp.company.com/accounting
  headers:
    Authorization: Bearer ${PROD_API_KEY}
    X-Client-Version: "1.0.0"

Multi-Service Architecture

# multi_proxy_config.yaml
target:
  type: multiplex
  name: enterprise-tools
  targets:
    - type: mcp
      name: crm-service
      url: https://mcp.company.com/crm
    - type: oas
      name: analytics-api
      spec_url: https://api.company.com/analytics/openapi.json
    - type: mocked
      base_target:
        type: mcp
        name: experimental-service
        url: https://beta.company.com/mcp
      tool_response_generator:
        type: random

Advanced Configuration

Environment Variables

# .env file
OPENAI_API_KEY=your-openai-key
WEATHER_API_KEY=your-weather-key
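
These keys need to be present in the process environment before the proxy is built. A minimal stdlib-only loader is sketched below (a hypothetical helper; real projects typically use the python-dotenv package instead, and `load_env` is not part of mcp-kit):

```python
import os
from pathlib import Path


def load_env(path: str = ".env") -> None:
    """Minimal .env loader using only the standard library (illustrative sketch)."""
    env_file = Path(path)
    if not env_file.exists():
        return
    for line in env_file.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue  # skip blanks, comments, and malformed lines
        key, _, value = line.partition("=")
        # Variables already set in the environment take precedence over the file
        os.environ.setdefault(key.strip(), value.strip().strip('"'))


load_env()
```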

Custom Response Generators

from typing import Any

# Tool/Content types come from the MCP SDK; exact names and import
# paths may vary by version
from mcp.types import Content, TextContent, Tool

from mcp_kit import ProxyMCP
from mcp_kit.generators import ToolResponseGenerator
from mcp_kit.targets import McpTarget, MockConfig, MockedTarget


class CustomGenerator(ToolResponseGenerator):
    async def generate(
        self, target_name: str, tool: Tool, arguments: dict[str, Any] | None = None
    ) -> list[Content]:
        # Your custom logic here
        return [TextContent(type="text", text=f"Custom response for {tool.name} on {target_name}")]


# Custom generators are wired up programmatically rather than via configuration
proxy = ProxyMCP(
    target=MockedTarget(
        base_target=McpTarget("test", "http://localhost:8080"),
        mock_config=MockConfig(tool_response_generator=CustomGenerator()),
    )
)

Project Structure

mcp-kit-python/
├── src/mcp_kit/
│   ├── adapters/          # Framework adapters
│   │   ├── client_session.py
│   │   ├── openai.py
│   │   └── langgraph.py
│   ├── generators/        # Response generators
│   │   ├── llm.py
│   │   └── random.py
│   ├── targets/          # Target implementations
│   │   ├── mcp.py
│   │   ├── oas.py
│   │   ├── mocked.py
│   │   └── multiplex.py
│   ├── factory.py        # Factory pattern implementation
│   └── proxy.py          # Main ProxyMCP class
├── examples/             # Usage examples
│   ├── openai_agents_sdk/
│   ├── langgraph/
│   ├── mcp_client_session/
│   └── proxy_configs/
└── tests/               # Test suite

Contributing

We welcome contributions! Please see our Contributing Guide for details.

Development Setup

git clone https://github.com/agentiqs/mcp-kit-python.git
cd mcp-kit-python
uv sync --dev
pre-commit install

Running Tests

uv run pytest tests/ -v

Documentation


License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Support


Built with ❤️ by Agentiqs
