
MCPHero - MCP as tools / MCP as functions

A library for using MCP servers as tools / functions in native AI client libraries.

Inspiration

Everyone uses MCP now, but many projects still rely on AI client libraries with no native MCP support. Libraries like openai and google-genai only support plain tool/function calling. This project makes it easy to connect MCP servers to those libraries as tools.

Concept

Two main flows (sketched below):

  1. list_tools - call the MCP server over HTTP to fetch the tool definitions, then map them to the AI library's tool definitions
  2. process_tool_calls - take the AI library's tool_calls, parse them, send the requests to the MCP servers, and return the results
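
In practice the two flows combine into a simple agent loop. A minimal, provider-agnostic sketch (adapter methods follow the API documented below; llm_create and llm_extract_tool_calls are hypothetical stand-ins for your AI library):

# Runs inside an async function. llm_create and llm_extract_tool_calls
# are hypothetical placeholders for whichever AI library you use.
tools = await adapter.discover_tools()        # flow 1: fetch and map tool definitions
response = llm_create(messages, tools=tools)  # ask the model, offering the tools
tool_calls = llm_extract_tool_calls(response)
if tool_calls:
    # flow 2: execute the calls against the MCP server(s) and feed results back
    results = await adapter.process_tool_calls(tool_calls)
    messages.extend(results)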

Installation

Base (no LLM SDK dependency):

pip install mcphero

For OpenAI support:

pip install "mcphero[openai]"

For Google Gemini support:

pip install "mcphero[google-genai]"

Quick Start

Generic (provider-agnostic)

Use MCPToolAdapter when your framework has its own tool-call loop, or when you just need raw MCP tool execution without any LLM SDK dependency.

import asyncio
from mcphero import MCPToolAdapter, GenericToolCall

async def main():
    adapter = MCPToolAdapter("https://api.mcphero.app/mcp/your-server-id")

    # Discover available tools
    tools = await adapter.discover_tools()
    for tool in tools:
        print(tool.name, tool.description)

    # Execute tool calls directly
    results = await adapter.process_tool_calls([
        GenericToolCall(name="get_weather", arguments={"city": "London"}, id="1"),
    ])
    for result in results:
        print(result.content)

asyncio.run(main())

Or call a single tool directly:

result = await adapter.call_tool("get_weather", {"city": "London"})
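
call_tool returns a JsonRpcResponse (see API Reference below). A sketch of inspecting it, assuming the type mirrors the JSON-RPC 2.0 envelope with result and error attributes; check the actual field names in your installed version:

result = await adapter.call_tool("get_weather", {"city": "London"})
# Attribute names below are assumptions based on the JSON-RPC 2.0 envelope.
if result.error is not None:
    print("tool call failed:", result.error)
else:
    print("tool result:", result.result)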

OpenAI

import asyncio
from openai import OpenAI
from mcphero import MCPToolAdapterOpenAI

async def main():
    adapter = MCPToolAdapterOpenAI("https://api.mcphero.app/mcp/your-server-id")
    client = OpenAI()

    # Get tool definitions
    tools = await adapter.get_tool_definitions()

    # Make request with tools
    messages = [{"role": "user", "content": "What's the weather in London?"}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
    )

    # Process tool calls if present
    if response.choices[0].message.tool_calls:
        tool_results = await adapter.process_tool_calls(
            response.choices[0].message.tool_calls
        )

        # Continue conversation with results
        messages.append(response.choices[0].message)
        messages.extend(tool_results)

        final_response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
        )
        print(final_response.choices[0].message.content)

asyncio.run(main())
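
The example above handles a single round of tool calls, but a model may chain several rounds. A minimal loop reusing the same adapter, client, and tools (the iteration cap is an arbitrary safety limit, not a library feature):

# Runs inside an async function such as main() above.
messages = [{"role": "user", "content": "What's the weather in London?"}]
for _ in range(5):  # arbitrary cap to avoid an endless tool loop
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
    )
    message = response.choices[0].message
    if not message.tool_calls:
        print(message.content)
        break
    messages.append(message)
    messages.extend(await adapter.process_tool_calls(message.tool_calls))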

Google Gemini

import asyncio
from google import genai
from google.genai import types
from mcphero import MCPToolAdapterGemini

async def main():
    adapter = MCPToolAdapterGemini("https://api.mcphero.app/mcp/your-server-id")
    client = genai.Client(api_key="your-api-key")

    # Get tool definitions
    tool = await adapter.get_tool()

    # Make request with tools
    response = client.models.generate_content(
        model="gemini-2.5-flash",
        contents="What's the weather in London?",
        config=types.GenerateContentConfig(
            tools=[tool],
            automatic_function_calling=types.AutomaticFunctionCallingConfig(
                disable=True
            ),
        ),
    )

    # Process function calls if present
    if response.function_calls:
        results = await adapter.process_function_calls(response.function_calls)

        # Continue conversation with results
        contents = [
            types.Content(role="user", parts=[types.Part.from_text(text="What's the weather in London?")]),
            response.candidates[0].content,
            *results,
        ]

        final_response = client.models.generate_content(
            model="gemini-2.5-flash",
            contents=contents,
            config=types.GenerateContentConfig(tools=[tool]),
        )
        print(final_response.text)

asyncio.run(main())
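
If you would rather build the function-response Content yourself, the adapter also offers process_function_calls_as_parts (see API Reference), which returns types.Part objects instead of wrapped Content. A sketch; the role="tool" wrapper follows the google-genai convention for function responses:

# Alternative to process_function_calls: collect raw Parts and wrap them yourself.
parts = await adapter.process_function_calls_as_parts(response.function_calls)
contents.append(types.Content(role="tool", parts=parts))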

Multiple MCP Servers

Adapters natively support connecting to multiple MCP servers at once. Use MCPServerConfig to configure each server, then pass them as a list.

MCPServerConfig

from mcphero import MCPServerConfig

config = MCPServerConfig(
    url="https://api.mcphero.app/mcp/your-server-id",  # required
    name="weather",              # optional, auto-derived from URL if omitted
    timeout=30.0,                # optional, default 30s
    headers={                    # optional, auth headers for the server
        "Authorization": "Bearer your-token",
    },
    init_mode="auto",            # "auto" | "on_fail" | "none"
    tool_prefix="wx",            # optional, prefix for tool names from this server
)
| Field | Type | Default | Description |
|---|---|---|---|
| url | str | required | HTTP endpoint of the MCP server |
| name | str \| None | derived from URL | Identifier for the server (e.g. the last path segment) |
| timeout | float | 30.0 | Request timeout in seconds |
| headers | dict[str, str] \| None | None | Headers sent with every request (useful for auth) |
| init_mode | "auto" \| "on_fail" \| "none" | "auto" | When to run the MCP initialization handshake |
| tool_prefix | str \| None | None | Prefix applied to all tool names from this server |

init_mode options:

  • "auto" - initialize the connection before every request (default, safest)
  • "on_fail" - skip initialization, but retry with initialization if a request fails
  • "none" - never initialize (for servers that don't require it)

Multi-Server Example

import asyncio
from openai import OpenAI
from mcphero import MCPToolAdapterOpenAI, MCPServerConfig

async def main():
    adapter = MCPToolAdapterOpenAI([
        MCPServerConfig(
            url="https://api.mcphero.app/mcp/weather",
            name="weather",
            headers={"Authorization": "Bearer weather-token"},
        ),
        MCPServerConfig(
            url="https://api.mcphero.app/mcp/calendar",
            name="calendar",
            headers={"Authorization": "Bearer calendar-token"},
        ),
    ])

    client = OpenAI()

    # Tools from ALL servers are fetched in parallel and merged
    tools = await adapter.get_tool_definitions()

    messages = [{"role": "user", "content": "What's the weather today and what's on my calendar?"}]
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools,
    )

    # Tool calls are automatically routed to the correct server
    if response.choices[0].message.tool_calls:
        results = await adapter.process_tool_calls(
            response.choices[0].message.tool_calls
        )
        messages.append(response.choices[0].message)
        messages.extend(results)

        final_response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages,
            tools=tools,
        )
        print(final_response.choices[0].message.content)

asyncio.run(main())

Tool Name Collisions

When multiple servers expose tools with the same name, the adapter auto-prefixes them with the server name to avoid collisions:

# Both servers have a "search" tool
adapter = MCPToolAdapterOpenAI([
    MCPServerConfig(url="https://example.com/mcp/weather", name="weather"),
    MCPServerConfig(url="https://example.com/mcp/calendar", name="calendar"),
])

tools = await adapter.get_tool_definitions()
# "search" becomes "weather__search" and "calendar__search"

You can control this behavior:

# Custom separator
adapter = MCPToolAdapterOpenAI(configs, prefix_separator="-")
# "weather-search", "calendar-search"

# Disable auto-prefixing (will raise on collision)
adapter = MCPToolAdapterOpenAI(configs, auto_prefix_on_collision=False)

# Manual prefix via config (always applied, regardless of collisions)
MCPServerConfig(url="...", tool_prefix="wx")
# "wx__search"

API Reference

MCPToolAdapter

from mcphero import MCPToolAdapter, GenericToolCall

adapter = MCPToolAdapter("https://api.mcphero.app/mcp/your-server-id")

Methods

| Method | Returns | Description |
|---|---|---|
| discover_tools() | list[MCPToolDefinition] | Discover tools with routing metadata |
| process_tool_calls(tool_calls, return_errors=True) | list[GenericToolResult] | Execute tool calls and return generic results |
| call_tool(name, arguments) | JsonRpcResponse | Call a single tool by name |
| initialize_all() | dict[str, JsonRpcResponse \| Exception] | Pre-initialize all server connections |

MCPToolAdapterOpenAI

from mcphero import MCPToolAdapterOpenAI, MCPServerConfig

# Single server (URL string)
adapter = MCPToolAdapterOpenAI("https://api.mcphero.app/mcp/your-server-id")

# Single server (config)
adapter = MCPToolAdapterOpenAI(
    MCPServerConfig(
        url="https://api.mcphero.app/mcp/your-server-id",
        headers={"Authorization": "Bearer ..."},
    )
)

# Multiple servers
adapter = MCPToolAdapterOpenAI([
    MCPServerConfig(url="https://server-a.com/mcp", name="a"),
    MCPServerConfig(url="https://server-b.com/mcp", name="b"),
])

Methods

| Method | Returns | Description |
|---|---|---|
| get_tool_definitions() | list[ChatCompletionToolParam] | Fetch tools from the MCP server(s) as OpenAI tool schemas |
| process_tool_calls(tool_calls, return_errors=True) | list[ChatCompletionToolMessageParam] | Execute tool calls and return results for the conversation |
| discover_tools() | list[MCPToolDefinition] | Low-level: discover tools with routing metadata |
| call_tool(name, arguments) | JsonRpcResponse | Low-level: call a single tool by name |
| initialize_all() | dict[str, JsonRpcResponse \| Exception] | Pre-initialize all server connections |

MCPToolAdapterGemini

from mcphero import MCPToolAdapterGemini, MCPServerConfig

# Same constructor options as OpenAI adapter
adapter = MCPToolAdapterGemini("https://api.mcphero.app/mcp/your-server-id")

Methods

| Method | Returns | Description |
|---|---|---|
| get_function_declarations() | list[types.FunctionDeclaration] | Fetch tools as Gemini FunctionDeclaration objects |
| get_tool() | types.Tool | Fetch tools as a single Gemini Tool object |
| process_function_calls(function_calls, return_errors=True) | list[types.Content] | Execute function calls and return Content objects |
| process_function_calls_as_parts(function_calls, return_errors=True) | list[types.Part] | Execute function calls and return Part objects |
| discover_tools() | list[MCPToolDefinition] | Low-level: discover tools with routing metadata |
| call_tool(name, arguments) | JsonRpcResponse | Low-level: call a single tool by name |
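
If you need to mix MCP tools with your own function declarations in a single Tool, fetch the raw declarations instead of calling get_tool(); wrapping them with types.Tool(function_declarations=...) is standard google-genai usage:

declarations = await adapter.get_function_declarations()
# Combine MCP-derived declarations with any of your own before building the Tool.
tool = types.Tool(function_declarations=declarations)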

Error Handling

All adapters handle errors gracefully. When return_errors=True (default), failed tool calls return error messages that can be sent back to the model:

# Tool call fails -> returns error in result
results = await adapter.process_tool_calls(tool_calls, return_errors=True)
# [{"role": "tool", "tool_call_id": "...", "content": "{\"error\": \"HTTP error...\"}"}]

# Skip failed calls
results = await adapter.process_tool_calls(tool_calls, return_errors=False)
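
With the OpenAI adapter, results are tool messages whose content carries a JSON error payload on failure (as shown in the comment above). A sketch of detecting those failures before deciding how to proceed; the exact payload shape is an assumption based on that comment:

import json

for msg in results:
    try:
        payload = json.loads(msg["content"])
    except (TypeError, ValueError):
        continue  # not JSON; treat as a normal successful result
    if isinstance(payload, dict) and "error" in payload:
        print(f"tool call {msg['tool_call_id']} failed:", payload["error"])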

License

MIT

Need a custom MCP server? Or just a good, no-bloat one? Visit MCPHero and create one!
