
Model Context Protocol (MCP) To LangChain Tools Conversion Utility

Project description


This package is intended to simplify the use of Model Context Protocol (MCP) server tools with LangChain / Python.

It contains a utility function convert_mcp_to_langchain_tools().
This function handles parallel initialization of multiple specified MCP servers and converts their available tools into a list of LangChain-compatible tools.

A TypeScript equivalent of this utility library is available here

Requirements

  • Python 3.11+

Installation

pip install langchain-mcp-tools

Quick Start

The convert_mcp_to_langchain_tools() utility function accepts MCP server configurations in the same format as Claude for Desktop's configuration file, restricted to the contents of its mcpServers property and expressed as a dict, e.g.:

mcp_configs = {
    'filesystem': {
        'command': 'npx',
        'args': ['-y', '@modelcontextprotocol/server-filesystem', '.']
    },
    'fetch': {
        'command': 'uvx',
        'args': ['mcp-server-fetch']
    }
}

# from langchain_mcp_tools import convert_mcp_to_langchain_tools
tools, cleanup = await convert_mcp_to_langchain_tools(
    mcp_configs
)

This utility function initializes all specified MCP servers in parallel, gathers their available MCP tools, and wraps them into LangChain tools, returning them as tools: List[BaseTool]. It also returns an async callback function (cleanup: McpServerCleanupFn) that should be invoked to close all MCP server sessions when finished.

The returned tools can be used with LangChain, e.g.:

# from langchain.chat_models import init_chat_model
llm = init_chat_model(
    model='claude-3-5-haiku-latest',
    model_provider='anthropic'
)

# from langgraph.prebuilt import create_react_agent
agent = create_react_agent(
    llm,
    tools
)
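
To tie the snippets above together, here is a minimal end-to-end sketch, including invocation of the cleanup callback. The query string, and the assumption that an Anthropic API key is available via the ANTHROPIC_API_KEY environment variable, are illustrative only:

import asyncio
from langchain.chat_models import init_chat_model
from langgraph.prebuilt import create_react_agent
from langchain_mcp_tools import convert_mcp_to_langchain_tools

async def main() -> None:
    mcp_configs = {
        'fetch': {
            'command': 'uvx',
            'args': ['mcp-server-fetch']
        }
    }
    tools, cleanup = await convert_mcp_to_langchain_tools(mcp_configs)
    try:
        llm = init_chat_model(
            model='claude-3-5-haiku-latest',
            model_provider='anthropic'
        )
        agent = create_react_agent(llm, tools)
        result = await agent.ainvoke(
            {'messages': [('user', 'Summarize https://example.com')]}
        )
        print(result['messages'][-1].content)
    finally:
        # Close all MCP server sessions, even if the agent invocation fails
        await cleanup()

asyncio.run(main())

Calling cleanup() in a finally block ensures the MCP server sessions are closed even when the agent invocation raises.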

A simple usage example that is easy to experiment with can be found here

A more realistic usage example can be found here

Limitations

Currently, only text results of tool calls are supported.

Technical Details

It was very tricky (for me) to get the parallel MCP server initialization to work, including successful final resource cleanup...

I'm new to Python, so it is very possible that my ignorance is playing a big role here...
I'll summarize the difficulties I faced below. The source code is available here.
Any comments pointing out something I am missing would be greatly appreciated! (comment here)

  1. Challenge:

    A key requirement for parallel initialization is that each server must be initialized in its own dedicated task - there's no way around this as far as I know. However, this poses a challenge when combined with asynccontextmanager.

    • Resource management for stdio_client and ClientSession seems to require relying exclusively on asynccontextmanager for cleanup, with no manual cleanup options (based on the mcp python-sdk impl as of Jan 14, 2025)
    • Initializing multiple MCP servers in parallel requires a dedicated asyncio.Task per server
    • Server cleanup can be initiated later by a task other than the one that initialized the resources
  2. Solution:

    The key insight is to keep the initialization tasks alive throughout the session lifetime, rather than letting them complete after initialization.

    By using asyncio.Events for coordination, we can:

    • Allow parallel initialization while maintaining proper context management
    • Keep each initialization task running until explicit cleanup is requested
    • Ensure cleanup occurs in the same task that created the resources
    • Provide a clean interface for the caller to manage the lifecycle

    Alternative Considered: A generator/coroutine approach using finally block for cleanup was considered but rejected because:

    • It turned out that the finally block in a generator/coroutine can be executed by a different task than the one that ran the main body of the code
    • This breaks the requirement that AsyncExitStack.aclose() must be called from the same task that created the context (a minimal repro of this task-affinity restriction is shown at the end of this section)
  3. Task Lifecycle:

    The following task lifecycle diagram illustrates how the above strategy was implemented:

    [Task starts]
      ↓
    Initialize server & convert tools
      ↓
    Set ready_event (signals tools are ready)
      ↓
    await cleanup_event.wait() (keeps task alive)
      ↓
    When cleanup_event is set:
    exit_stack.aclose() (cleanup in original task)
    

This approach indeed enables parallel initialization while maintaining proper async resource lifecycle management through context managers. However, I'm afraid I'm twisting things around too much. It usually means I'm doing something very wrong...
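
To make the strategy concrete, here is a minimal, self-contained sketch of the pattern. The actual MCP server initialization is stubbed out with a sleep, and all names are illustrative rather than the library's actual internals:

import asyncio
from contextlib import AsyncExitStack

async def server_task(name, ready_event, cleanup_event, results):
    # Each server gets a dedicated task; initialization and cleanup
    # both happen inside it
    exit_stack = AsyncExitStack()
    # In the real implementation, the stdio_client and ClientSession
    # contexts are entered here via exit_stack, and the server's tools
    # are gathered and converted into LangChain tools
    await asyncio.sleep(0.1)          # stand-in for server initialization
    results[name] = [f'{name}-tool']  # stand-in for the converted tools
    ready_event.set()                 # signal: tools are ready
    await cleanup_event.wait()        # keep this task alive until cleanup
    await exit_stack.aclose()         # cleanup in the task that created it

async def main():
    results = {}
    tasks, ready_events, cleanup_events = [], [], []
    for name in ['filesystem', 'fetch']:
        ready, cleanup = asyncio.Event(), asyncio.Event()
        ready_events.append(ready)
        cleanup_events.append(cleanup)
        tasks.append(asyncio.create_task(
            server_task(name, ready, cleanup, results)))
    # Initialization runs in parallel; wait until every server is ready
    await asyncio.gather(*(e.wait() for e in ready_events))
    print('tools:', results)
    # Request cleanup; each task closes its own exit stack, then finishes
    for e in cleanup_events:
        e.set()
    await asyncio.gather(*tasks)

asyncio.run(main())

Each per-server task owns its AsyncExitStack from creation through aclose(), which is exactly the task-affinity guarantee the context managers require.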

I think it is natural to assume that the MCP SDK was designed with parallel server initialization in mind. I'm not sure what I'm missing... (FYI, with the TypeScript MCP SDK, parallel initialization was pretty straightforward.)
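
For reference, the task-affinity restriction can be reproduced in isolation. The following is my own minimal repro (not code from the SDK); it uses anyio's CancelScope, which, as far as I can tell, the MCP Python SDK relies on internally:

import asyncio
import anyio

async def resource():
    # An async generator whose cleanup must run in the task that
    # entered the cancel scope
    with anyio.CancelScope():
        yield 'resource'

async def advance(gen):
    return await anext(gen)

async def close(gen):
    await gen.aclose()

async def main():
    gen = resource()
    # Enter the cancel scope in one task...
    await asyncio.create_task(advance(gen))
    # ...then attempt cleanup from a different task
    try:
        await asyncio.create_task(close(gen))
    except RuntimeError as e:
        # Typically: "Attempted to exit cancel scope in a different
        # task than it was entered in"
        print('cleanup failed:', e)

asyncio.run(main())

This is the same kind of error that surfaces when cleanup of the MCP server contexts runs in a different task than the one that initialized them.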


Download files

Download the file for your platform.

Source Distribution

langchain_mcp_tools-0.1.0.tar.gz (9.7 kB)

Uploaded Source

Built Distribution


langchain_mcp_tools-0.1.0-py3-none-any.whl (8.9 kB)

Uploaded Python 3

File details

Details for the file langchain_mcp_tools-0.1.0.tar.gz.

File metadata

  • Download URL: langchain_mcp_tools-0.1.0.tar.gz
  • Upload date:
  • Size: 9.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.0.1 CPython/3.12.8

File hashes

Hashes for langchain_mcp_tools-0.1.0.tar.gz
Algorithm Hash digest
SHA256 69354829360e006aa9d1f2fdb2b61851e2a27a8f149e1727af3c973b9525093b
MD5 bc1499a4cc6623a610d133dd0c85eeec
BLAKE2b-256 2789e813ada9a7e3cc6db5b7d06a8e466f0b07aeca8141d3c2028cb7b2e471bd

See more details on using hashes here.

File details

Details for the file langchain_mcp_tools-0.1.0-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_mcp_tools-0.1.0-py3-none-any.whl
Algorithm Hash digest
SHA256 aef0b786d5972cd37236baecf3b1e9a16c63697015a526af1da5feb5ec166a3d
MD5 ed07565e773437da66453e538fda191e
BLAKE2b-256 16e57517fa46eb8aa27da61a1f5764cc6315dcf92b2f9802f48d094be19ebed3

See more details on using hashes here.
