
Model context protocol connector for LangChain

Project description

Langchain Model Context Protocol Connector

Introduction

This project provides tools to easily integrate Anthropic's Model Context Protocol (MCP) with LangChain. It embeds MCP tools and resources into the system prompt, allowing LLMs to interact with them through LangChain.

Integrating MCP with LangChain expands the capabilities of LLMs by providing access to an ecosystem of community-built servers and additional resources. This means we do not need to create custom tools for each LLM; the same tools can be reused across different LLMs.

For a detailed example of how langchain_mcp_connect can be used, see this demo.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open-source standard released by Anthropic. It highlights the value of standardising tooling through open protocols: specifically, it standardises how applications interact with and provide context to LLMs. Just as HTTP standardises communication across the internet, MCP provides a standard protocol for LLMs to interact with external tools. You can find out more about MCP at https://github.com/modelcontextprotocol and https://modelcontextprotocol.io/introduction.
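Under the hood, MCP messages are JSON-RPC 2.0 requests and responses. As an illustrative sketch (method name taken from the MCP specification; the id is arbitrary), a client asking a server to list its tools might send:

```python
import json

# Illustrative JSON-RPC 2.0 request a client could send to an MCP server
# to discover its tools; see the MCP specification for the full schema.
list_tools_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
}

print(json.dumps(list_tools_request))
```

The server replies with a result containing the tool names, descriptions, and input schemas, which is exactly the information this package injects into the system prompt.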

Example usage

The langchain_mcp_connect package contains key methods for discovering the tools and resources available through the Model Context Protocol. The input-argument schemas for tools and resources are injected into the system prompt and form part of the initial prompt. Before starting, please ensure you meet the prerequisites.
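As a rough sketch of the idea (not the package's actual prompt format), embedding tool schemas in a system prompt could look like this; the tool schema below is hypothetical, whereas real schemas come from the configured MCP servers:

```python
import json

# Hypothetical tool schema of the kind an MCP server might return.
tools = {
    "git": [
        {
            "name": "git_status",
            "description": "Show the working tree status",
            "inputSchema": {
                "type": "object",
                "properties": {"repo_path": {"type": "string"}},
            },
        }
    ]
}

# Embed the schemas in the system prompt so the LLM knows which tools
# exist and what arguments each one accepts.
system_prompt = (
    "You can call the following tools. Use arguments that match each "
    "input schema:\n" + json.dumps(tools, indent=2)
)
print(system_prompt)
```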

Prerequisites

  1. Install the dependencies into your Python environment with uv:
uv add langchain-mcp-connect langchain-openai langgraph
  2. Define your tools in a claude_mcp_config.json file in the root directory. For a list of available servers, see here.
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "./"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "./"
      ]
    },
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ENV_GITHUB_PERSONAL_ACCESS_TOKEN"
      }
    }
  }
}
  3. Define environment variables. langchain_mcp_connect can inject secrets from the current environment: prefix the name of an environment variable with ENV_ in claude_mcp_config.json and it will be resolved from the current context. In the example above, ensure you have defined GITHUB_PERSONAL_ACCESS_TOKEN in your current environment with:
export GITHUB_PERSONAL_ACCESS_TOKEN="<YOUR_TOKEN_HERE>"
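Conceptually, the ENV_ prefix substitution behaves like the following simplified sketch (this is an illustration of the mechanism, not the package's actual implementation):

```python
import os


def resolve_env_placeholders(env_config: dict) -> dict:
    """Replace string values prefixed with ENV_ by the matching variable
    from the current environment (simplified sketch)."""
    resolved = {}
    for key, value in env_config.items():
        if isinstance(value, str) and value.startswith("ENV_"):
            # "ENV_GITHUB_PERSONAL_ACCESS_TOKEN" -> os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"]
            resolved[key] = os.environ[value.removeprefix("ENV_")]
        else:
            resolved[key] = value
    return resolved


os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"] = "ghp_example"  # stand-in value
print(resolve_env_placeholders(
    {"GITHUB_PERSONAL_ACCESS_TOKEN": "ENV_GITHUB_PERSONAL_ACCESS_TOKEN"}
))
```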

Usage

import argparse
import asyncio
import logging

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_mcp_connect import MspToolPrompt, call_tool
from langchain_mcp_connect.get_servers import LangChainMcp
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("LangChainMcp")


def list_tools() -> dict:
    """List all available tools.

    Calls the list-tools method on every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.fetch_all_server_tools())


def list_resources() -> dict:
    """List all available resources.

    Calls the list-resources method on every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.list_all_server_resources())


async def invoke_agent(
    model: ChatOpenAI, query: str, tools: dict, resources: dict
) -> dict:
    """Invoke the agent with the given query."""
    agent_executor = create_react_agent(model, [call_tool])

    # Create a system prompt and a human message
    system_prompt = MspToolPrompt(tools=tools, resources=resources).get_prompt()
    human_message = HumanMessage(content=query)

    # Invoke the agent
    r = await agent_executor.ainvoke(
        input=dict(messages=[system_prompt, human_message])
    )

    return r


if __name__ == "__main__":
    # Parse arguments
    parser = argparse.ArgumentParser(
        description="Langchain Model Context Protocol demo"
    )
    parser.add_argument("-q", "--query", type=str, help="Query to be executed")
    args = parser.parse_args()

    # Define the llm
    llm = ChatOpenAI(
        model="gpt-4o",
        temperature=0.0,
        max_tokens=4096,
    )

    # Invoke the agent
    response = asyncio.run(
        invoke_agent(llm, args.query, list_tools(), list_resources())
    )

    log.info(response)
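Assuming the script above is saved as demo.py in the project root and an OpenAI API key is available in the environment, the agent can be invoked from the command line (the query text is just an example):

```shell
export OPENAI_API_KEY="<YOUR_KEY_HERE>"
uv run demo.py -q "Summarise the git status of this repository"
```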



Download files

Download the file for your platform.

Source Distribution

langchain_mcp_connect-0.1.0.tar.gz (28.4 kB)


Built Distribution


langchain_mcp_connect-0.1.0-py3-none-any.whl (8.4 kB)


File details

Details for the file langchain_mcp_connect-0.1.0.tar.gz.

File hashes

Hashes for langchain_mcp_connect-0.1.0.tar.gz:

SHA256: 8582fa0a56921de299044b7219a22835b34cd0598b53691b9a44b52ba32e9032
MD5: 414b3811fc8b4648c474046bd1e62c18
BLAKE2b-256: 810678ed62d94002f13174f628c67bd281dbbf5ad463743a3472913129aed78b

File details

Details for the file langchain_mcp_connect-0.1.0-py3-none-any.whl.

File hashes

Hashes for langchain_mcp_connect-0.1.0-py3-none-any.whl:

SHA256: 42f3723a2e711809b958289d6417cd4be0051d12bbd2f5566cecf4b2307de3da
MD5: 9ae4f55e89f56485e2e0cce233376d5b
BLAKE2b-256: 45411cbba4efc6921909460310ec46da3477bfde4140746396f6e9801eabed14
