
Model context protocol connector for LangChain

Project description

Langchain Model Context Protocol Connector

Introduction

This project provides tools to easily integrate Anthropic's Model Context Protocol (MCP) with LangChain. It embeds MCP tools and resources into the system prompt and allows LLMs to interact with them through LangChain.

Integrating MCP with LangChain expands the capabilities of LLMs by providing access to an ecosystem of community-built servers and additional resources. This means we do not need to create custom tools for each LLM; the same tools can be reused across different LLMs.

For a detailed example of how langchain_mcp_connect can be used, see this demo.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open-source standard released by Anthropic. It highlights the importance of tooling standardisation through open protocols: specifically, it standardises how applications interact with and provide context to LLMs. Just as HTTP standardises how we communicate across the internet, MCP provides a standard protocol for LLMs to interact with external tools. You can find out more about MCP at https://github.com/modelcontextprotocol and https://modelcontextprotocol.io/introduction.

Example usage

The langchain_mcp_connect package contains key methods for discovering the tools and resources available through the Model Context Protocol. The input-argument schemas for tools and resources are injected into the system prompt and form part of the initial prompt. Before starting, please ensure you meet the prerequisites.

Prerequisites

  1. Install the Python dependencies with uv:
uv add langchain-mcp-connect langchain-openai langgraph
  2. Define your tools in a claude_mcp_config.json file in the root directory. For a list of available tools, see here.
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "./"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "./"
      ]
    },
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ENV_GITHUB_PERSONAL_ACCESS_TOKEN"
      }
    }
  }
}
  3. Define environment variables. langchain_mcp_connect is able to inject secrets from the current environment. To do so, prefix the name of your environment variable with ENV_ in claude_mcp_config.json to inject environment variables into the current context. In the example above, ensure you have defined GITHUB_PERSONAL_ACCESS_TOKEN in your current environment with:
export GITHUB_PERSONAL_ACCESS_TOKEN="<YOUR_TOKEN_HERE>"
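Conceptually, the ENV_ prefix substitution can be sketched as follows. This is a simplified illustration of the idea, not the library's actual implementation, and `resolve_env_placeholders` is a hypothetical helper name:

```python
import os


def resolve_env_placeholders(env_config: dict) -> dict:
    """Replace string values prefixed with 'ENV_' with values
    looked up in the current process environment."""
    resolved = {}
    for key, value in env_config.items():
        if isinstance(value, str) and value.startswith("ENV_"):
            # Strip the ENV_ prefix and read the real secret from the environment
            resolved[key] = os.environ[value.removeprefix("ENV_")]
        else:
            resolved[key] = value
    return resolved


os.environ["GITHUB_PERSONAL_ACCESS_TOKEN"] = "dummy-token"
print(resolve_env_placeholders(
    {"GITHUB_PERSONAL_ACCESS_TOKEN": "ENV_GITHUB_PERSONAL_ACCESS_TOKEN"}
))
```

Keeping only the ENV_ placeholder in claude_mcp_config.json means the file can be committed to version control without leaking the token itself.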

Usage

import argparse
import asyncio
import logging

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_mcp_connect import MspToolPrompt, call_tool
from langchain_mcp_connect.get_servers import LangChainMcp
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("LangChainMcp")


def list_tools() -> dict:
    """List all available tools.

    Calls the list-tools method on every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.fetch_all_server_tools())


def list_resources() -> dict:
    """List all available resources.

    Calls the list-resources method on every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.list_all_server_resources())


async def invoke_agent(
    model: ChatOpenAI, query: str, tools: dict, resources: dict
) -> dict:
    """Invoke the agent with the given query."""
    agent_executor = create_react_agent(model, [call_tool])

    # Create a system prompt and a human message
    system_prompt = MspToolPrompt(tools=tools, resources=resources).get_prompt()
    human_message = HumanMessage(content=query)

    # Invoke the agent
    r = await agent_executor.ainvoke(
        input=dict(messages=[system_prompt, human_message])
    )

    return r


if __name__ == "__main__":
    # Parse arguments
    parser = argparse.ArgumentParser(
        description="Langchain Model Context Protocol demo"
    )
    parser.add_argument("-q", "--query", type=str, help="Query to be executed")
    args = parser.parse_args()

    # Define the llm
    llm = ChatOpenAI(
        model="gpt-4o",
        max_tokens=4096,
        temperature=0.0,
    )

    # Invoke the agent
    response = asyncio.run(
        invoke_agent(llm, args.query, list_tools(), list_resources())
    )

    log.info(response)
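Assuming the script above is saved as agent_demo.py (a hypothetical filename), it can be run from the command line as follows. A valid OpenAI API key must be available in the environment, alongside any secrets required by the configured MCP servers:

```shell
# Credentials for the LLM and the GitHub MCP server
export OPENAI_API_KEY="<YOUR_OPENAI_KEY>"
export GITHUB_PERSONAL_ACCESS_TOKEN="<YOUR_TOKEN_HERE>"

# Run the agent with a natural-language query
python agent_demo.py --query "Summarise the most recent commits in this repository"
```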
