
Langchain Model Context Protocol Connector

Introduction

This project provides tools to easily integrate Anthropic's Model Context Protocol (MCP) with LangChain. It embeds MCP tools and resources into the system prompt and allows LLMs to interact with them through LangChain.

MCP integration with LangChain expands the capabilities of LLMs by providing access to an ecosystem of community-built servers and additional resources. This means we do not need to create custom tools for each LLM; the same tools can be reused across different LLMs.

For a detailed example of how langchain_mcp_connect can be used, see this demo.

What is the Model Context Protocol (MCP)?

The Model Context Protocol (MCP) is an open-source standard released by Anthropic. It highlights the importance of tooling standardisation through open protocols: specifically, it standardises how applications interact with and provide context to LLMs. Just as HTTP standardises how we communicate across the internet, MCP provides a standard protocol for LLMs to interact with external tools. You can find out more about MCP at https://github.com/modelcontextprotocol and https://modelcontextprotocol.io/introduction.

Example usage

langchain_mcp_connect contains key methods to determine the tools and resources available through the Model Context Protocol. The schemas of the input arguments for tools and resources are injected into the system prompt and form part of the initial prompt. Before starting, please ensure you meet the prerequisites.
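To make the schema injection concrete: each MCP server advertises its tools together with a JSON Schema for their arguments, and it is entries of this shape that end up serialised into the system prompt. The example below is illustrative, based on the MCP specification's `tools/list` result format rather than this library's exact output:

```python
# Illustrative shape of a single tool entry advertised by an MCP server
# (here, a hypothetical git-status tool from mcp-server-git).
# langchain_mcp_connect injects entries like this into the system prompt
# so the LLM knows which arguments each tool accepts.
git_status_tool = {
    "name": "git_status",
    "description": "Shows the working tree status",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo_path": {"type": "string"},
        },
        "required": ["repo_path"],
    },
}
```

Because the argument schema travels with the tool, the same entry works unchanged with any LLM that can read the prompt.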

Prerequisites

  1. Install the Python dependencies with uv:
uv add langchain-mcp-connect langchain-openai langgraph
  2. Define your tools in a claude_mcp_config.json file in the root directory. For a list of available tools, see here.
{
  "mcpServers": {
    "git": {
      "command": "uvx",
      "args": ["mcp-server-git", "--repository", "./"]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "./"
      ]
    },
    "github": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-github"
      ],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ENV_GITHUB_PERSONAL_ACCESS_TOKEN"
      }
    }
  }
}
  3. Define environment variables. langchain_mcp_connect can inject secrets from the current environment: prefix the name of your environment variable with ENV_ in claude_mcp_config.json to have its value injected into the current context. In the example above, ensure you have defined GITHUB_PERSONAL_ACCESS_TOKEN in your current environment with:
export GITHUB_PERSONAL_ACCESS_TOKEN="<YOUR_TOKEN_HERE>"
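The ENV_ convention in step 3 can be pictured as a simple lookup against the current environment. The sketch below is illustrative only; `resolve_env_placeholders` is a hypothetical helper, not part of the library's public API:

```python
import os


def resolve_env_placeholders(env: dict) -> dict:
    """Replace any value of the form ENV_<NAME> with the value of the
    <NAME> environment variable from the current shell session.

    Illustrative sketch of the ENV_ convention described above; the
    library's actual implementation may differ.
    """
    resolved = {}
    for key, value in env.items():
        if isinstance(value, str) and value.startswith("ENV_"):
            # Strip the ENV_ prefix and read the real secret from the
            # environment; raises KeyError if the variable is not set.
            resolved[key] = os.environ[value[len("ENV_"):]]
        else:
            resolved[key] = value
    return resolved
```

Under this scheme the config file never contains the secret itself, only the name of the variable that holds it.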

Usage

import argparse
import asyncio
import logging

from dotenv import load_dotenv
from langchain_core.messages import HumanMessage
from langchain_mcp_connect import MspToolPrompt, call_tool
from langchain_mcp_connect.get_servers import LangChainMcp
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

load_dotenv()

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("LangChainMcp")


def list_tools() -> dict:
    """List all available tools.

    Calls the list-tools method for every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.fetch_all_server_tools())


def list_resources() -> dict:
    """List all available resources.

    Calls the list-resources method for every configured MCP server.
    """
    mcp = LangChainMcp()
    return asyncio.run(mcp.list_all_server_resources())


async def invoke_agent(
    model: ChatOpenAI, query: str, tools: dict, resources: dict
) -> dict:
    """Invoke the agent with the given query."""
    agent_executor = create_react_agent(model, [call_tool])

    # Create a system prompt and a human message
    system_prompt = MspToolPrompt(tools=tools, resources=resources).get_prompt()
    human_message = HumanMessage(content=query)

    # Invoke the agent
    r = await agent_executor.ainvoke(
        input=dict(messages=[system_prompt, human_message])
    )

    return r


if __name__ == "__main__":
    # Parse arguments
    parser = argparse.ArgumentParser(
        description="Langchain Model Context Protocol demo"
    )
    parser.add_argument(
        "-q", "--query", type=str, required=True, help="Query to be executed"
    )
    args = parser.parse_args()

    # Define the llm
    llm = ChatOpenAI(
        model="gpt-4o",
        max_tokens=4096,
        temperature=0.0,
    )

    # Invoke the agent
    response = asyncio.run(
        invoke_agent(llm, args.query, list_tools(), list_resources())
    )

    log.info(response)
