DeepMCPAgent: LangChain/LangGraph agents powered by MCP tools over FastMCP.

🤖 DeepMCPAgent

Model-agnostic LangChain/LangGraph agents powered entirely by MCP tools over HTTP/SSE.


Discover MCP tools dynamically. Bring your own LangChain model. Build production-ready agents, fast.

📚 Documentation • 🛠 Issues


✨ Why DeepMCPAgent?

  • 🔌 Zero manual tool wiring: tools are discovered dynamically from MCP servers (HTTP/SSE)
  • 🌐 External APIs welcome: connect to remote MCP servers (with headers/auth)
  • 🧠 Model-agnostic: pass any LangChain chat model instance (OpenAI, Anthropic, Ollama, Groq, local, …)
  • ⚡ DeepAgents (optional): if installed, you get a deep agent loop; otherwise a robust LangGraph ReAct fallback
  • 🛠️ Typed tool args: JSON-Schema → Pydantic → LangChain BaseTool (typed, validated calls)
  • 🧪 Quality bar: mypy (strict), ruff, pytest, GitHub Actions, docs

MCP first. Agents shouldn’t hardcode tools; they should discover and call them. DeepMCPAgent builds that bridge.


🚀 Installation

Install from PyPI:

pip install "deepmcpagent[deep]"

This installs DeepMCPAgent with DeepAgents support (recommended) for the best agent loop. Other optional extras:

  • dev → linting, typing, tests
  • docs → MkDocs + Material + mkdocstrings
  • examples → dependencies used by bundled examples

# install with deepagents + dev tooling
pip install "deepmcpagent[deep,dev]"

โš ๏ธ If youโ€™re using zsh, remember to quote extras:

pip install "deepmcpagent[deep,dev]"

🚀 Quickstart

1) Start a sample MCP server (HTTP)

python examples/servers/math_server.py

This serves an MCP endpoint at: http://127.0.0.1:8000/mcp
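The bundled server's source isn't reproduced here, but a minimal FastMCP server along these lines would expose comparable tools. This is a sketch assuming the fastmcp package; tool names and the exact `run()` arguments are illustrative and may differ from `examples/servers/math_server.py`:

```python
# Illustrative minimal MCP math server using fastmcp
# (the bundled examples/servers/math_server.py may differ in detail).
from fastmcp import FastMCP

mcp = FastMCP("math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b

if __name__ == "__main__":
    # Serve streamable HTTP at http://127.0.0.1:8000/mcp
    mcp.run(transport="http", host="127.0.0.1", port=8000)
```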

2) Run the example agent (with fancy console output)

python examples/use_agent.py

What you’ll see: rich console output showing the discovered tools, each tool call and its result, and the final answer.


๐Ÿง‘โ€๐Ÿ’ป Bring-Your-Own Model (BYOM)

DeepMCPAgent lets you pass any LangChain chat model instance (or a provider id string if you prefer init_chat_model):

import asyncio
from deepmcpagent import HTTPServerSpec, build_deep_agent

# choose your model (any LangChain chat model instance works):
# from langchain_openai import ChatOpenAI
# model = ChatOpenAI(model="gpt-4.1")

# from langchain_anthropic import ChatAnthropic
# model = ChatAnthropic(model="claude-3-5-sonnet-latest")

# from langchain_community.chat_models import ChatOllama
# model = ChatOllama(model="llama3.1")

# ...or a provider-id string, resolved via init_chat_model():
model = "openai:gpt-4.1"

async def main():
    servers = {
        "math": HTTPServerSpec(
            url="http://127.0.0.1:8000/mcp",
            transport="http",    # or "sse"
            # headers={"Authorization": "Bearer <token>"},
        ),
    }

    graph, _ = await build_deep_agent(
        servers=servers,
        model=model,
        instructions="Use MCP tools precisely.",
    )

    out = await graph.ainvoke(
        {"messages": [{"role": "user", "content": "add 21 and 21 with tools"}]}
    )
    print(out)

asyncio.run(main())

Tip: If you pass a string like "openai:gpt-4.1", we’ll call LangChain’s init_chat_model() for you (and it will read env vars like OPENAI_API_KEY). Passing a model instance gives you full control.


๐Ÿ–ฅ๏ธ CLI (no Python required)

# list tools from one or more HTTP servers
deepmcpagent list-tools \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"

# interactive agent chat (HTTP/SSE servers only)
deepmcpagent run \
  --http name=math url=http://127.0.0.1:8000/mcp transport=http \
  --model-id "openai:gpt-4.1"

The CLI accepts repeated --http blocks; add header.X=Y pairs for auth:

--http name=ext url=https://api.example.com/mcp transport=http header.Authorization="Bearer TOKEN"

🧩 Architecture (at a glance)

┌────────────────┐        list_tools / call_tool        ┌──────────────────────────┐
│ LangChain/LLM  │ ───────────────────────────────────▶ │ FastMCP Client (HTTP/SSE)│
│  (your model)  │                                      └────────────┬─────────────┘
└───────┬────────┘  tools (LC BaseTool)                              │
        │                                                            │
        ▼                                                            ▼
  LangGraph Agent                                    One or many MCP servers (remote APIs)
  (or DeepAgents)                                    e.g., math, github, search, ...

  • HTTPServerSpec(...) → FastMCP client (single client, multiple servers)
  • Tool discovery → JSON-Schema → Pydantic → LangChain BaseTool
  • Agent loop → DeepAgents (if installed) or LangGraph ReAct fallback
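To make the server wiring concrete, here is a hand-rolled sketch of the kind of mcpServers mapping that servers_to_mcp_config() plausibly builds for the FastMCP client. The helper name and exact field names below are assumptions for illustration, not the library's real output:

```python
# Hypothetical stand-in for servers_to_mcp_config(): turn named server
# specs into an "mcpServers" mapping that a multi-server MCP client
# could consume. Field names are illustrative.
def specs_to_mcp_config(servers: dict) -> dict:
    out = {}
    for name, spec in servers.items():
        entry = {"url": spec["url"], "transport": spec.get("transport", "http")}
        if spec.get("headers"):
            entry["headers"] = spec["headers"]
        out[name] = entry
    return {"mcpServers": out}

cfg = specs_to_mcp_config({
    "math": {"url": "http://127.0.0.1:8000/mcp", "transport": "http"},
    "ext": {"url": "https://api.example.com/mcp",
            "headers": {"Authorization": "Bearer TOKEN"}},
})
print(cfg["mcpServers"]["math"]["url"])  # -> http://127.0.0.1:8000/mcp
```

One client holding this whole mapping is what lets a single agent call tools from several servers at once.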

Full Architecture & Agent Flow

1) High-level Architecture (modules & data flow)

flowchart LR
    %% Groupings
    subgraph User["👤 User / App"]
      Q["Prompt / Task"]
      CLI["CLI (Typer)"]
      PY["Python API"]
    end

    subgraph Agent["🤖 Agent Runtime"]
      DIR["build_deep_agent()"]
      PROMPT["prompt.py\n(DEFAULT_SYSTEM_PROMPT)"]
      subgraph AGRT["Agent Graph"]
        DA["DeepAgents loop\n(if installed)"]
        REACT["LangGraph ReAct\n(fallback)"]
      end
      LLM["LangChain Model\n(instance or init_chat_model(provider-id))"]
      TOOLS["LangChain Tools\n(BaseTool[])"]
    end

    subgraph MCP["🧰 Tooling Layer (MCP)"]
      LOADER["MCPToolLoader\n(JSON-Schema ➜ Pydantic ➜ BaseTool)"]
      TOOLWRAP["_FastMCPTool\n(async _arun → client.call_tool)"]
    end

    subgraph FMCP["🌐 FastMCP Client"]
      CFG["servers_to_mcp_config()\n(mcpServers dict)"]
      MULTI["FastMCPMulti\n(fastmcp.Client)"]
    end

    subgraph SRV["🛠 MCP Servers (HTTP/SSE)"]
      S1["Server A\n(e.g., math)"]
      S2["Server B\n(e.g., search)"]
      S3["Server C\n(e.g., github)"]
    end

    %% Edges
    Q -->|query| CLI
    Q -->|query| PY
    CLI --> DIR
    PY --> DIR

    DIR --> PROMPT
    DIR --> LLM
    DIR --> LOADER
    DIR --> AGRT

    LOADER --> MULTI
    CFG --> MULTI
    MULTI -->|list_tools| SRV
    LOADER --> TOOLS
    TOOLS --> AGRT

    AGRT <-->|messages| LLM
    AGRT -->|tool calls| TOOLWRAP
    TOOLWRAP --> MULTI
    MULTI -->|call_tool| SRV

    SRV -->|tool result| MULTI --> TOOLWRAP --> AGRT -->|final answer| CLI
    AGRT -->|final answer| PY

2) Runtime Sequence (end-to-end tool call)

sequenceDiagram
    autonumber
    participant U as User
    participant CLI as CLI/Python
    participant Builder as build_deep_agent()
    participant Loader as MCPToolLoader
    participant Graph as Agent Graph (DeepAgents or ReAct)
    participant LLM as LangChain Model
    participant Tool as _FastMCPTool
    participant FMCP as FastMCP Client
    participant S as MCP Server (HTTP/SSE)

    U->>CLI: Enter prompt
    CLI->>Builder: build_deep_agent(servers, model, instructions?)
    Builder->>Loader: get_all_tools()
    Loader->>FMCP: list_tools()
    FMCP->>S: HTTP(S)/SSE list_tools
    S-->>FMCP: tools + JSON-Schema
    FMCP-->>Loader: tool specs
    Loader-->>Builder: BaseTool[]
    Builder-->>CLI: (Graph, Loader)

    U->>Graph: ainvoke({messages:[user prompt]})
    Graph->>LLM: Reason over system + messages + tool descriptions
    LLM-->>Graph: Tool call (e.g., add(a=3,b=5))
    Graph->>Tool: _arun(a=3,b=5)
    Tool->>FMCP: call_tool("add", {a:3,b:5})
    FMCP->>S: POST /mcp tools.call("add", {...})
    S-->>FMCP: result { data: 8 }
    FMCP-->>Tool: result
    Tool-->>Graph: ToolMessage(content=8)

    Graph->>LLM: Continue with observations
    LLM-->>Graph: Final response "(3 + 5) * 7 = 56"
    Graph-->>CLI: messages (incl. final LLM answer)

3) Agent Control Loop (planning & acting)

stateDiagram-v2
    [*] --> AcquireTools
    AcquireTools: Discover MCP tools via FastMCP\n(JSON-Schema ➜ Pydantic ➜ BaseTool)
    AcquireTools --> Plan

    Plan: LLM plans next step\n(uses system prompt + tool descriptions)
    Plan --> CallTool: if tool needed
    Plan --> Respond: if direct answer sufficient

    CallTool: _FastMCPTool._arun\n→ client.call_tool(name, args)
    CallTool --> Observe: receive tool result
    Observe: Parse result payload (data/text/content)
    Observe --> Decide

    Decide: More tools needed?
    Decide --> Plan: yes
    Decide --> Respond: no

    Respond: LLM crafts final message
    Respond --> [*]

4) Code Structure (types & relationships)

classDiagram
    class StdioServerSpec {
      +command: str
      +args: List[str]
      +env: Dict[str,str]
      +cwd: Optional[str]
      +keep_alive: bool
    }

    class HTTPServerSpec {
      +url: str
      +transport: Literal["http","streamable-http","sse"]
      +headers: Dict[str,str]
      +auth: Optional[str]
    }

    class FastMCPMulti {
      -_client: fastmcp.Client
      +client(): Client
    }

    class MCPToolLoader {
      -_multi: FastMCPMulti
      +get_all_tools(): List[BaseTool]
      +list_tool_info(): List[ToolInfo]
    }

    class _FastMCPTool {
      +name: str
      +description: str
      +args_schema: Type[BaseModel]
      -_tool_name: str
      -_client: Any
      +_arun(**kwargs) async
    }

    class ToolInfo {
      +server_guess: str
      +name: str
      +description: str
      +input_schema: Dict[str,Any]
    }

    class build_deep_agent {
      +servers: Mapping[str,ServerSpec]
      +model: ModelLike
      +instructions?: str
      +returns: (graph, loader)
    }

    ServerSpec <|-- StdioServerSpec
    ServerSpec <|-- HTTPServerSpec
    FastMCPMulti o--> ServerSpec : uses servers_to_mcp_config()
    MCPToolLoader o--> FastMCPMulti
    MCPToolLoader --> _FastMCPTool : creates
    _FastMCPTool ..> BaseTool
    build_deep_agent --> MCPToolLoader : discovery
    build_deep_agent --> _FastMCPTool : tools for agent

5) Deployment / Integration View (clusters & boundaries)

flowchart TD
    subgraph App["Your App / Service"]
      UI["CLI / API / Notebook"]
      Code["deepmcpagent (Python pkg)\n- config.py\n- clients.py\n- tools.py\n- agent.py\n- prompt.py"]
      UI --> Code
    end

    subgraph Cloud["LLM Provider(s)"]
      P1["OpenAI / Anthropic / Groq / Ollama..."]
    end

    subgraph Net["Network"]
      direction LR
      FMCP["FastMCP Client\n(HTTP/SSE)"]
      FMCP ---|mcpServers| Code
    end

    subgraph Servers["MCP Servers"]
      direction LR
      A["Service A (HTTP)\n/path: /mcp"]
      B["Service B (SSE)\n/path: /mcp"]
      C["Service C (HTTP)\n/path: /mcp"]
    end

    Code -->|init_chat_model or model instance| P1
    Code --> FMCP
    FMCP --> A
    FMCP --> B
    FMCP --> C

6) Error Handling & Observability (tool errors & retries)

flowchart TD
    Start([Tool Call]) --> Try{"client.call_tool(name,args)"}
    Try -- ok --> Parse["Extract data/text/content/result"]
    Parse --> Return[Return ToolMessage to Agent]
    Try -- raises --> Err["Tool/Transport Error"]
    Err --> Wrap["ToolMessage(status=error, content=trace)"]
    Wrap --> Agent["Agent observes error\nand may retry / alternate tool"]

These diagrams reflect the current implementation:

  • Model is required (string provider-id or LangChain model instance).
  • MCP tools only, discovered at runtime via FastMCP (HTTP/SSE).
  • Agent loop prefers DeepAgents if installed; otherwise LangGraph ReAct.
  • Tools are typed via JSON-Schema ➜ Pydantic ➜ LangChain BaseTool.
  • Fancy console output shows discovered tools, calls, results, and final answer.
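The JSON-Schema ➜ Pydantic step can be sketched in a few lines. This is a simplified illustration assuming pydantic v2; the real MCPToolLoader covers far more of the JSON-Schema spec (nested objects, arrays, defaults, descriptions):

```python
# Simplified sketch: build a Pydantic args model from a tool's JSON-Schema,
# so tool-call arguments are validated before they reach the MCP server.
from pydantic import create_model

_JSON_TO_PY = {"integer": int, "number": float, "string": str, "boolean": bool}

def args_model_from_schema(name: str, schema: dict):
    fields = {}
    required = set(schema.get("required", []))
    for prop, spec in schema.get("properties", {}).items():
        py_type = _JSON_TO_PY.get(spec.get("type"), str)
        # required fields get Ellipsis (no default), optional ones default to None
        fields[prop] = (py_type, ... if prop in required else None)
    return create_model(name, **fields)

AddArgs = args_model_from_schema("AddArgs", {
    "type": "object",
    "properties": {"a": {"type": "integer"}, "b": {"type": "integer"}},
    "required": ["a", "b"],
})

args = AddArgs(a=3, b="5")   # "5" is coerced to int by pydantic's lax mode
print(args.a + args.b)       # -> 8
```

The resulting model is what gets attached to each generated BaseTool as its args_schema, so malformed LLM tool calls fail fast with a validation error instead of a confusing server-side failure.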

🧪 Development

# install dev tooling
pip install -e ".[dev]"

# lint & type-check
ruff check .
mypy

# run tests
pytest -q

๐Ÿ›ก๏ธ Security & Privacy

  • Your keys, your model: we don’t enforce a provider; pass any LangChain model.
  • Use HTTP headers in HTTPServerSpec to deliver bearer/OAuth tokens to servers.

🧯 Troubleshooting

  • PEP 668 “externally managed environment” (macOS + Homebrew): use a virtualenv:

    python3 -m venv .venv
    source .venv/bin/activate

  • 404 Not Found when connecting: ensure your server exposes a path (e.g., /mcp) and your client URL includes it.

  • Tool calls failing / attribute errors: make sure you’re on the latest version; the tool wrapper stores client state with PrivateAttr.

  • High token counts: that’s normal with tool-calling models; use smaller models for development.
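For context on the PrivateAttr note above: pydantic validates and serializes declared fields, so a live client handle on a pydantic-based tool class is best kept as a private attribute. A minimal sketch of the pattern, assuming pydantic v2 (the class name here is illustrative, not the actual wrapper):

```python
# Sketch: keep runtime-only state (an MCP client handle) out of
# pydantic's validation and serialization via PrivateAttr.
from typing import Any
from pydantic import BaseModel, PrivateAttr

class FastMCPToolSketch(BaseModel):
    # Declared fields are validated and appear in dumps...
    name: str
    description: str = ""
    # ...while private attrs hold unvalidated runtime state.
    _client: Any = PrivateAttr(default=None)

tool = FastMCPToolSketch(name="add")
tool._client = object()   # allowed: bypasses field validation entirely
print(tool.model_dump())  # private attrs never leak into dumps
```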


📄 License

Apache-2.0. See LICENSE.


โญ Stars

Star History Chart

๐Ÿ™ Acknowledgments
