
LangChain Tool Client SDK (Python)


[!IMPORTANT]
This is a work in progress. The API is expected to change.

LangChain Tool Server

A dedicated tool server decouples the creation of specialized tools (e.g., for retrieving data from specific knowledge sources) from agent development. This separation enables different teams to contribute and manage tools independently. Agents can then be rapidly configured by specifying a prompt and a set of accessible tools. This streamlined approach simplifies authentication and authorization and accelerates the deployment of agents into production.

Users working in a local environment who need MCP can enable MCP support. Unlike MCP, this specification uses stateless connections, which makes it suitable for web deployment.

Why

  • 🌐 Stateless Web Deployment: Deploy as a web server without the need for persistent connections, allowing easy autoscaling and load balancing.
  • 📡 Simple REST Protocol: Leverage a straightforward REST API.
  • 🔐 Built-In Authentication: Out-of-the-box auth support, ensuring only authorized users can access tools.
  • 🛠️ Decoupled Tool Creation: In an enterprise setting, decouple the creation of specialized tools (like data retrieval from specific knowledge sources) from the agent configuration.
  • ⚙️ Works with LangChain tools: You can integrate existing LangChain tools with minimal effort.

Installation

pip install langchain-tool-server langchain-tool-client

Example Usage

Server

Add a server.py file to your project and define your tools with type hints.

from typing import Annotated
from starlette.requests import Request

from langchain_tool_server.tools import InjectedRequest
from langchain_tool_server import Server, Auth

app = Server()
auth = Auth()
app.add_auth(auth)


@auth.authenticate
async def authenticate(authorization: str) -> dict:
    """Authenticate incoming requests."""
    api_key = authorization

    # This is just an example. Replace it with actual authentication logic.
    api_key_to_user = {
        "1": {"permissions": ["authenticated", "group1"], "identity": "some-user"},
        "2": {"permissions": ["authenticated", "group2"], "identity": "another-user"},
    }
    if not api_key or api_key not in api_key_to_user:
        raise auth.exceptions.HTTPException(detail="Not authorized")
    return api_key_to_user[api_key]


# Define tools

@app.add_tool(permissions=["group1"])
async def echo(msg: str) -> str:
    """Echo a message."""
    return msg + "!"


# Tool that has access to the request object
@app.add_tool(permissions=["authenticated"])
async def who_am_i(request: Annotated[Request, InjectedRequest]) -> str:
    """Get the user identity."""
    return request.user.identity


# You can also expose existing LangChain tools!
from langchain_core.tools import tool


@tool()
async def say_hello() -> str:
    """Say hello."""
    return "Hello"


# Add an existing LangChain tool to the server with permissions!
app.add_tool(say_hello, permissions=["group2"])
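The README does not show how to launch the server. Assuming `app` is an ASGI application (its Starlette usage and the `/mcp/sse` mounting below suggest it is), it could presumably be served with uvicorn; the exact command is an assumption, not something stated by the project:

```shell
# Hypothetical launch command, assuming `app` in server.py is an ASGI application.
pip install uvicorn
uvicorn server:app --host 0.0.0.0 --port 8080
```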

Client

Add a client.py file to your project and define your client.

import asyncio
import sys

from langchain_tool_client import get_async_client


async def main():
    if len(sys.argv) < 2:
        print(
            "Usage: uv run client.py <url of langchain-tool-server (e.g. http://localhost:8080/)>"
        )
        sys.exit(1)

    url = sys.argv[1]
    client = get_async_client(url=url)
    # Check server status
    print(await client.ok())  # "OK"
    print(await client.info())  # Server version and other information

    # List tools
    print(await client.tools.list())  # List of tools
    # Call a tool (assumes the server defines an `add` tool; see Concepts below)
    print(await client.tools.call("add", {"x": 1, "y": 2}))  # 3

    # Get as langchain tools
    select_tools = ["echo", "add"]
    tools = await client.tools.as_langchain_tools(select_tools)
    # Async
    print(await tools[0].ainvoke({"msg": "Hello"}))  # "Hello!"
    print(await tools[1].ainvoke({"x": 1, "y": 3}))  # 4


if __name__ == "__main__":
    asyncio.run(main())

Sync Client

If you need a synchronous client, you can use the get_sync_client function.

from langchain_tool_client import get_sync_client

Using Existing LangChain Tools

If you have existing LangChain tools, you can expose them via the API by using the Server.add_tool method, which adds the tool to the server.

This also gives you the option to add authentication to an existing LangChain tool.

from langchain_tool_server import Server
from langchain_core.tools import tool

app = Server()

# Say you have some existing LangChain tool
@tool()
async def say_hello() -> str:
    """Say hello."""
    return "Hello"

# This is how you expose it via the API
app.add_tool(
    say_hello,
    # You can include permissions if you're setting up Auth
    permissions=["group2"],
)

React Agent

Here's an example of how you can use the LangChain Tool Server with a prebuilt LangGraph ReAct agent.

pip install langchain-anthropic langgraph

import os

from langchain_anthropic import ChatAnthropic
from langgraph.prebuilt import create_react_agent

from langchain_tool_client import get_sync_client

if "ANTHROPIC_API_KEY" not in os.environ:
    raise ValueError("Please set ANTHROPIC_API_KEY in the environment.")

tool_server = get_sync_client(
    url=... # URL of the tool server
    # headers=... # If you enabled auth
)
# Get tool definitions from the server
tools = tool_server.tools.as_langchain_tools()
print("Loaded tools:", tools)

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
agent = create_react_agent(model, tools=tools)

user_message = "What is the temperature in Paris?"
messages = agent.invoke({"messages": [{"role": "user", "content": user_message}]})[
    "messages"
]

for message in messages:
    message.pretty_print()

MCP SSE

You can enable support for the MCP SSE protocol by passing enable_mcp=True to the Server constructor.

[!IMPORTANT]
Auth is not supported when using MCP SSE. If you try to use auth and enable MCP, the server will raise an exception by design.

from langchain_tool_server import Server

app = Server(enable_mcp=True)


@app.add_tool()
async def echo(msg: str) -> str:
    """Echo a message."""
    return msg + "!"

This will mount an MCP SSE app at /mcp/sse. Use the MCP client to connect to the server; the URL should be the same as the server URL with /mcp/sse appended.

from mcp import ClientSession
from mcp.client.sse import sse_client

async def main() -> None:
    # Please replace [host] with the actual host
    # IMPORTANT: Add /mcp/sse to the url!
    url = "[host]/mcp/sse" 
    async with sse_client(url=url) as streams:
        async with ClientSession(streams[0], streams[1]) as session:
            await session.initialize()
            tools = await session.list_tools()
            print(tools)
            result = await session.call_tool("echo", {"msg": "Hello, world!"})
            print(result)

Concepts

Tool Definition

A tool is a function that can be called by the client. It can be a simple function or a coroutine. The function signature should have type hints. The server will use these type hints to validate the input and output of the tool.

@app.add_tool()
async def add(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y
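As an illustration of how a server might derive validation schemas from type hints (this is a sketch, not the project's actual implementation), the standard inspect module can read a function's signature:

```python
import inspect


async def add(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y


# Derive a simple input/output description from the signature's type hints.
sig = inspect.signature(add)
input_schema = {name: param.annotation.__name__ for name, param in sig.parameters.items()}
output_type = sig.return_annotation.__name__

print(input_schema)  # {'x': 'int', 'y': 'int'}
print(output_type)   # int
```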

Permissions

You can specify permissions for a tool. The client must have the required permissions to call the tool. If the client does not have the required permissions, the server will return a 403 Forbidden error.

@app.add_tool(permissions=["group1"])
async def add(x: int, y: int) -> int:
    """Add two numbers."""
    return x + y

A client must have all of the tool's required permissions, not just a subset of them, to call the tool.
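To make the all-or-nothing rule concrete, here is a minimal sketch (not the server's actual code) of a superset permission check:

```python
def has_access(user_permissions: list[str], tool_permissions: list[str]) -> bool:
    # The tool's permissions must all be held by the user;
    # overlapping on only some of them is not enough.
    return set(tool_permissions).issubset(user_permissions)


print(has_access(["authenticated", "group1"], ["group1"]))  # True
print(has_access(["group1"], ["group1", "group2"]))         # False: missing "group2"
```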

Injected Request

A tool can request access to Starlette's Request object by using the InjectedRequest type hint. This can be useful for getting information about the request, such as the user's identity.

from typing import Annotated
from langchain_tool_server import InjectedRequest
from starlette.requests import Request


@app.add_tool(permissions=["group1"])
async def who_am_i(request: Annotated[Request, InjectedRequest]) -> str:
    """Return the user's identity"""
    # The `user` attribute can be used to retrieve the user object.
    # This object corresponds to the return value of the authentication function.
    return request.user.identity

Tool Discovery

A client can list all available tools by calling the tools.list method. The server will return a list of tools with their names and descriptions.

The client will only see tools for which they have the required permissions.

from langchain_tool_client import get_async_client

async def get_tools():
    # Headers are entirely dependent on how you implement your authentication
    # (see Auth section)
    client = get_async_client(url="http://localhost:8080/", headers={"authorization": "api key"})
    tools = await client.tools.list()
    # If you need langchain tools you can use the as_langchain_tools method
    langchain_tools = await client.tools.as_langchain_tools()
    # Do something
    ...

Auth

You can add authentication to the server by defining an authentication function.

Tutorial

If you want to add realistic authentication to your server, follow the third tutorial in the Connecting an Authentication Provider series for LangGraph Platform. It's a separate project, but the tutorial walks through the same kind of authentication setup.

Auth.authenticate

The authentication function is a coroutine that can request any of the following parameters:

  • request: The HTTP request object that encapsulates all details of the incoming client request, including metadata and routing info.
  • authorization: A token or set of credentials used to authenticate the requestor and ensure secure access to the API or resource.
  • headers: A dictionary of HTTP headers providing essential metadata (e.g., content type, encoding, user-agent) associated with the request.
  • body: The payload of the request containing the data sent by the client, which may be formatted as JSON, XML, or form data.

The function should either:

  1. Return a user object if the request is authenticated.
  2. Raise an auth.exceptions.HTTPException if the request cannot be authenticated.

from langchain_tool_server import Auth

auth = Auth()

@auth.authenticate
async def authenticate(headers: dict[bytes, bytes]) -> dict:
    """Authenticate incoming requests."""
    is_authenticated = ... # Your authentication logic here
    if not is_authenticated:
        raise auth.exceptions.HTTPException(detail="Not authorized")
    
    return {
        "identity": "some-user",
        "permissions": ["authenticated", "group1"],
        # Add any other user information here
        "foo": "bar",
    } 

Awesome Servers

Would you like to contribute your server to this list? Open a PR!
