
🦜️🧰 langchain-dev-utils

A utility library for LangChain and LangGraph development.

This is the English version. For the Chinese version, please visit the Chinese Documentation.

langchain-dev-utils is a utility library focused on improving the development experience with LangChain and LangGraph. It provides a set of out-of-the-box utilities that reduce repetitive boilerplate while improving code consistency and readability. By simplifying development workflows, this library helps you prototype faster, iterate more smoothly, and build clearer, more reliable LLM-based AI applications.

🚀 Installation

pip install -U langchain-dev-utils

# Install the full-featured version (quote the extras so the shell does not glob them):
pip install -U "langchain-dev-utils[standard]"

📦 Core Features

1. Model Management

In LangChain, the init_chat_model / init_embeddings functions initialize chat-model and embedding-model instances, but they support a relatively limited set of model providers. This module provides registration functions (register_model_provider / register_embeddings_provider) so you can register any model provider and then load models with load_chat_model / load_embeddings.
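Under the hood, the "provider:model-name" string convention can be understood as a simple registry lookup. A minimal sketch of that pattern (illustrative only — the names and factory below are made up, not the library's actual implementation):

```python
# Illustrative sketch of a provider registry keyed by name.
from typing import Dict

_PROVIDERS: Dict[str, dict] = {}

def register_provider(name: str, **config) -> None:
    """Store provider configuration under a name."""
    _PROVIDERS[name] = config

def load_model(spec: str) -> dict:
    """Split 'provider:model' on the first colon and look up the provider."""
    provider, _, model = spec.partition(":")
    config = _PROVIDERS[provider]
    return {"model": model, **config}

register_provider("vllm", base_url="http://localhost:8000/v1")
print(load_model("vllm:qwen3-4b"))
# {'model': 'qwen3-4b', 'base_url': 'http://localhost:8000/v1'}
```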

1.1 Chat Model Management

It consists mainly of the following two functions:

  • register_model_provider: Register a chat model provider
  • load_chat_model: Load a chat model

Example: integrating a qwen3-4b model deployed with vLLM:

from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))

1.2 Embedding Model Management

It consists mainly of the following two functions:

  • register_embeddings_provider: Register an embedding model provider
  • load_embeddings: Load an embedding model

Example: integrating a qwen3-embedding-4b model deployed with vLLM:

from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

# Register embedding model provider
register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load embedding model
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)

For more information about model management, please refer to: Chat Model Management, Embedding Model Management

2. Message Conversion

Includes the following features:

  • Merge reasoning content into the final response
  • Stream content merging
  • Content formatting tools

2.1 Stream Content Merging

For streamed responses obtained with stream() or astream(), use merge_ai_message_chunk to merge the chunks into a final AIMessage.

from langchain_dev_utils.message_convert import merge_ai_message_chunk

chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
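Conceptually, merging a stream is a left fold that concatenates successive chunks. A simplified illustration with plain strings standing in for AIMessageChunk objects (not the library's implementation):

```python
from functools import reduce

# Simplified stand-in for a stream of AIMessageChunk objects:
# each chunk carries a piece of the generated text.
chunks = ["Hel", "lo, ", "world"]

# Merging is a left fold that concatenates successive chunks.
merged = reduce(lambda acc, chunk: acc + chunk, chunks)
print(merged)  # Hello, world
```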

2.2 Format List Content

To format a list of items into a single string, use format_sequence.

from langchain_dev_utils.message_convert import format_sequence
text = format_sequence([
    "str1",
    "str2",
    "str3"
], separator="\n", with_num=True)
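As an illustration of what such a formatter does, the hypothetical helper below numbers each item and joins them with the separator; the exact output format of format_sequence itself may differ:

```python
def format_numbered(items, separator="\n", with_num=True):
    """Join items with a separator, optionally prefixing each with its index."""
    if with_num:
        items = [f"{i}. {item}" for i, item in enumerate(items, start=1)]
    return separator.join(items)

text = format_numbered(["str1", "str2", "str3"])
print(text)
# 1. str1
# 2. str2
# 3. str3
```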

For more information about message conversion, please refer to: Message Process, Formatting List Content

3. Tool Calling

Includes the following features:

  • Check and parse tool calls
  • Add human-in-the-loop functionality

3.1 Check and Parse Tool Calls

has_tool_calling checks whether a message contains tool calls, and parse_tool_calling extracts them.

import datetime
from langchain_core.tools import tool
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling

@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it?")

if has_tool_calling(response):
    name, args = parse_tool_calling(
        response, first_tool_call_only=True
    )
    print(name, args)
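An AIMessage exposes tool calls as a list of dicts with name, args, and id keys, so checking and parsing boil down to inspecting that list. A stand-in using plain dicts to show the idea (these helpers are illustrative, not the library's functions):

```python
# Plain-dict stand-in for AIMessage.tool_calls.
response = {
    "tool_calls": [
        {"name": "get_current_time", "args": {}, "id": "call_1"},
    ]
}

def has_calls(msg) -> bool:
    """True if the message carries at least one tool call."""
    return bool(msg.get("tool_calls"))

def parse_first_call(msg):
    """Return (name, args) of the first tool call."""
    call = msg["tool_calls"][0]
    return call["name"], call["args"]

if has_calls(response):
    name, args = parse_first_call(response)
    print(name, args)  # get_current_time {}
```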

3.2 Add Human-in-the-Loop Functionality

  • human_in_the_loop: For synchronous tool functions
  • human_in_the_loop_async: For asynchronous tool functions

Both accept an optional handler parameter for customizing the interrupt payload and how the human response is handled.

from langchain_dev_utils.tool_calling import human_in_the_loop
from langchain_core.tools import tool
import datetime

@human_in_the_loop
@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())
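Conceptually, a human-in-the-loop wrapper intercepts the call, asks for approval, and only runs the tool if approved. A toy decorator sketch (the real decorators interrupt the LangGraph run rather than calling an approval function directly; with_approval below is made up for illustration):

```python
from functools import wraps

def with_approval(approve):
    """Wrap a function so it only runs if approve(name, kwargs) returns True."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if not approve(fn.__name__, kwargs):
                return "Tool call rejected by the user."
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@with_approval(approve=lambda name, args: True)  # auto-approve for the demo
def get_time() -> str:
    return "12:00"

print(get_time())  # 12:00
```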

For more information about tool calling, please refer to: Add Human-in-the-Loop Support, Tool Call Handling

4. Agent Development

Includes the following capabilities:

  • Multi-agent construction
  • Commonly used middleware components

4.1 Multi-Agent Construction

Wrapping an agent as a tool is a common implementation pattern in multi-agent systems, as elaborated in the official LangChain documentation. To support this pattern, this library provides a pre-built utility function wrap_agent_as_tool, which encapsulates an agent instance into a tool that can be invoked by other agents.

Usage Example:

import datetime

from langchain_core.tools import tool
from langchain_dev_utils.agents import create_agent, wrap_agent_as_tool


@tool
def get_current_time() -> str:
    """Get the current time."""
    return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")


agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
call_time_agent_tool = wrap_agent_as_tool(agent)
response = call_time_agent_tool.invoke(
    {"messages": [{"role": "user", "content": "What time is it now?"}]}
)
print(response)
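The pattern itself is small: an agent is a callable over message state, and wrapping it as a tool just forwards the input and returns the final answer. A schematic version with plain functions (both functions below are illustrative stand-ins):

```python
def time_agent(state: dict) -> dict:
    """Toy 'agent': appends a canned assistant reply to the message state."""
    reply = {"role": "assistant", "content": "It is 12:00."}
    return {"messages": state["messages"] + [reply]}

def wrap_as_tool(agent):
    """Expose an agent as a single-call tool: input in, final answer out."""
    def tool_fn(state: dict) -> str:
        result = agent(state)
        return result["messages"][-1]["content"]
    return tool_fn

call_time_agent = wrap_as_tool(time_agent)
print(call_time_agent({"messages": [{"role": "user", "content": "What time is it?"}]}))
# It is 12:00.
```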

4.2 Middleware

Provides several commonly used middleware components. Below are examples using ToolCallRepairMiddleware and PlanMiddleware.

  • ToolCallRepairMiddleware automatically repairs malformed tool calls found in the model's invalid_tool_calls output.
  • PlanMiddleware adds task-planning capabilities to agents.

from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.agents.middleware import (
    ToolCallRepairMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[
        ToolCallRepairMiddleware(),
        PlanMiddleware(use_read_plan_tool=False)
    ]
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan for visiting New York."}]})
print(response)
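As an illustration of the kind of repair such a middleware performs, the hypothetical function below parses a tool-call payload and fixes one common malformation (single quotes instead of JSON double quotes); it is not what ToolCallRepairMiddleware actually does internally:

```python
import json

def repair_tool_call(raw: str) -> dict:
    """Parse a tool-call payload, retrying with a simple fix if it is
    malformed (single quotes instead of JSON double quotes).
    Illustrative only."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        return json.loads(raw.replace("'", '"'))

print(repair_tool_call("{'name': 'get_current_time', 'args': {}}"))
# {'name': 'get_current_time', 'args': {}}
```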

For more details on agent development and a complete list of built-in middleware, please refer to:
Multi-Agent Construction,
Middleware

5. State Graph Orchestration

Includes the following capabilities:

  • Sequential graph orchestration
  • Parallel graph orchestration

5.1 Sequential Graph Orchestration

Use create_sequential_pipeline to orchestrate multiple subgraphs in sequential order:

from langchain.agents import AgentState
from langchain_core.messages import HumanMessage
from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.pipeline import create_sequential_pipeline
from langchain_dev_utils.chat_models import register_model_provider

register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# get_current_weather and get_current_user are assumed to be defined as
# @tool functions, like get_current_time above.
# Build a sequential pipeline (all subgraphs executed in order)
graph = create_sequential_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)

response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
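Sequential orchestration is essentially a fold: each subgraph receives the state produced by the previous one. Schematically, with plain functions standing in for subgraphs:

```python
from functools import reduce

def make_agent(name):
    """Toy subgraph: appends its own message to the shared state."""
    def run(state):
        return {"messages": state["messages"] + [f"{name} ran"]}
    return run

sub_graphs = [make_agent("time_agent"), make_agent("weather_agent"), make_agent("user_agent")]

# Sequential pipeline: thread state through each subgraph in order.
final = reduce(lambda state, graph: graph(state), sub_graphs, {"messages": ["Hello"]})
print(final["messages"])
# ['Hello', 'time_agent ran', 'weather_agent ran', 'user_agent ran']
```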

5.2 Parallel Graph Orchestration

Use create_parallel_pipeline to orchestrate multiple subgraphs in parallel:

from langchain_dev_utils.pipeline import create_parallel_pipeline

# Build a parallel pipeline (all subgraphs executed concurrently)
graph = create_parallel_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)

response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
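Parallel orchestration fans the same input state out to every subgraph and merges the results. A sketch with concurrent.futures (the library merges via the graph's state reducers, not the simple list concatenation shown here):

```python
from concurrent.futures import ThreadPoolExecutor

def make_agent(name):
    """Toy subgraph: produces its own messages from the shared input state."""
    def run(state):
        return [f"{name}: handled {state['messages'][-1]}"]
    return run

sub_graphs = [make_agent("time_agent"), make_agent("weather_agent"), make_agent("user_agent")]
state = {"messages": ["Hello"]}

# Fan the same state out to all subgraphs concurrently (order is preserved).
with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda g: g(state), sub_graphs))

# Merge the per-subgraph message lists back into the shared state.
merged = state["messages"] + [msg for result in results for msg in result]
print(merged)
# ['Hello', 'time_agent: handled Hello', 'weather_agent: handled Hello', 'user_agent: handled Hello']
```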

For more information about state graph orchestration, please refer to: State Graph Orchestration

💬 Join the Community

  • GitHub Repository — Browse source code, submit Pull Requests
  • Issue Tracker — Report bugs or suggest improvements
  • We welcome contributions in all forms — whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!

Project details

Release history

This version: 1.2.9
