
🦜️🧰 langchain-dev-utils

A utility library for LangChain and LangGraph development.


This is the English version. For the Chinese version, please see the Chinese documentation (中文文档).

langchain-dev-utils is a utility library focused on improving the development experience with LangChain and LangGraph. It provides ready-to-use utility functions that reduce repetitive code and improve code consistency and readability. By simplifying the development workflow, it helps you build prototypes faster, iterate more smoothly, and create clearer, more reliable AI applications built on large language models.

🚀 Installation

pip install -U langchain-dev-utils

# Or install the full-featured version (quoted so the brackets survive your shell):
pip install -U "langchain-dev-utils[standard]"

📦 Core Features

1. Model Management

In LangChain, the init_chat_model/init_embeddings functions initialize chat model and embedding model instances, but they support only a limited set of model providers. This module provides registration functions (register_model_provider/register_embeddings_provider) that let you register any model provider, after which you can load models with load_chat_model/load_embeddings.

1.1 Chat Model Management

There are two main functions:

  • register_model_provider: Register a chat model provider
  • load_chat_model: Load a chat model

Suppose you want to use a qwen3-4b model served with vLLM; reference code:

from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))
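
If load_chat_model forwards extra keyword arguments to the underlying model class the way init_chat_model does (an assumption worth checking against the library's API reference), per-model settings can be supplied at load time:

# Assumption: extra kwargs such as temperature are passed through
# to the underlying chat model; verify against the documentation.
model = load_chat_model("vllm:qwen3-4b", temperature=0.2)
print(model.invoke("Hello"))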

1.2 Embedding Model Management

There are two main functions:

  • register_embeddings_provider: Register an embedding model provider
  • load_embeddings: Load an embedding model

Suppose you want to use a qwen3-embedding-4b model served with vLLM; reference code:

from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

# Register embedding model provider
register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load embedding model
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)
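
The returned object implements LangChain's standard Embeddings interface, so batch embedding should work as well; a minimal sketch:

# embed_documents batches multiple texts in one call and returns one
# vector per input text (standard LangChain Embeddings API).
vectors = embeddings.embed_documents(["first document", "second document"])
print(len(vectors), len(vectors[0]))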

2. Message Conversion

Includes the following features:

  • Merge chain-of-thought content into final responses
  • Stream content merging
  • Content formatting tools

2.1 Stream Content Merging

For streaming chunks obtained from stream() or astream(), use merge_ai_message_chunk to merge them into a single AIMessage.

from langchain_dev_utils.message_convert import merge_ai_message_chunk
chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
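
The same helper applies to chunks collected asynchronously with astream(); a minimal sketch:

import asyncio

from langchain_dev_utils.message_convert import merge_ai_message_chunk

async def collect_and_merge():
    # Gather the async stream into a list, then merge as above.
    chunks = [chunk async for chunk in model.astream("Hello")]
    return merge_ai_message_chunk(chunks)

merged = asyncio.run(collect_and_merge())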

2.2 Format List Content

For a list of strings, you can use format_sequence to join it into a single formatted string.

from langchain_dev_utils.message_convert import format_sequence
text = format_sequence([
    "str1",
    "str2",
    "str3"
], separator="\n", with_num=True)
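
With separator="\n" and with_num=True, the items are joined one per line with a numeric prefix, so text should look roughly like this (the exact prefix format is an assumption; check the docs):

1. str1
2. str2
3. str3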

3. Tool Calling

Includes the following features:

  • Check and parse tool calls
  • Add human-in-the-loop functionality

3.1 Check and Parse Tool Calls

Use has_tool_calling to check whether a response contains tool calls, and parse_tool_calling to extract them.

import datetime
from langchain_core.tools import tool
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling

@tool
def get_current_time() -> str:
    """Get current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it now?")

if has_tool_calling(response):
    name, args = parse_tool_calling(
        response, first_tool_call_only=True
    )
    print(name, args)
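
A typical next step is to execute the parsed call and feed the result back to the model as a ToolMessage. A minimal sketch using standard LangChain APIs; the tool_call_id comes from response.tool_calls, since parse_tool_calling above returns only the name and arguments:

from langchain_core.messages import HumanMessage, ToolMessage

if has_tool_calling(response):
    name, args = parse_tool_calling(response, first_tool_call_only=True)
    # Execute the tool, then send the result back for a final answer.
    result = get_current_time.invoke(args)
    followup = model.bind_tools([get_current_time]).invoke([
        HumanMessage("What time is it now?"),
        response,
        ToolMessage(content=result, tool_call_id=response.tool_calls[0]["id"]),
    ])
    print(followup.content)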

3.2 Add Human-in-the-Loop Functionality

  • human_in_the_loop: For synchronous tool functions
  • human_in_the_loop_async: For asynchronous tool functions

Both accept a handler parameter for customizing the interrupt payload and the logic that processes the human response; a resume sketch follows the example below.

from langchain_dev_utils.tool_calling import human_in_the_loop
from langchain_core.tools import tool
import datetime

@human_in_the_loop
@tool
def get_current_time() -> str:
    """Get current timestamp"""
    return str(datetime.datetime.now().timestamp())
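
Inside a compiled graph, the decorated tool presumably pauses the run pending human input. Assuming it raises a standard LangGraph interrupt, resuming follows the usual Command pattern; a sketch under that assumption (the checkpointer wiring and the resume payload shape are hypothetical, so adapt them to your handler):

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command

from langchain_dev_utils.agents import create_agent

# Hypothetical wiring: assumes create_agent accepts a checkpointer and
# that the decorated tool interrupts the run pending human approval.
agent = create_agent(
    "vllm:qwen3-4b",
    tools=[get_current_time],
    checkpointer=InMemorySaver(),
)
config = {"configurable": {"thread_id": "demo"}}
result = agent.invoke({"messages": [("user", "What time is it now?")]}, config)
# ... collect the human decision, then resume; the payload shape
# depends on your handler (the one below is an assumption).
resumed = agent.invoke(Command(resume={"decision": "approve"}), config)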

4. Agent Development

Includes the following features:

  • Multi-agent construction
  • Common middleware components

4.1 Multi-Agent Construction

Wrapping agents as tools is a common implementation pattern in multi-agent systems, detailed in the official LangChain documentation. For this, the library provides a pre-built wrap_agent_as_tool function that wraps an agent instance into a tool callable by other agents.

Usage example:

import datetime

from langchain_core.tools import tool
from langchain_dev_utils.agents import create_agent, wrap_agent_as_tool

@tool
def get_current_time() -> str:
    """Get current time"""
    return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

time_agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
call_time_agent_tool = wrap_agent_as_tool(time_agent)

agent = create_agent(
    "vllm:qwen3-4b",
    name="agent",
    tools=[call_time_agent_tool],
)
response = agent.invoke(
    {"messages": [{"role": "user", "content": "What time is it now?"}]}
)
print(response)
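
The returned value is a standard message-state dict, so the final answer is the content of the last message:

print(response["messages"][-1].content)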

4.2 Middleware

Provides common middleware components. Below are examples using ToolCallRepairMiddleware and PlanMiddleware.

ToolCallRepairMiddleware repairs the invalid_tool_calls content produced by large models.

PlanMiddleware adds planning capability to an agent.

from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.agents.middleware import (
    ToolCallRepairMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[
        ToolCallRepairMiddleware(),
        PlanMiddleware(use_read_plan_tool=False),
    ],
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
print(response)

5. State Graph Orchestration

Includes the following features:

  • Sequential graph orchestration
  • Parallel graph orchestration

5.1 Sequential Graph Orchestration

Using create_sequential_pipeline, you can orchestrate multiple subgraphs in sequence:

from langchain.agents import AgentState
from langchain_core.messages import HumanMessage
from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.pipeline import create_sequential_pipeline
from langchain_dev_utils.chat_models import register_model_provider

register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# get_current_time, get_current_weather and get_current_user are @tool
# functions assumed to be defined as in the earlier examples.

# Build sequential pipeline (all subgraphs execute in sequence)
graph = create_sequential_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant; you may only answer questions about the current time. If the question is unrelated to time, reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant; you may only answer questions about the current weather. If the question is unrelated to weather, reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant; you may only answer questions about the current user. If the question is unrelated to users, reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)

response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)

5.2 Parallel Graph Orchestration

Using create_parallel_pipeline, you can orchestrate multiple subgraphs in parallel:

from langchain_dev_utils.pipeline import create_parallel_pipeline

# Reuses the imports, registered provider and tools from section 5.1.
# Build parallel pipeline (all subgraphs execute in parallel)
graph = create_parallel_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant; you may only answer questions about the current time. If the question is unrelated to time, reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant; you may only answer questions about the current weather. If the question is unrelated to weather, reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant; you may only answer questions about the current user. If the question is unrelated to users, reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)
response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)

💬 Join the Community

  • GitHub Repository: browse the source code and submit pull requests
  • Issue Tracker: report bugs or suggest improvements
  • We welcome all forms of contribution, whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together.
