🦜️🧰 langchain-dev-utils
A utility library for LangChain and LangGraph development.
This is the English version. For the Chinese version, please see the Chinese Documentation.
langchain-dev-utils is a utility library focused on enhancing the development experience with LangChain and LangGraph. It provides a series of out-of-the-box utility functions that can both reduce repetitive code writing and improve code consistency and readability. By simplifying development workflows, this library helps you prototype faster, iterate more smoothly, and create clearer, more reliable LLM-based AI applications.
🚀 Installation
pip install -U langchain-dev-utils

# Or install the full-featured version (quote the extras so your shell doesn't expand the brackets):
pip install -U "langchain-dev-utils[standard]"
📦 Core Features
1. Model Management
In LangChain, the init_chat_model and init_embeddings functions initialize chat model and embedding model instances, but the set of model providers they support is relatively limited. This module provides registration functions (register_model_provider and register_embeddings_provider) for registering any model provider, so that models can then be loaded with load_chat_model and load_embeddings.
1.1 Chat Model Management
Mainly consists of the following two functions:
- register_model_provider: Register a chat model provider
- load_chat_model: Load a chat model
Example of integrating a qwen3-4b model deployed with vLLM:
from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register the model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load the model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))
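Note: if the endpoint requires authentication, an API key must also be supplied; a locally deployed vLLM server typically accepts any placeholder key unless one was configured at startup.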
1.2 Embedding Model Management
Mainly consists of the following two functions:
- register_embeddings_provider: Register an embedding model provider
- load_embeddings: Load an embedding model
Example of integrating a qwen3-embedding-4b model deployed with vLLM:
from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

# Register the embedding model provider
register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load the embedding model
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)
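The loaded object is a standard LangChain Embeddings instance, so it can be passed straight into any vector store. A minimal sketch using the in-memory store from langchain_core:

from langchain_core.vectorstores import InMemoryVectorStore

# Any LangChain vector store accepts the loaded embeddings directly
store = InMemoryVectorStore(embedding=embeddings)
store.add_texts(["LangChain makes LLM apps composable."])
print(store.similarity_search("composable", k=1))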
For more information about model management, please refer to: Chat Model Management, Embedding Model Management
2. Message Conversion
Includes the following features:
- Merge reasoning content into the final response
- Stream content merging
- Content formatting tools
2.1 Stream Content Merging
For streamed responses obtained via stream() or astream(), use merge_ai_message_chunk to merge the chunks into a final AIMessage.
from langchain_dev_utils.message_convert import merge_ai_message_chunk
chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
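The same helper applies to asynchronous streaming. A minimal sketch that collects chunks from astream(), assuming the model loaded in section 1.1:

import asyncio

async def main():
    # Chunks collected from astream() merge the same way as stream() chunks
    chunks = [chunk async for chunk in model.astream("Hello")]
    merged = merge_ai_message_chunk(chunks)
    print(merged.content)

asyncio.run(main())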
2.2 Format List Content
format_sequence joins a list of strings into a single string, with a configurable separator and optional numbering.
from langchain_dev_utils.message_convert import format_sequence
text = format_sequence([
    "str1",
    "str2",
    "str3",
], separator="\n", with_num=True)
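With with_num=True, each item is prefixed with its position, so the result should be roughly:

1. str1
2. str2
3. str3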
For more information about message conversion, please refer to: Message Process, Formatting List Content
3. Tool Calling
Includes the following features:
- Check and parse tool calls
- Add human-in-the-loop functionality
3.1 Check and Parse Tool Calls
has_tool_calling and parse_tool_calling are used to check and parse tool calls.
import datetime

from langchain_core.tools import tool
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling


@tool
def get_current_time() -> str:
    """Get the current timestamp."""
    return str(datetime.datetime.now().timestamp())


response = model.bind_tools([get_current_time]).invoke("What time is it?")
if has_tool_calling(response):
    name, args = parse_tool_calling(response, first_tool_call_only=True)
    print(name, args)
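Continuing the example above, the parsed tool call can be executed and the result sent back to the model. This sketch uses only standard LangChain messages; the tool_call_id is read from the AIMessage's built-in tool_calls list:

from langchain_core.messages import HumanMessage, ToolMessage

# Execute the parsed tool call and feed the result back to the model
result = get_current_time.invoke(args)
followup = model.bind_tools([get_current_time]).invoke([
    HumanMessage("What time is it?"),
    response,
    ToolMessage(content=result, tool_call_id=response.tool_calls[0]["id"]),
])
print(followup.content)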
3.2 Add Human-in-the-Loop Functionality
- human_in_the_loop: For synchronous tool functions
- human_in_the_loop_async: For asynchronous tool functions
Both accept an optional handler parameter for customizing the breakpoint payload and how the human response is processed.
from langchain_dev_utils.tool_calling import human_in_the_loop
from langchain_core.tools import tool
import datetime


@human_in_the_loop
@tool
def get_current_time() -> str:
    """Get the current timestamp."""
    return str(datetime.datetime.now().timestamp())
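When the decorated tool runs inside a LangGraph-based agent, execution pauses at the breakpoint and is resumed with LangGraph's standard Command(resume=...) mechanism. A rough sketch, assuming human_in_the_loop builds on LangGraph's interrupt() and that a checkpointer is configured; the resume payload shape here is hypothetical:

from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command
from langchain_dev_utils.agents import create_agent

agent = create_agent(
    "vllm:qwen3-4b",
    tools=[get_current_time],
    checkpointer=InMemorySaver(),  # a checkpointer is required for pause/resume
)
config = {"configurable": {"thread_id": "demo"}}

# The first invocation pauses at the human-in-the-loop breakpoint
agent.invoke({"messages": [{"role": "user", "content": "What time is it?"}]}, config)

# Resume with a hypothetical approval payload; the actual shape depends on the handler
agent.invoke(Command(resume={"action": "accept"}), config)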
For more information about tool calling, please refer to: Add Human-in-the-Loop Support, Tool Call Handling
4. Agent Development
Includes the following features:
- Predefined agent factory functions
- Common middleware components
4.1 Agent Factory Functions
In LangChain v1, the official create_agent function creates a single agent; its model parameter accepts either a BaseChatModel instance or a specific string (and when a string is given, only models supported by init_chat_model are allowed). To make string-based model selection more flexible, this library provides an equivalent create_agent function that accepts any model supported by load_chat_model (the provider must be registered beforehand).
Usage example:
from langchain_dev_utils.agents import create_agent

agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "What time is it?"}]})
print(response)
4.2 Middleware
Provides some commonly used middleware components. Below, we illustrate with ToolCallRepairMiddleware and PlanMiddleware.
- ToolCallRepairMiddleware: repairs invalid_tool_calls generated by large language models.
- PlanMiddleware: used for agent planning.
from langchain_dev_utils.agents.middleware import (
    ToolCallRepairMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[
        ToolCallRepairMiddleware(),
        PlanMiddleware(use_read_plan_tool=False),
    ],
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
print(response)
For more information about agent development and all built-in middleware, please refer to: Pre-built Agent Functions, Middleware
5. State Graph Orchestration
Includes the following capabilities:
- Sequential graph orchestration
- Parallel graph orchestration
5.1 Sequential Graph Orchestration
Use create_sequential_pipeline to orchestrate multiple subgraphs in sequential order:
from langchain.agents import AgentState
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.pipeline import create_sequential_pipeline
from langchain_dev_utils.chat_models import register_model_provider

register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)


# Simple example tools for the weather and user agents
# (get_current_time is the tool defined in section 3.1)
@tool
def get_current_weather() -> str:
    """Get the current weather."""
    return "sunny"


@tool
def get_current_user() -> str:
    """Get the current user."""
    return "admin"


# Build a sequential pipeline (all subgraphs executed in order)
graph = create_sequential_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)
response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
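Since every subgraph shares the same AgentState, each agent in the sequence presumably sees the message history accumulated by the agents that ran before it.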
5.2 Parallel Graph Orchestration
Use create_parallel_pipeline to orchestrate multiple subgraphs in parallel:
from langchain_dev_utils.pipeline import create_parallel_pipeline

# Build a parallel pipeline (all subgraphs executed concurrently)
graph = create_parallel_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time-query assistant. You can only answer questions about the current time. If the question is unrelated to time, respond with 'I cannot answer that.'",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather-query assistant. You can only answer questions about the current weather. If the question is unrelated to weather, respond with 'I cannot answer that.'",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user-query assistant. You can only answer questions about the current user. If the question is unrelated to the user, respond with 'I cannot answer that.'",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)
response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
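Unlike the sequential variant, all subgraphs here receive the input concurrently, and their message updates are merged back into the shared state (the messages channel of AgentState uses LangGraph's additive add_messages reducer).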
For more information about state graph orchestration, please refer to: State Graph Orchestration
💬 Join the Community
- GitHub Repository — Browse source code, submit Pull Requests
- Issue Tracker — Report bugs or suggest improvements
- We welcome contributions in all forms — whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!