
🦜️🧰 langchain-dev-utils

A utility library for LangChain and LangGraph development.


This is the English version. For the Chinese version, please visit Chinese Documentation

langchain-dev-utils is a utility library focused on improving the development experience with LangChain and LangGraph. It provides a set of out-of-the-box utility functions that reduce repetitive code and improve consistency and readability. By simplifying common workflows, it helps you prototype faster, iterate more smoothly, and build clearer, more reliable LLM-based applications.

🚀 Installation

```shell
pip install -U langchain-dev-utils

# Install the full-featured version (quoted so shells like zsh do not expand the brackets):
pip install -U "langchain-dev-utils[standard]"
```

📦 Core Features

1. Model Management

In LangChain, the init_chat_model / init_embeddings functions initialize chat and embedding model instances, but they support only a limited set of model providers. This module provides registration functions (register_model_provider / register_embeddings_provider) that let you register any model provider, after which models can be loaded with load_chat_model / load_embeddings.

1.1 Chat Model Management

Mainly consists of the following two functions:

  • register_model_provider: Register a chat model provider
  • load_chat_model: Load a chat model

register_model_provider Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| provider_name | str | Yes | - | Name of the model provider, used as an identifier when loading models later. |
| chat_model | ChatModel \| str | Yes | - | The chat model, either a ChatModel instance or a string (currently only "openai-compatible" is supported). |
| base_url | str | No | - | API endpoint URL of the model provider (applies to both chat_model types, but is primarily used when chat_model is the string "openai-compatible"). |
| model_profiles | dict | No | - | Declares the capabilities and parameters of each model offered by this provider. The entry matching the model name is loaded into model.profile (e.g., fields such as max_input_tokens, tool_calling). |
| compatibility_options | dict | No | - | Compatibility options for the provider (only effective when chat_model is the string "openai-compatible"). Declares which OpenAI-compatible features are supported (e.g., tool_choice strategies, JSON mode) to ensure correct functional adaptation. |

load_chat_model Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | str | Yes | - | Chat model name |
| model_provider | str | No | - | Chat model provider name |
| kwargs | dict | No | - | Additional parameters passed to the chat model class, e.g., temperature, top_p, etc. |

Example for integrating a qwen3-4b model deployed using vllm:

```python
from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))
```
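For reference, model_profiles is keyed by model name, with each value declaring that model's capabilities using the fields described in the table above. A hedged sketch of the shape such a declaration might take (the concrete values below are illustrative assumptions, not vLLM defaults):

```python
# Hypothetical profile map: keys are model names served by the provider,
# values describe each model's capabilities (the values are assumptions).
model_profiles = {
    "qwen3-4b": {
        "max_input_tokens": 32768,
        "tool_calling": True,
    },
}

# It would then be passed at registration time, e.g.:
# register_model_provider(
#     provider_name="vllm",
#     chat_model="openai-compatible",
#     base_url="http://localhost:8000/v1",
#     model_profiles=model_profiles,
# )
```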

1.2 Embedding Model Management

Mainly consists of the following two functions:

  • register_embeddings_provider: Register an embedding model provider
  • load_embeddings: Load an embedding model

register_embeddings_provider Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| provider_name | str | Yes | - | Embedding model provider name, used as an identifier for subsequent model loading |
| embeddings_model | Embeddings \| str | Yes | - | Embedding model, either an Embeddings instance or a string (currently supports "openai-compatible") |
| base_url | str | No | - | API address of the embedding model provider (applies to both embeddings_model types, but is mainly used when embeddings_model is the string "openai-compatible") |

load_embeddings Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | str | Yes | - | Embedding model name |
| provider | str | No | - | Embedding model provider name |
| kwargs | dict | No | - | Other additional parameters |

Example for integrating a qwen3-embedding-4b model deployed using vllm:

```python
from langchain_dev_utils.embeddings import (
    register_embeddings_provider,
    load_embeddings,
)

# Register embedding model provider
register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load embedding model
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)
```
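The returned embedding is a plain list of floats, typically compared with cosine similarity. A self-contained sketch, independent of the library:

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```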

For more information about model management, please refer to: Chat Model Management, Embedding Model Management

2. Message Conversion

Includes the following features:

  • Merge reasoning content into the final response
  • Stream content merging
  • Content formatting tools

2.1 Stream Content Merging

For streamed responses obtained from stream() or astream(), use merge_ai_message_chunk to merge the chunks into a final AIMessage.

merge_ai_message_chunk Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| chunks | List[AIMessageChunk] | Yes | - | List of AIMessageChunk objects |

Example:

```python
chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
```
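Conceptually, merging stream chunks concatenates their incremental content in order. A minimal plain-Python sketch of that idea (a toy stand-in, not the library's implementation):

```python
from dataclasses import dataclass

# Toy stand-in for AIMessageChunk: each chunk carries an increment of text.
@dataclass
class Chunk:
    content: str

def merge_chunks(chunks):
    # The merged message content is the concatenation of every chunk's content.
    return "".join(c.content for c in chunks)

chunks = [Chunk("Hel"), Chunk("lo"), Chunk(", world!")]
print(merge_chunks(chunks))  # Hello, world!
```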

2.2 Format List Content

Use format_sequence to join a list of messages, documents, or strings into a single formatted string.

format_sequence Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| inputs | List | Yes | - | A list whose items are any of: langchain_core.messages, langchain_core.documents.Document, or str |
| separator | str | No | "-" | String used to join the items |
| with_num | bool | No | False | If True, add a numeric prefix to each item (e.g., "1. Hello") |

Example:

```python
text = format_sequence(
    ["str1", "str2", "str3"],
    separator="\n",
    with_num=True,
)
```
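Given the parameters above, the call presumably yields numbered items joined by the separator. A plain-Python sketch of that documented behavior (not the library's code):

```python
def format_like(items, separator="-", with_num=False):
    # Mirror the documented behavior: optionally number each item
    # ("1. Hello" style), then join everything with the separator.
    if with_num:
        items = [f"{i}. {s}" for i, s in enumerate(items, start=1)]
    return separator.join(items)

print(format_like(["str1", "str2", "str3"], separator="\n", with_num=True))
# 1. str1
# 2. str2
# 3. str3
```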

For more information about message conversion, please refer to: Message Process, Formatting List Content

3. Tool Calling

Includes the following features:

  • Check and parse tool calls
  • Add human-in-the-loop functionality

3.1 Check and Parse Tool Calls

has_tool_calling and parse_tool_calling are used to check and parse tool calls.

has_tool_calling Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| message | AIMessage | Yes | - | AIMessage object |

parse_tool_calling Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| message | AIMessage | Yes | - | AIMessage object |
| first_tool_call_only | bool | No | False | Whether to parse only the first tool call |

Example:

```python
import datetime

from langchain_core.tools import tool
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling

@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it?")

if has_tool_calling(response):
    name, args = parse_tool_calling(response, first_tool_call_only=True)
    print(name, args)
```
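Under the hood, a tool-calling AIMessage carries its calls as a list of dicts with "name" and "args" keys. A plain-Python sketch of checking for calls and extracting the first one (illustrative only, not the library's code):

```python
# Shape of the tool_calls payload on a tool-calling AIMessage.
message = {
    "tool_calls": [
        {"name": "get_current_time", "args": {}},
    ]
}

def has_calls(msg):
    # A message "has tool calls" when the list is present and non-empty.
    return bool(msg.get("tool_calls"))

def first_call(msg):
    # Return (name, args) of the first tool call.
    call = msg["tool_calls"][0]
    return call["name"], call["args"]

if has_calls(message):
    name, args = first_call(message)
    print(name, args)  # get_current_time {}
```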

3.2 Add Human-in-the-Loop Functionality

  • human_in_the_loop: For synchronous tool functions
  • human_in_the_loop_async: For asynchronous tool functions

Both can accept a handler parameter for custom breakpoint return and response handling logic.

```python
import datetime

from langchain_core.tools import tool
from langchain_dev_utils import human_in_the_loop

@human_in_the_loop
@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())
```
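Conceptually, such a decorator pauses execution at the tool boundary until a human reviewer responds (presumably via LangGraph breakpoints, per the handler parameter above). A toy plain-Python sketch of the wrapping pattern only; the real decorator works differently:

```python
import functools

def approval_gate(fn, approve=lambda name: True):
    # Toy stand-in: consult a human-approval hook before running the tool.
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        if not approve(fn.__name__):
            return f"Tool {fn.__name__} was rejected by the reviewer."
        return fn(*args, **kwargs)
    return wrapper

def get_current_time():
    return "1700000000.0"

gated = approval_gate(get_current_time, approve=lambda name: True)
print(gated())  # 1700000000.0
```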

For more information about tool calling, please refer to: Add Human-in-the-Loop Support, Tool Call Handling

4. Agent Development

Includes the following features:

  • Predefined agent factory functions
  • Common middleware components

4.1 Agent Factory Functions

In LangChain v1, the official create_agent function creates a single agent; its model parameter accepts a BaseChatModel instance or a specific string (strings are limited to the models supported by init_chat_model). To make string-based model selection more flexible, this library provides a functionally identical create_agent that also accepts any model supported by load_chat_model (the provider must be registered first).

create_agent Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| model | str \| BaseChatModel | Yes | - | Model name or model instance. Either a string identifier for a model registered with register_model_provider, or a BaseChatModel instance. |
| (other parameters) | various | No | - | Identical to langchain.agents.create_agent |

Usage example:

```python
from langchain_dev_utils.agents import create_agent

# get_current_time is the tool defined in section 3.1
agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "What time is it?"}]})
print(response)
```

4.2 Middleware

Provides some commonly used middleware components. Below is an example combining SummarizationMiddleware (conversation summarization for the agent) and PlanMiddleware (planning for the agent).

```python
from langchain_dev_utils.agents.middleware import (
    SummarizationMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[PlanMiddleware(), SummarizationMiddleware(model="vllm:qwen3-4b")],
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
print(response)
```

For more information about agent development and all built-in middleware, please refer to: Pre-built Agent Functions, Middleware

5. State Graph Orchestration

Includes the following features:

  • Sequential graph orchestration
  • Parallel graph orchestration

5.1 Sequential Graph Orchestration

Sequential graph orchestration uses create_sequential_pipeline.

create_sequential_pipeline Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| sub_graphs | List[StateGraph] | Yes | - | List of state graphs to combine (must be StateGraph instances) |
| state_schema | type | Yes | - | State schema for the final generated graph |
| graph_name | str | No | - | Name of the final generated graph |
| context_schema | type | No | - | Context schema for the final generated graph |
| input_schema | type | No | - | Input schema for the final generated graph |
| output_schema | type | No | - | Output schema for the final generated graph |
| checkpoint | BaseCheckpointSaver | No | - | LangGraph persistence checkpoint |
| store | BaseStore | No | - | LangGraph persistence store |
| cache | BaseCache | No | - | LangGraph cache |

Example:

```python
from langchain.agents import AgentState
from langchain_core.messages import HumanMessage
from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.pipeline import create_sequential_pipeline
from langchain_dev_utils.chat_models import register_model_provider

register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Build a sequential pipeline (all sub-graphs execute in order).
# get_current_time, get_current_weather, and get_current_user are
# user-defined tools (get_current_time is shown in section 3.1).
graph = create_sequential_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant; you can only answer questions about the current time. If the question is unrelated to time, reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant; you can only answer questions about the current weather. If the question is unrelated to weather, reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant; you can only answer questions about the current user. If the question is unrelated to the user, reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)

response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
```

5.2 Parallel Graph Orchestration

Parallel graph orchestration uses create_parallel_pipeline.

create_parallel_pipeline Parameters:

| Parameter | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| sub_graphs | List[StateGraph] | Yes | - | List of state graphs to combine |
| state_schema | type | Yes | - | State schema for the final generated graph |
| branches_fn | Callable | Yes | - | Parallel branch function; returns a list of Send objects to control parallel execution |
| graph_name | str | No | - | Name of the final generated graph |
| context_schema | type | No | - | Context schema for the final generated graph |
| input_schema | type | No | - | Input schema for the final generated graph |
| output_schema | type | No | - | Output schema for the final generated graph |
| checkpoint | BaseCheckpointSaver | No | - | LangGraph persistence checkpoint |
| store | BaseStore | No | - | LangGraph persistence store |
| cache | BaseCache | No | - | LangGraph cache |

Example:

```python
from langchain_dev_utils.pipeline import create_parallel_pipeline

# Build a parallel pipeline (all sub-graphs execute in parallel).
graph = create_parallel_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant; you can only answer questions about the current time. If the question is unrelated to time, reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant; you can only answer questions about the current weather. If the question is unrelated to weather, reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant; you can only answer questions about the current user. If the question is unrelated to the user, reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)
response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
```
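The example above does not show branches_fn, which the parameter table lists as required: it returns a list of Send objects that fan work out to the sub-graphs. A toy plain-Python sketch of its shape, using a named-tuple stand-in for langgraph's Send (real code should import Send from langgraph.types):

```python
from collections import namedtuple

# Stand-in for langgraph.types.Send, used here only to illustrate the
# shape of a branch function; in real code, import Send from langgraph.
Send = namedtuple("Send", ["node", "arg"])

def branches_fn(state):
    # Fan the current messages out to every sub-graph in parallel.
    return [
        Send(name, {"messages": state["messages"]})
        for name in ("time_agent", "weather_agent", "user_agent")
    ]

sends = branches_fn({"messages": ["Hello"]})
print([s.node for s in sends])  # ['time_agent', 'weather_agent', 'user_agent']
```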

For more information about state graph orchestration, please refer to: State Graph Orchestration

💬 Join the Community

  • GitHub Repository — Browse source code, submit Pull Requests
  • Issue Tracker — Report bugs or suggest improvements
  • We welcome contributions in all forms — whether code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
