A practical utility library for LangChain and LangGraph development


🦜️🔗 langchain-dev-utils


langchain-dev-utils is a practical utility library focused on enhancing the development experience with LangChain and LangGraph. It provides a set of out-of-the-box utility functions that reduce boilerplate while improving code consistency and readability. By streamlining development workflows, it helps you prototype faster, iterate more smoothly, and build clearer, more reliable LLM-based AI applications.

This is the English documentation. For the Chinese version, please visit 中文文档.

📚 Documentation

🚀 Installation

pip install -U langchain-dev-utils

# Install the full-featured version:
pip install -U langchain-dev-utils[standard]

📦 Core Features

1. Model Management

In LangChain, the init_chat_model function can initialize a chat model instance, but it supports a relatively limited set of model providers. This module provides registration functions (register_model_provider / register_embeddings_provider) to register any model provider, after which models can be loaded with load_chat_model / load_embeddings.

register_model_provider parameters:

  • provider_name: The model provider name, used as an identifier for subsequent model loading.
  • chat_model: The chat model, which can be a ChatModel or a string (currently supports "openai-compatible").
  • base_url: The API address of the model provider.

register_embeddings_provider parameters:

  • provider_name: The embeddings model provider name, used as an identifier for subsequent model loading.
  • embeddings_model: The embeddings model, which can be an Embeddings object or a string (currently supports "openai-compatible").
  • base_url: The API address of the model provider.

Usage Example:

# Chat Model Management
from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register a model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load the model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))
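
Besides the "openai-compatible" string, the chat_model parameter also accepts a ChatModel, so an existing integration class can presumably be registered directly. A minimal sketch (assumes langchain-openai is installed; ChatOpenAI and the "my-openai" provider name are illustrative, not part of this library):

from langchain_openai import ChatOpenAI

# Assumption: a ChatModel class can be passed in place of "openai-compatible".
register_model_provider(provider_name="my-openai", chat_model=ChatOpenAI)
model = load_chat_model("my-openai:gpt-4o-mini")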

Embeddings Model Usage:

from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)
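
Since the returned object implements LangChain's standard Embeddings interface, batch embedding should also work:

# embed_documents is part of LangChain's standard Embeddings interface.
vectors = embeddings.embed_documents(["Hello", "World"])
print(len(vectors), len(vectors[0]))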

Learn More: Chat Model Management, Embeddings Model Management

2. Message Conversion

Includes the following features:

  • Merging reasoning content into the final response
  • Streaming content merging
  • Content formatting tools

Merging reasoning content into the final response:

from langchain_dev_utils.message_convert import (
    convert_reasoning_content_for_ai_message,
    convert_reasoning_content_for_chunk_iterator,
    merge_ai_message_chunk,
    format_sequence
)

response = model.invoke("Hello")

cleaned = convert_reasoning_content_for_ai_message(
    response, think_tag=("<think>", "</think>")
)

for chunk in convert_reasoning_content_for_chunk_iterator(
    model.stream("Hello")
):
    print(chunk.content, end="", flush=True)

Merging streaming responses:

chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
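
A quick sanity check (a sketch; it assumes merge_ai_message_chunk concatenates chunk content in order, matching LangChain's native chunk addition):

from functools import reduce
import operator

# AIMessageChunk implements "+", so summing the chunks manually should yield
# the same text as the merged message (an assumption about merge_ai_message_chunk).
manual = reduce(operator.add, chunks)
assert merged.content == manual.content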

Formatting sequences:

text = format_sequence([
    "str1",
    "str2",
    "str3"
], separator="\n", with_num=True)
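
Assuming with_num=True prefixes each item with its position, the resulting string would look roughly like this:

print(text)
# Assumed output (the numbering format is an assumption about the API):
# 1. str1
# 2. str2
# 3. str3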

Learn More: Message Processing, Format List Content

3. Tool Calling

Includes the following features:

  • Checking and parsing tool calls
  • Adding human-in-the-loop functionality

Usage Example:

import datetime
from typing import cast

from langchain_core.messages import AIMessage
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling, human_in_the_loop

@human_in_the_loop
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it now?")

if has_tool_calling(cast(AIMessage, response)):
    name, args = parse_tool_calling(
        cast(AIMessage, response), first_tool_call_only=True
    )
    print(name, args)
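
For responses with multiple tool calls, the first_tool_call_only flag implies the default parses them all. A hedged sketch, assuming a list of (name, args) pairs comes back:

if has_tool_calling(cast(AIMessage, response)):
    # Assumption: without first_tool_call_only, every tool call is returned
    # as a (name, args) pair.
    for name, args in parse_tool_calling(cast(AIMessage, response)):
        print(name, args)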

Learn More: Adding Human-in-the-Loop Support, Tool Calling Processing

4. Agent Development

Includes the following features:

  • Predefined agent factory functions
  • Common middleware components

Usage Example:

from langchain_dev_utils.agents import create_agent

agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "What time is it now?"}]})
print(response)
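
The invoke result is the agent's final state; for LangGraph-style agents the reply is typically the last entry of "messages". A small follow-up, assuming that standard state shape:

# Assumes the standard LangGraph agent state: a dict with a "messages" list.
print(response["messages"][-1].content)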

Middleware Usage:

from langchain_dev_utils.agents.middleware import (
    SummarizationMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[PlanMiddleware(), SummarizationMiddleware(model="vllm:qwen3-4b")]
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
print(response)

Learn More: Pre-built Agent Functions, Middleware

5. State Graph Orchestration

Includes the following features:

  • Sequential graph orchestration
  • Parallel graph orchestration
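
The examples below reuse get_current_time from the tool-calling section and also reference get_current_weather and get_current_user, which are not defined in this README. Hypothetical stubs like these keep the snippets self-contained:

def get_current_weather() -> str:
    """Get the current weather (hypothetical stub)."""
    return "Sunny, 22°C"

def get_current_user() -> str:
    """Get the current user (hypothetical stub)."""
    return "Alice"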

Sequential Graph Orchestration:

from langchain.agents import AgentState
from langchain_core.messages import HumanMessage
from langchain_dev_utils.agents import create_agent
from langchain_dev_utils.pipeline import sequential_pipeline
from langchain_dev_utils.chat_models import register_model_provider

register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Build a sequential pipeline (all subgraphs execute in order)
graph = sequential_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant. You can only answer the current time. If the question is unrelated to time, please directly reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant. You can only answer the current weather. If the question is unrelated to weather, please directly reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant. You can only answer the current user. If the question is unrelated to the user, please directly reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)

response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)

Parallel Graph Orchestration:

from langchain_dev_utils.pipeline import parallel_pipeline

# Build a parallel pipeline (all subgraphs execute in parallel)
graph = parallel_pipeline(
    sub_graphs=[
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_time],
            system_prompt="You are a time query assistant. You can only answer the current time. If the question is unrelated to time, please directly reply that you cannot answer.",
            name="time_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_weather],
            system_prompt="You are a weather query assistant. You can only answer the current weather. If the question is unrelated to weather, please directly reply that you cannot answer.",
            name="weather_agent",
        ),
        create_agent(
            model="vllm:qwen3-4b",
            tools=[get_current_user],
            system_prompt="You are a user query assistant. You can only answer the current user. If the question is unrelated to the user, please directly reply that you cannot answer.",
            name="user_agent",
        ),
    ],
    state_schema=AgentState,
)
response = graph.invoke({"messages": [HumanMessage("Hello")]})
print(response)
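
Assuming the pipeline compiles to a standard LangGraph graph, you can also stream node-by-node updates instead of waiting for the final state:

# Assumption: the pipeline object exposes LangGraph's standard .stream() API.
for update in graph.stream({"messages": [HumanMessage("Hello")]}):
    print(update)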

Learn More: State Graph Orchestration Pipeline

💬 Join the Community

  • 🐙 GitHub Repository — Browse the source code, submit Pull Requests
  • 🐞 Issue Tracker — Report bugs or suggest improvements
  • 💡 We welcome all forms of contribution — whether it's code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
