
A practical utility library for LangChain and LangGraph development


langchain-dev-utils


langchain-dev-utils is a practical utility library focused on improving the development experience with LangChain and LangGraph. It provides a set of out-of-the-box utilities that cut down on repetitive boilerplate while improving code consistency and readability. By streamlining development workflows, it helps you prototype faster, iterate more smoothly, and build clearer, more reliable LLM-based applications.

This documentation is currently available in English; for the Chinese version, please visit 中文文档.

📚 Documentation

🚀 Installation

pip install -U langchain-dev-utils

# Install the full-featured version (quoted so the extras bracket works in all shells):
pip install -U "langchain-dev-utils[standard]"

📦 Core Features

1. Model Management

In LangChain, the init_chat_model function can be used to initialize a chat model instance, but it supports a relatively limited set of model providers. This module provides registration functions (register_model_provider / register_embeddings_provider) to register any model provider, after which models can be loaded with load_chat_model / load_embeddings.

register_model_provider parameters:

  • provider_name: The model provider name, used as an identifier for subsequent model loading.
  • chat_model: The chat model, which can be a ChatModel or a string (currently supports "openai-compatible").
  • base_url: The API address of the model provider.

register_embeddings_provider parameters:

  • provider_name: The embeddings model provider name, used as an identifier for subsequent model loading.
  • embeddings_model: The embeddings model, which can be an Embeddings object or a string (currently supports "openai-compatible").
  • base_url: The API address of the model provider.

Usage Example:

# Chat Model Management
from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register a model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load the model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))
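
The string passed to load_chat_model takes the form provider_name:model_name, so "vllm:qwen3-4b" loads the qwen3-4b model from the provider registered above as vllm; load_embeddings below follows the same convention.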

Embeddings Model Usage:

from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)

Learn More: Model Management

2. Message Conversion

Includes the following features:

  • Merging reasoning content into the final response
  • Streaming content merging
  • Content formatting tools

Merging reasoning content into the final response:

from langchain_dev_utils.message_convert import (
    convert_reasoning_content_for_ai_message,
    convert_reasoning_content_for_chunk_iterator,
    merge_ai_message_chunk,
    format_sequence
)

response = model.invoke("Hello")

cleaned = convert_reasoning_content_for_ai_message(
    response, think_tag=("<think>", "</think>")
)

for chunk in convert_reasoning_content_for_chunk_iterator(
    model.stream("Hello")
):
    print(chunk.content, end="", flush=True)

Merging streaming responses:

chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)
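
The merged result is a single AI message reassembled from the streamed chunks, which is handy when you stream output to the user but still need the complete response afterwards.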

Formatting sequences:

text = format_sequence([
    "str1",
    "str2",
    "str3"
], separator="\n", with_num=True)
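
With with_num=True, each item is presumably prefixed with its number, producing something along the lines of "1. str1\n2. str2\n3. str3"; check the linked docs for the exact numbering format.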

Learn More: Message Conversion

3. Tool Calling

Includes the following features:

  • Checking and parsing tool calls
  • Adding human-in-the-loop functionality

Usage Example:

import datetime
from typing import cast

from langchain_core.messages import AIMessage
from langchain_dev_utils.tool_calling import has_tool_calling, parse_tool_calling, human_in_the_loop

@human_in_the_loop
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it now?")

if has_tool_calling(cast(AIMessage, response)):
    name, args = parse_tool_calling(
        cast(AIMessage, response), first_tool_call_only=True
    )
    print(name, args)
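
To complete the round trip, you typically execute the tool yourself and feed the result back as a ToolMessage. That is the standard LangChain pattern rather than anything specific to this library. The sketch below uses a plain @tool named get_time (a hypothetical name; the human_in_the_loop wrapper is designed to pause inside a graph, so it is left out here) and reuses model and the imports from the example above:

from langchain_core.messages import HumanMessage, ToolMessage
from langchain_core.tools import tool

@tool
def get_time() -> str:
    """Get the current timestamp."""
    return str(datetime.datetime.now().timestamp())

messages = [HumanMessage("What time is it now?")]
response = cast(AIMessage, model.bind_tools([get_time]).invoke(messages))

if has_tool_calling(response):
    name, args = parse_tool_calling(response, first_tool_call_only=True)
    result = get_time.invoke(args)  # run the tool with the parsed arguments
    messages += [
        response,
        ToolMessage(content=result, tool_call_id=response.tool_calls[0]["id"]),
    ]
    # Hand the tool result back to the model for the final answer
    final = model.bind_tools([get_time]).invoke(messages)
    print(final.content)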

Learn More: Tool Calling

4. Agent Development

Includes the following features:

  • Predefined agent factory functions
  • Common middleware components

Usage Example:

from langchain_dev_utils.agents import create_agent

agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "What time is it now?"}]})
print(response)
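
The call returns the agent's final graph state; with LangGraph-style agents, the assistant's reply is typically the last entry in the messages list, e.g. response["messages"][-1].content.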

Middleware Usage:

from langchain_dev_utils.agents.middleware import (
    SummarizationMiddleware,
    LLMToolSelectorMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[
        PlanMiddleware(),
        SummarizationMiddleware(model="vllm:qwen3-4b"),
        LLMToolSelectorMiddleware(model="vllm:qwen3-4b"),
    ],
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a travel plan to New York"}]})
print(response)
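
Going by their names and LangChain's built-in middleware of the same names: SummarizationMiddleware compresses long conversation histories with the given model, LLMToolSelectorMiddleware uses the model to pre-select relevant tools on each turn, and PlanMiddleware presumably keeps the agent working against an explicit plan. See the linked docs for the exact behavior.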

Learn More: Agent Development

5. State Graph Orchestration

Includes the following features:

  • Sequential graph orchestration
  • Parallel graph orchestration

Usage Example (State and make_graph are user-defined helpers; a minimal sketch of both follows the example):

from langchain_dev_utils.pipeline import sequential_pipeline, parallel_pipeline

# Build a sequential workflow
graph = sequential_pipeline(
    sub_graphs=[
        make_graph("graph1"),
        make_graph("graph2"),
        make_graph("graph3"),
    ],
    state_schema=State,
)

# Build a parallel workflow
graph = parallel_pipeline(
    sub_graphs=[
        make_graph("graph1"),
        make_graph("graph2"),
        make_graph("graph3"),
    ],
    state_schema=State,
)
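
The State schema and make_graph helper assumed above are not part of langchain-dev-utils; a minimal sketch built from plain LangGraph primitives might look like this:

from typing import Annotated
from typing_extensions import TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

def make_graph(name: str):
    # A trivial single-node subgraph; real subgraphs would do actual work.
    builder = StateGraph(State)
    builder.add_node(name, lambda state: {})  # no-op node: returns an empty state update
    builder.add_edge(START, name)
    builder.add_edge(name, END)
    return builder.compile(name=name)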

Learn More: State Graph Orchestration

💬 Join the Community

  • 🐙 GitHub Repository — Browse the source code, submit Pull Requests
  • 🐞 Issue Tracker — Report bugs or suggest improvements
  • 💡 We welcome all forms of contribution — whether it's code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
