langchain-dev-utils

A practical utility library for LangChain and LangGraph development

langchain-dev-utils is a utility library focused on enhancing the development experience with LangChain and LangGraph. It provides a collection of ready-to-use utility functions that reduce repetitive code while improving code consistency and readability. By streamlining development workflows, this library helps you build prototypes faster, iterate more smoothly, and create clearer, more reliable AI applications powered by large language models.

📚 Documentation

🚀 Installation

pip install -U langchain-dev-utils

# For all features:
pip install -U "langchain-dev-utils[standard]"

📦 Core Features

1. Model Management

  • Register any chat model or embeddings provider
  • Unified interface with load_chat_model() / load_embeddings()
# Chat model management
from langchain_dev_utils.chat_models import (
    register_model_provider,
    load_chat_model,
)

# Register model provider
register_model_provider(
    provider_name="vllm",
    chat_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)

# Load model
model = load_chat_model("vllm:qwen3-4b")
print(model.invoke("Hello"))

# Embeddings management
from langchain_dev_utils.embeddings import register_embeddings_provider, load_embeddings

register_embeddings_provider(
    provider_name="vllm",
    embeddings_model="openai-compatible",
    base_url="http://localhost:8000/v1",
)
embeddings = load_embeddings("vllm:qwen3-embedding-4b")
emb = embeddings.embed_query("Hello")
print(emb)
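
A minimal follow-up sketch, assuming only the standard LangChain Embeddings interface (embed_query / embed_documents) on the object returned by load_embeddings; it reuses the embeddings loaded above to rank two sentences against a query by cosine similarity:

import math

query_vec = embeddings.embed_query("How do I install the library?")
doc_vecs = embeddings.embed_documents([
    "Run pip install -U langchain-dev-utils.",
    "New York is a large city in the United States.",
])

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# The installation sentence should score noticeably higher than the unrelated one
print([cosine(query_vec, d) for d in doc_vecs])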

2. Message Processing

  • Merge reasoning content into final response
  • Stream-aware chunk merging
  • Content formatting utilities
from langchain_dev_utils.message_convert import (
    convert_reasoning_content_for_ai_message,
    convert_reasoning_content_for_chunk_iterator,
    merge_ai_message_chunk,
    format_sequence
)

response = model.invoke("Hello")
# Merge reasoning content into the final response
cleaned = convert_reasoning_content_for_ai_message(
    response, think_tag=("<think>", "</think>")
)

# Merge reasoning content while streaming
for chunk in convert_reasoning_content_for_chunk_iterator(
    model.stream("Hello")
):
    print(chunk.content, end="", flush=True)

# Merge streaming chunks
chunks = list(model.stream("Hello"))
merged = merge_ai_message_chunk(chunks)

# Format sequence
text = format_sequence([
    "str1",
    "str2",
    "str3"
], separator="\n", with_num=True)
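
For reference, separator="\n" joins the items into a single string and with_num=True is expected to number them; the exact numbering style is the library's choice, but the result should have roughly this shape (illustrative, not verified output):

print(text)
# 1. str1
# 2. str2
# 3. str3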

3. Tool Calling

  • Check and parse tool calls
  • Human-in-the-loop functionality for tool execution
import datetime
from typing import cast

from langchain_core.messages import AIMessage
from langchain_dev_utils.tool_calling import (
    has_tool_calling,
    human_in_the_loop,
    parse_tool_calling,
)

@human_in_the_loop
def get_current_time() -> str:
    """Get current timestamp"""
    return str(datetime.datetime.now().timestamp())

response = model.bind_tools([get_current_time]).invoke("What time is it?")

if has_tool_calling(cast(AIMessage, response)):
    name, args = parse_tool_calling(
        cast(AIMessage, response), first_tool_call_only=True
    )
    print(name, args)
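
Once parsed, the call can be dispatched by hand. A sketch under two assumptions: the human_in_the_loop-wrapped function stays directly callable outside a graph run, and the response carries standard tool_calls entries with an "id" field:

from langchain_core.messages import HumanMessage, ToolMessage

# Hypothetical dispatch table mapping tool names to callables
tools_by_name = {"get_current_time": get_current_time}

if has_tool_calling(cast(AIMessage, response)):
    name, args = parse_tool_calling(cast(AIMessage, response), first_tool_call_only=True)
    result = tools_by_name[name](**args)  # run the requested tool locally
    followup = model.invoke([
        HumanMessage("What time is it?"),
        response,
        # Return the result under the original tool_call_id
        ToolMessage(content=str(result), tool_call_id=cast(AIMessage, response).tool_calls[0]["id"]),
    ])
    print(followup.content)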

4. Agent Development

  • Pre-built agent factory functions
  • Common middleware components
# Basic agent
from langchain_dev_utils.agents import create_agent

agent = create_agent("vllm:qwen3-4b", tools=[get_current_time], name="time-agent")
response = agent.invoke({"messages": [{"role": "user", "content": "What time is it?"}]})
print(response)
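
If create_agent returns a compiled LangGraph graph, as langchain's own agent factories do (an assumption here; check the documentation for the exact return type), you can also stream intermediate state instead of waiting for the final result:

# Stream full state snapshots as the agent works
for state in agent.stream(
    {"messages": [{"role": "user", "content": "What time is it?"}]},
    stream_mode="values",
):
    state["messages"][-1].pretty_print()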


# Middleware
from langchain_dev_utils.agents.middleware import (
    SummarizationMiddleware,
    LLMToolSelectorMiddleware,
    PlanMiddleware,
)

agent = create_agent(
    "vllm:qwen3-4b",
    name="plan-agent",
    middleware=[PlanMiddleware(), SummarizationMiddleware(), LLMToolSelectorMiddleware()],
)
response = agent.invoke({"messages": [{"role": "user", "content": "Give me a plan to travel to New York"}]})
print(response)
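
As the names suggest, PlanMiddleware maintains a working plan across turns, SummarizationMiddleware compresses long message histories, and LLMToolSelectorMiddleware narrows the tool set per step; see the documentation for each middleware's configuration options.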

5. State Graph Orchestration

  • Sequential graph pipelines
  • Parallel graph pipelines
from langchain_dev_utils.pipeline import sequential_pipeline, parallel_pipeline

# Build sequential pipeline
graph = sequential_pipeline(
    sub_graphs=[
        make_graph("graph1"),
        make_graph("graph2"),
        make_graph("graph3"),
    ],
    state_schema=State,
)

# Build parallel pipeline
graph = parallel_pipeline(
    sub_graphs=[
        make_graph("graph1"),
        make_graph("graph2"),
        make_graph("graph3"),
    ],
    state_schema=State,
)
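
The pipeline snippets assume a State schema and a make_graph helper that are defined elsewhere and are not part of the library. One hypothetical way to supply them with plain LangGraph, assuming the sub_graphs parameter accepts compiled graphs:

from typing import Annotated, TypedDict

from langchain_core.messages import AIMessage, AnyMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]

def make_graph(name: str):
    """Build a trivial one-node subgraph that appends a message tagged with its name."""

    def node(state: State) -> dict:
        return {"messages": [AIMessage(content=f"processed by {name}")]}

    builder = StateGraph(State)
    builder.add_node(name, node)
    builder.add_edge(START, name)
    builder.add_edge(name, END)
    return builder.compile()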

💬 Join the Community

  • 🐙 GitHub Repository — Browse source code, submit pull requests
  • 🐞 Issue Tracker — Report bugs or suggest improvements
  • 💡 We welcome all forms of contribution — whether it's code, documentation, or usage examples. Let's build a more powerful and practical LangChain development ecosystem together!
