# langchain-dev-utils

A practical enhancement utility library for LangChain / LangGraph developers, for building complex yet maintainable large language model applications.
## 📚 Documentation

## 🚀 Installation

```bash
pip install -U langchain-dev-utils

# For all features of this library:
pip install -U "langchain-dev-utils[standard]"
```
## 📦 Core Features
### 1. Model Management

- Supports registering any chat model or embedding model provider
- Provides unified interfaces `load_chat_model()` / `load_embeddings()` to simplify model loading
- Fully compatible with LangChain's official `init_chat_model` / `init_embeddings`, enabling seamless extension
```python
from langchain_dev_utils import register_model_provider, load_chat_model
from langchain_qwq import ChatQwen

register_model_provider("dashscope", ChatQwen)
register_model_provider("openrouter", "openai-compatible", base_url="https://openrouter.ai/api/v1")

model = load_chat_model("dashscope:qwen-flash")
print(model.invoke("Hello!"))
```
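The `"provider:model"` string resolution can be pictured with a minimal registry sketch. The names below are illustrative only and do not reflect the library's internals:

```python
# Hypothetical sketch of the registry pattern behind register/load helpers:
# map a provider name to a model factory plus default kwargs, then resolve
# "provider:model" strings against that map.
_providers: dict = {}


def register(name, factory, **defaults):
    """Associate a provider name with a model class/factory."""
    _providers[name] = (factory, defaults)


def load(spec):
    """Resolve a 'provider:model' string into a model instance."""
    provider, model_name = spec.split(":", 1)
    factory, defaults = _providers[provider]
    return factory(model=model_name, **defaults)


class DummyChat:
    """Stand-in for a chat model class such as ChatQwen."""
    def __init__(self, model, **kwargs):
        self.model = model


register("dashscope", DummyChat)
print(load("dashscope:qwen-flash").model)  # qwen-flash
```

This is why registered providers compose cleanly with the official `init_chat_model`: the string format is the same, only the lookup table is extended.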
### 2. Message Processing

- Automatically merges reasoning content (e.g., from DeepSeek models) into the `content` field
- Supports streaming and asynchronous streaming responses (`stream` / `astream`)
- Utility functions include:
  - `merge_ai_message_chunk()`: merges message chunks
  - `has_tool_calling()` / `parse_tool_calling()`: detects and parses tool calls
  - `message_format()`: formats messages or document lists (with numbering, separators, etc.)
```python
from langchain_dev_utils import has_tool_calling, parse_tool_calling

response = model.invoke("What time is it now?")
if has_tool_calling(response):
    tool_calls = parse_tool_calling(response)
    print(tool_calls)
```
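Conceptually, these two helpers reduce to inspecting the tool-call payload an AI message carries. The sketch below uses a hypothetical stand-in class rather than a real LangChain `AIMessage`, purely to show the shape of the check:

```python
# Illustrative sketch only: a stand-in message type with the one
# attribute that matters for tool-call detection.
from dataclasses import dataclass, field


@dataclass
class FakeAIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)


def sketch_has_tool_calling(msg) -> bool:
    # A message "has tool calls" when its tool_calls list is non-empty.
    return bool(getattr(msg, "tool_calls", []))


def sketch_parse_tool_calling(msg):
    # Extract (name, args) pairs for each requested tool call.
    return [(tc["name"], tc["args"]) for tc in msg.tool_calls]


msg = FakeAIMessage(tool_calls=[{"name": "get_current_time", "args": {}}])
print(sketch_parse_tool_calling(msg))  # [('get_current_time', {})]
```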
### 3. Tool Enhancement

- Easily extend existing tools with new capabilities
- Currently supports adding human-in-the-loop functionality to tools
```python
import asyncio
import datetime

from langchain_core.tools import tool
from langchain_dev_utils import human_in_the_loop_async


@human_in_the_loop_async
@tool
async def async_get_current_time() -> str:
    """Asynchronously retrieve the current timestamp."""
    await asyncio.sleep(1)
    return str(datetime.datetime.now().timestamp())
```
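Conceptually, human-in-the-loop gating means pausing before a tool executes so a reviewer can approve or reject the call. A minimal sketch of that idea with a plain approval callback follows; the `with_approval` API is hypothetical, and the real decorator presumably hooks into LangGraph's interrupt mechanism instead:

```python
# Hypothetical sketch: gate an async tool behind an approval callback.
import asyncio
import functools


def with_approval(approve):
    """Wrap an async callable so it only runs if approve(...) returns True."""
    def decorator(fn):
        @functools.wraps(fn)
        async def wrapper(*args, **kwargs):
            if not approve(fn.__name__, args, kwargs):
                return "Tool call rejected by reviewer."
            return await fn(*args, **kwargs)
        return wrapper
    return decorator


@with_approval(lambda name, args, kwargs: True)  # auto-approve for the demo
async def get_time() -> str:
    return "12:00"


print(asyncio.run(get_time()))  # 12:00
```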
### 4. Context Engineering

- Automatically generates essential context management tools:
  - `create_write_plan_tool()` / `create_update_plan_tool()`
  - `create_write_note_tool()` / `create_query_note_tool()` / `create_ls_tool()` / `create_update_note_tool()`
- Provides the corresponding State classes, so you don't need to reimplement them
```python
from langchain_dev_utils import (
    create_write_plan_tool,
    create_update_plan_tool,
    create_write_note_tool,
    create_ls_tool,
    create_query_note_tool,
    create_update_note_tool,
)

plan_tools = [create_write_plan_tool(), create_update_plan_tool()]
note_tools = [
    create_write_note_tool(),
    create_ls_tool(),
    create_query_note_tool(),
    create_update_note_tool(),
]
```
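To see what the note tools manage conceptually, here is a plain-Python sketch of a name-to-content note store of the kind such tools keep in graph state. The function and field names are illustrative, not the library's actual tool or State definitions:

```python
# Hypothetical sketch of a note store: the kind of structure the
# write/ls/query note tools read and update in graph state.
notes: dict = {}


def write_note(name: str, content: str) -> str:
    notes[name] = content
    return f"Saved note '{name}'"


def ls() -> list:
    # List note names, like a directory listing of the store.
    return sorted(notes)


def query_note(name: str) -> str:
    return notes.get(name, "")


write_note("todo", "ship v0.2")
print(ls())                # ['todo']
print(query_note("todo"))  # ship v0.2
```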
### 5. Graph Orchestration

- Composes multiple `StateGraph`s in sequential or parallel fashion
- Supports complex multi-agent workflows:
  - `sequential_pipeline()`: executes subgraphs sequentially
  - `parallel_pipeline()`: executes subgraphs in parallel with dynamic branching (via the `Send` API)
- Allows specifying entry nodes and custom state/input/output schemas
```python
from typing import TypedDict

from langchain_dev_utils import parallel_pipeline, Send


class State(TypedDict):
    a: str
    results: list


def branches_fn(state: State):
    return [
        Send("graph1", arg={"a": state["a"]}),
        Send("graph2", arg={"a": state["a"]}),
    ]


graph = parallel_pipeline(
    sub_graphs=[graph1, graph2, graph3],
    state_schema=State,
    branches_fn=branches_fn,
)
```
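The contrast between the two composition modes can be sketched with plain functions standing in for compiled subgraphs. This is illustrative only, not the library API:

```python
# Conceptual sketch: plain functions in place of compiled StateGraphs.
def run_sequential(state, steps):
    # Each subgraph consumes the state produced by the previous one.
    for step in steps:
        state = step(state)
    return state


def run_parallel(state, branches):
    # Every branch sees the same input; outputs are merged afterwards,
    # mirroring the fan-out that Send enables in parallel_pipeline.
    return {**state, "results": [branch(state) for branch in branches]}


def shout(state):
    return {**state, "a": state["a"].upper()}


print(run_sequential({"a": "hi"}, [shout, shout]))            # {'a': 'HI'}
print(run_parallel({"a": "hi"}, [lambda s: s["a"].upper()]))  # {'a': 'hi', 'results': ['HI']}
```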
### 6. Prebuilt Agent

`create_agent` provides functionality similar to LangGraph's `create_react_agent`, but accepts only string-based model identifiers (loaded via `load_chat_model`), simplifying model configuration.
```python
import asyncio
import datetime

from langchain_core.tools import tool
from langchain_dev_utils.prebuilt import create_agent


@tool
def get_current_time() -> str:
    """Get the current timestamp."""
    return str(datetime.datetime.now().timestamp())


# Only string-format model identifiers are supported
agent = create_agent(model="dashscope:qwen-flash", tools=[get_current_time])


async def main():
    response = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "What time is it now?"}]}
    )
    print(response)


asyncio.run(main())
```
## 💬 Join the Community

- 🐙 GitHub Repository: browse the source code and submit pull requests
- 🐞 Issue Tracker: report bugs or suggest improvements
- 💡 Contributions are welcome, whether code, documentation, or usage examples. Help us build a stronger ecosystem of practical LangChain development tools!