LangChain Dev Utils
A practical utility library for LangChain and LangGraph development.
This toolkit provides encapsulated utility functions for developers building large language model applications with LangChain and LangGraph, helping them work more efficiently.
Installation and Usage
- Using pip: `pip install -U langchain-dev-utils`
- Using poetry: `poetry add langchain-dev-utils`
- Using uv: `uv add langchain-dev-utils`
Functional Modules
Currently divided into the following three main modules:
1. Instantiating Model Objects
While the official init_chat_model and init_embeddings functions are convenient to use, they support a relatively limited number of model providers. To address this, we provide register_model_provider and register_embeddings_provider functions. Through a unified registration and loading mechanism, developers can flexibly register any model provider, enabling broader model support. At the same time, load_chat_model and load_embeddings maintain the same simplicity in usage as the official functions.
(1) ChatModel Class
Core Functions
- `register_model_provider`: Register a model provider
- `load_chat_model`: Load a chat model
Parameters for register_model_provider
- `provider_name`: Provider name; must be a custom name
- `chat_model`: Either a ChatModel class or a string. If it is a string, it must be a provider supported by the official `init_chat_model` (e.g., `openai`, `anthropic`), in which case `init_chat_model` will be called.
- `base_url`: Optional base URL. Recommended when `chat_model` is a string.
Parameters for load_chat_model
- `model`: Model name, in the format `model_name` or `provider_name:model_name`
- `model_provider`: Optional model provider name. If not provided, the provider name must be included in the `model` parameter.
- `kwargs`: Optional additional model parameters, such as `temperature`, `api_key`, `stop`, etc.
These three parameters are consistent with those of the official `init_chat_model` function.
Note: Passing the `configurable_fields` and `config_prefix` parameters is currently not supported.
Usage Example
```python
from langchain_dev_utils import register_model_provider, load_chat_model
from langchain_qwq import ChatQwen
from dotenv import load_dotenv

load_dotenv()

# Register custom model providers
register_model_provider("dashscope", ChatQwen)
register_model_provider("openrouter", "openai", base_url="https://openrouter.ai/api/v1")

# Load models
model = load_chat_model(model="dashscope:qwen-flash")
print(model.invoke("Hello"))

model = load_chat_model(model="openrouter:moonshotai/kimi-k2-0905")
print(model.invoke("Hello"))
```
Important: Because the underlying implementation uses a global dictionary, all model providers must be registered at application startup. Avoid modifying the registry at runtime to prevent multi-threaded synchronization issues.
Recommendation: We suggest placing the register_model_provider calls in your application's __init__.py file.
For example, if you have the following LangGraph project structure:
```
langgraph-project/
├── src
│   ├── __init__.py
│   └── graphs
│       ├── __init__.py   # Call register_model_provider here
│       ├── graph1
│       └── graph2
```
(2) Embeddings Class
Core Functions
- `register_embeddings_provider`: Register an embeddings model provider
- `load_embeddings`: Load an embeddings model
Parameters for register_embeddings_provider
- `provider_name`: Provider name; must be a custom name
- `embeddings_model`: Either an Embeddings class or a string. If it is a string, it must be a provider supported by the official `init_embeddings` (e.g., `openai`, `anthropic`), in which case `init_embeddings` will be called.
- `base_url`: Optional base URL. Recommended when `embeddings_model` is a string.
Parameters for load_embeddings
- `model`: Model name, in the format `model_name` or `provider_name:model_name`
- `provider`: Optional model provider name. If not provided, the provider name must be included in the `model` parameter.
- `kwargs`: Optional additional model parameters, such as `chunk_size`, `api_key`, `dimensions`, etc.
These three parameters are consistent with those of the official `init_embeddings` function.
Usage Example
```python
from langchain_dev_utils import register_embeddings_provider, load_embeddings

register_embeddings_provider(
    "dashscope", "openai", base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)

embeddings = load_embeddings("dashscope:text-embedding-v4")
print(embeddings.embed_query("hello world"))
```
Important: Similarly, since the underlying implementation uses a global dictionary, all embedding model providers must be registered at application startup, and no modifications should be made afterward to avoid multi-thread concurrency issues.
As before, we recommend placing register_embeddings_provider in your application's __init__.py file. Refer to the previous section on registering model providers for details.
2. Message Processing
(1) Merging Reasoning Content
Provides functionality to merge the reasoning_content returned by reasoning models into the content field of AI messages.
Core Functions
- `convert_reasoning_content_for_ai_message`: Merge `reasoning_content` from an AIMessage into `content`
- `convert_reasoning_content_for_chunk_iterator`: Merge `reasoning_content` for message chunk iterators in streaming responses
- `aconvert_reasoning_content_for_chunk_iterator`: Asynchronous version of `convert_reasoning_content_for_chunk_iterator` for async streaming
Parameters
- `model_response`: The AI message response from the model
- `think_tag`: A tuple containing the start and end tags for reasoning content (e.g., `("<think>", "</think>")`)
Usage Example
```python
from typing import cast

from langchain_core.messages import AIMessage
from langchain_dev_utils import (
    convert_reasoning_content_for_ai_message,
    convert_reasoning_content_for_chunk_iterator,
)

# Synchronous processing of reasoning content
response = model.invoke("Hello")
converted_response = convert_reasoning_content_for_ai_message(
    cast(AIMessage, response), think_tag=("<!--THINK-->", "<!--/THINK-->")
)
print(converted_response.content)

# Streaming processing of reasoning content
for chunk in convert_reasoning_content_for_chunk_iterator(
    model.stream("Hello"), think_tag=("<!--THINK-->", "<!--/THINK-->")
):
    print(chunk.content, end="", flush=True)
```
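The async variant is used the same way; a minimal sketch, assuming `model` is a chat model loaded as in the earlier examples:

```python
import asyncio

from langchain_dev_utils import aconvert_reasoning_content_for_chunk_iterator


async def main() -> None:
    # Consume the async stream, merging reasoning content into each chunk
    async for chunk in aconvert_reasoning_content_for_chunk_iterator(
        model.astream("Hello"), think_tag=("<!--THINK-->", "<!--/THINK-->")
    ):
        print(chunk.content, end="", flush=True)


asyncio.run(main())
```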
(2) Merging AI Message Chunks
Provides utility functions to merge AI message chunks, combining multiple AI message chunks into a single AI message.
Core Function
merge_ai_message_chunk: Merge AI message chunks
Parameters
chunks: List of AI message chunks
Usage Example
```python
from langchain_dev_utils import merge_ai_message_chunk

chunks = []
for chunk in model.stream("Hello"):
    chunks.append(chunk)

merged_message = merge_ai_message_chunk(chunks)
print(merged_message)
```
(3) Detecting if Message Contains Tool Calls
Provides a simple function to detect whether a message contains tool calls.
Core Function
has_tool_calling: Check if a message contains tool calls
Parameters
message: An AIMessage object
Usage Example
```python
import datetime
from typing import cast

from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_dev_utils import has_tool_calling


@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())


response = model.bind_tools([get_current_time]).invoke("What is the current time?")
print(has_tool_calling(cast(AIMessage, response)))
```
(4) Parsing Tool Call Arguments
Provides a utility function to parse tool call arguments, extracting them from the message.
Core Function
parse_tool_calling: Parse tool call arguments
Parameters
- `message`: An AIMessage object
- `first_tool_call_only`: Whether to parse only the first tool call. If `True`, returns a single tuple; if `False`, returns a list of tuples.
Usage Example
```python
import datetime
from typing import cast

from langchain_core.messages import AIMessage
from langchain_core.tools import tool
from langchain_dev_utils import has_tool_calling, parse_tool_calling


@tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())


response = model.bind_tools([get_current_time]).invoke("What is the current time?")
if has_tool_calling(cast(AIMessage, response)):
    name, args = parse_tool_calling(
        cast(AIMessage, response), first_tool_call_only=True
    )
    print(name, args)
```
(5) Formatting Messages
Formats a list composed of Documents, Messages, or strings into a single string.
Core Function
message_format: Format messages
Parameters
- `inputs`: A list containing any of the following types:
  - `langchain_core.messages`: HumanMessage, AIMessage, SystemMessage, ToolMessage
  - `langchain_core.documents.Document`
  - `str`
- `separator`: String used to join content. Default is `"-"`.
- `with_num`: If `True`, adds a numbered prefix to each item (e.g., `"1. Hello"`). Default is `False`.
Usage Example
```python
from langchain_core.documents import Document
from langchain_dev_utils import message_format

messages = [
    Document(page_content="Document 1"),
    Document(page_content="Document 2"),
    Document(page_content="Document 3"),
    Document(page_content="Document 4"),
]

formatted_messages = message_format(messages, separator="\n", with_num=True)
print(formatted_messages)
```
3. Tool Enhancement
(1) Adding Interrupts to Tool Calls
Provides utility functions to add human-in-the-loop review support to tool calls, enabling human review during tool execution.
Core Functions
- `human_in_the_loop`: Add human-in-the-loop review to tool calls
- `human_in_the_loop_async`: Asynchronous version of `human_in_the_loop`
Parameters
- `func`: The function to be decorated. Do not pass this parameter directly.
- `interrupt_config`: Configuration for human interruption.
Usage Example
```python
import datetime

from langchain_core.tools import tool
from langchain_dev_utils import human_in_the_loop


@human_in_the_loop
@tool  # Can also be used without @tool
def get_current_time() -> str:
    """Get the current timestamp"""
    return str(datetime.datetime.now().timestamp())
```
Testing
All utility functions in this project have been tested. You can also clone the repository to run the tests:
```bash
git clone https://github.com/TBice123123/langchain-dev-utils.git
cd langchain-dev-utils
uv sync --group test
uv run pytest .
```