A practical utility library for LangChain and LangGraph development
LangChain Dev Utils
This toolkit provides encapsulated utilities for developers building large language model applications with LangChain and LangGraph, helping them work more efficiently.
Installation and Usage
- Using pip
pip install -U langchain-dev-utils
- Using poetry
poetry add langchain-dev-utils
- Using uv
uv add langchain-dev-utils
Function Modules
1. Extended Model Loading Functionality
While the official init_chat_model function is very useful, it has limited support for model providers. This toolkit provides extended model loading functionality that allows registration and use of more model providers.
Core Functions
- register_model_provider: Register a model provider
- load_chat_model: Load a chat model
register_model_provider Parameter Description
- provider_name: Provider name; requires a custom name
- chat_model: ChatModel class or string. If it is a string, it must be a provider supported by the official init_chat_model (e.g., openai, anthropic); in this case, the init_chat_model function will be called
- base_url: Optional base URL. Recommended when chat_model is a string
Usage Example
from langchain_dev_utils.chat_model import register_model_provider, load_chat_model
from langchain_qwq import ChatQwen
from dotenv import load_dotenv
load_dotenv()
# Register custom model providers
register_model_provider("dashscope", ChatQwen)
register_model_provider("openrouter", "openai", base_url="https://openrouter.ai/api/v1")
# Load models
model = load_chat_model(model="dashscope:qwen-flash")
print(model.invoke("Hello!"))
model = load_chat_model(model="openrouter:moonshotai/kimi-k2-0905")
print(model.invoke("Hello!"))
Note: Because the underlying implementation is a global dictionary, all model providers must be registered at application startup. Do not modify the registry at runtime, otherwise multi-threaded synchronization issues may occur.
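The registration pattern the note describes can be illustrated with a minimal sketch of a module-level registry populated once at startup. The names here (`register_provider`, `lookup_provider`, `_PROVIDERS`) are hypothetical stand-ins, not the library's actual internals:

```python
from typing import Any, Dict, Optional

# Illustrative stand-in for the global dictionary the note describes;
# these names are hypothetical, not the library's real internals.
_PROVIDERS: Dict[str, Dict[str, Any]] = {}


def register_provider(name: str, chat_model: Any, base_url: Optional[str] = None) -> None:
    # Populate once at application startup. Mutating this module-level
    # dict from multiple threads at runtime would need explicit locking,
    # which is why the note advises against runtime modification.
    _PROVIDERS[name] = {"chat_model": chat_model, "base_url": base_url}


def lookup_provider(name: str) -> Dict[str, Any]:
    # Read-only lookups after startup are safe without a lock.
    return _PROVIDERS[name]


register_provider("openrouter", "openai", base_url="https://openrouter.ai/api/v1")
print(lookup_provider("openrouter"))
```

Registering everything at import time (before any worker threads start) sidesteps the synchronization issue entirely.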
2. Reasoning Content Processing Functionality
Provides utility functions for processing model reasoning content, supporting both synchronous and asynchronous operations.
Core Functions
- convert_reasoning_content_for_ai_message: Convert reasoning content for a single AI message
- convert_reasoning_content_for_chunk_iterator: Convert reasoning content for a streaming response message chunk iterator
- aconvert_reasoning_content_for_ai_message: Asynchronously convert reasoning content for a single AI message
- aconvert_reasoning_content_for_chunk_iterator: Asynchronously convert reasoning content for a streaming response message chunk iterator
Usage Example
# Synchronously process reasoning content
from langchain_dev_utils.content import convert_reasoning_content_for_ai_message
response = model.invoke("Please solve this math problem")
converted_response = convert_reasoning_content_for_ai_message(response, think_tag=("<think>", "</think>"))
# Stream processing reasoning content
from langchain_dev_utils.content import convert_reasoning_content_for_chunk_iterator
for chunk in convert_reasoning_content_for_chunk_iterator(model.stream("Please solve this math problem"), think_tag=("<think>", "</think>")):
    print(chunk.content, end="", flush=True)
3. Embeddings Model Loading Functionality
Provides extended embeddings model loading functionality, similar to the chat model loading described above.
Core Functions
- register_embeddings_provider: Register an embeddings model provider
- load_embeddings: Load an embeddings model
Usage Example
from langchain_dev_utils.embbedings import register_embeddings_provider, load_embeddings
# Register embeddings model provider
register_embeddings_provider("openai", "openai", base_url="https://api.openai.com/v1")
# Load embeddings model
embeddings = load_embeddings("openai:text-embedding-ada-002")
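The `provider:model` string passed to load_embeddings follows the same convention as load_chat_model. A reasonable assumption is that it splits on the first colon only, since model identifiers such as `moonshotai/kimi-k2-0905` may contain further separators; the helper below is illustrative, not the library's parser:

```python
def split_model_string(spec: str) -> tuple[str, str]:
    # Split "provider:model" on the first colon only, since model IDs
    # may themselves contain "/" or other separator characters.
    provider, _, model = spec.partition(":")
    return provider, model


print(split_model_string("openai:text-embedding-ada-002"))
# → ('openai', 'text-embedding-ada-002')
```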
4. Tool Calling Detection Functionality
Provides a simple function to detect whether a message contains tool calls.
Core Functions
has_tool_calling: Detect whether a message contains tool calls
Usage Example
from langchain_dev_utils.has_tool_calling import has_tool_calling
if has_tool_calling(message):
    # Handle tool calling logic
    pass
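LangChain AIMessage objects carry a tool_calls list when the model requested a tool invocation, so detection plausibly amounts to testing whether that list is non-empty. The sketch below shows that assumed behavior with a minimal stand-in message class; it is not the library's actual implementation:

```python
from dataclasses import dataclass, field


# Minimal stand-in for a LangChain AIMessage, which exposes a
# tool_calls list when the model asked for a tool invocation.
@dataclass
class FakeAIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)


def has_tool_calling_sketch(message) -> bool:
    # Assumed behavior: a message "contains tool calls" when its
    # tool_calls attribute exists and is non-empty.
    return bool(getattr(message, "tool_calls", None))


print(has_tool_calling_sketch(FakeAIMessage(tool_calls=[{"name": "search", "args": {}}])))
# → True
```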
Test
All utility functions in this project are covered by tests; you can also clone the repository and run them yourself.
git clone https://github.com/TBice123123/langchain-dev-utils.git
cd langchain-dev-utils
uv sync --group test
uv run pytest .
File details
Details for the file langchain_dev_utils-0.1.0.tar.gz.
File metadata
- Download URL: langchain_dev_utils-0.1.0.tar.gz
- Upload date:
- Size: 63.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 9febb2836d39b051a6e072209e2c32a77cf3b11a28c3a3680f59222ab6a844ff |
| MD5 | 3b0a8ff20e18448df1ea7cf9acbbf789 |
| BLAKE2b-256 | 5def3c38d224133628b72002a9d947d6df33ba2a9072ef82aaea0f5a00e26857 |
File details
Details for the file langchain_dev_utils-0.1.0-py3-none-any.whl.
File metadata
- Download URL: langchain_dev_utils-0.1.0-py3-none-any.whl
- Upload date:
- Size: 7.2 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.6.3
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | d076ec2b26017428d1fc25f0dc017ea574bd2fe7b4859dfbc158b5c6f7496213 |
| MD5 | 9214553cd3cefa0a64e0c50a2d7556a5 |
| BLAKE2b-256 | 55d9e06e7bbb3fe64d2e945943775555e1850c807f97029eb551bd7afd72d93e |