
A practical utility library for LangChain and LangGraph development

Project description

LangChain Dev Utils

Chinese documentation

This toolkit provides encapsulated utilities for developers building large language model applications with LangChain and LangGraph, helping them work more efficiently.

Installation and Usage

  1. Using pip
pip install -U langchain-dev-utils
  2. Using poetry
poetry add langchain-dev-utils
  3. Using uv
uv add langchain-dev-utils

Function Modules

1. Extended Model Loading Functionality

While the official init_chat_model function is very useful, it has limited support for model providers. This toolkit provides extended model loading functionality that allows registration and use of more model providers.

Core Functions

  • register_model_provider: Register a model provider
  • load_chat_model: Load a chat model

register_model_provider Parameter Description

  • provider_name: Provider name, requires a custom name
  • chat_model: A ChatModel class or a string. If it is a string, it must name a provider supported by the official init_chat_model (e.g., openai, anthropic), in which case init_chat_model is called under the hood
  • base_url: Optional base URL. Recommended when chat_model is a string

Usage Example

from langchain_dev_utils.chat_model import register_model_provider, load_chat_model
from langchain_qwq import ChatQwen
from dotenv import load_dotenv

load_dotenv()

# Register custom model providers
register_model_provider("dashscope", ChatQwen)
register_model_provider("openrouter", "openai", base_url="https://openrouter.ai/api/v1")

# Load models
model = load_chat_model(model="dashscope:qwen-flash")
print(model.invoke("Hello!"))

model = load_chat_model(model="openrouter:moonshotai/kimi-k2-0905")
print(model.invoke("Hello!"))

Note: Because the function is backed by a global dictionary, all model providers must be registered at application startup. Do not modify the registry at runtime, or multi-threaded synchronization issues may occur.
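The startup-only registration pattern described above can be sketched with a plain module-level dictionary. This is a simplified, hypothetical stand-in for the library's internal registry (the names `register_provider` and `load` below are illustrative, not the library's API):

```python
# Simplified sketch of a global provider registry (illustrative only;
# the actual internals of langchain-dev-utils may differ).
_PROVIDER_REGISTRY: dict = {}

def register_provider(name: str, factory) -> None:
    """Register a provider at startup. The dict is not guarded by a lock,
    so mutating it after threads start reading would be unsafe."""
    if name in _PROVIDER_REGISTRY:
        raise ValueError(f"provider {name!r} already registered")
    _PROVIDER_REGISTRY[name] = factory

def load(spec: str):
    """Resolve a 'provider:model' spec against the registry."""
    provider, model = spec.split(":", 1)
    return _PROVIDER_REGISTRY[provider](model)

# Register once at startup, then only read afterwards.
register_provider("echo", lambda model: f"loaded {model}")
print(load("echo:demo-model"))  # -> loaded demo-model
```

Because reads never mutate the dictionary, concurrent `load` calls are safe once registration has finished, which is why the library asks for startup-time registration.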

2. Reasoning Content Processing Functionality

Provides utility functions for processing model reasoning content, supporting both synchronous and asynchronous operations.

Core Functions

  • convert_reasoning_content_for_ai_message: Convert reasoning content for a single AI message
  • convert_reasoning_content_for_chunk_iterator: Convert reasoning content for streaming response message chunk iterator
  • aconvert_reasoning_content_for_ai_message: Asynchronously convert reasoning content for a single AI message
  • aconvert_reasoning_content_for_chunk_iterator: Asynchronously convert reasoning content for streaming response message chunk iterator

Usage Example

# Synchronously process reasoning content
from langchain_dev_utils.content import convert_reasoning_content_for_ai_message

response = model.invoke("Please solve this math problem")
converted_response = convert_reasoning_content_for_ai_message(response, think_tag=("<think>", "</think>"))

# Stream processing reasoning content
from langchain_dev_utils.content import convert_reasoning_content_for_chunk_iterator

for chunk in convert_reasoning_content_for_chunk_iterator(model.stream("Please solve this math problem"), think_tag=("<think>", "</think>")):
    print(chunk.content, end="", flush=True)
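As a rough illustration of what such a conversion involves, the sketch below moves a separate `reasoning_content` field into the visible content, wrapped in the given think tags. This is a hypothetical reimplementation over a plain dict; the library's real functions operate on LangChain `AIMessage` objects and may behave differently:

```python
# Hypothetical sketch of reasoning-content conversion (not the library's code).
def convert_reasoning(message: dict, think_tag: tuple) -> dict:
    """Prepend the reasoning text to the content, wrapped in think tags."""
    reasoning = message.get("reasoning_content")
    if not reasoning:
        return message
    start, end = think_tag
    return {
        "content": f"{start}{reasoning}{end}{message['content']}",
        "reasoning_content": "",
    }

msg = {"content": "The answer is 4.", "reasoning_content": "2 + 2 = 4"}
converted = convert_reasoning(msg, think_tag=("<think>", "</think>"))
print(converted["content"])  # -> <think>2 + 2 = 4</think>The answer is 4.
```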

3. Embeddings Model Loading Functionality

Provides extended embeddings model loading functionality, similar to the model loading functionality.

Core Functions

  • register_embeddings_provider: Register an embeddings model provider
  • load_embeddings: Load an embeddings model

Usage Example

from langchain_dev_utils.embbedings import register_embeddings_provider, load_embeddings

# Register embeddings model provider
register_embeddings_provider("openai", "openai", base_url="https://api.openai.com/v1")

# Load embeddings model
embeddings = load_embeddings("openai:text-embedding-ada-002")

4. Tool Calling Detection Functionality

Provides a simple function to detect whether a message contains tool calls.

Core Functions

  • has_tool_calling: Detect whether a message contains tool calls

Usage Example

from langchain_dev_utils.has_tool_calling import has_tool_calling

if has_tool_calling(message):
    # Handle tool calling logic
    pass
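Conceptually, such a check amounts to testing whether the message carries a non-empty `tool_calls` attribute. The stand-in below illustrates this assumed behavior with a fake message class; the library's actual implementation may differ:

```python
from dataclasses import dataclass, field

# Minimal stand-in for a LangChain AIMessage, for illustration only.
@dataclass
class FakeAIMessage:
    content: str = ""
    tool_calls: list = field(default_factory=list)

def has_tool_calling_sketch(message) -> bool:
    """Return True when the message carries at least one tool call."""
    return bool(getattr(message, "tool_calls", None))

plain = FakeAIMessage(content="hello")
with_call = FakeAIMessage(
    tool_calls=[{"name": "search", "args": {"q": "weather"}, "id": "1"}]
)
print(has_tool_calling_sketch(plain))      # -> False
print(has_tool_calling_sketch(with_call))  # -> True
```

A check like this is typically used in a LangGraph conditional edge to route between a tool-execution node and the end of the graph.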

Test

All of the utility functions in this project are covered by tests; you can also clone the repository and run them yourself.

git clone https://github.com/TBice123123/langchain-dev-utils.git
cd langchain-dev-utils
uv sync --group test
uv run pytest .

Project details


This version

0.1.1

Download files

Source distribution: langchain_dev_utils-0.1.1.tar.gz (64.5 kB)

  • SHA256: e1118019a2c30b8f04e1a881ec9d5713efb758de467a3018dca3a5e16b9e963f
  • MD5: d10cca1cf2cdba8db7458be505eda7bf
  • BLAKE2b-256: 1f70ba06d4b5b7ac8059791bb3a31c078e29d55e042f21770f8cc98df4d272b9

Built distribution: langchain_dev_utils-0.1.1-py3-none-any.whl (8.2 kB)

  • SHA256: 5afcaef483a405578c541e0573d5ae1c78ff058ce702c19124682ad0b9b2a6c0
  • MD5: a1906647f61ab156da0ee68e54870cad
  • BLAKE2b-256: ff9d52d12e87616ae0d1ae327e6fcdd9cc3b197102c449d072c4e97b3d53131d
