
Custom Langchain chat models for various large language model (LLM) service providers

Project description

Langchain Custom Models

This repository provides a collection of custom ChatModel integrations for Langchain, enabling support for various Large Language Model (LLM) service providers. The goal is to offer a unified interface for models that are not yet officially supported by Langchain.

Currently, the following provider is supported:

  • Volcengine Ark

Features

  • Seamless Integration: Drop-in replacement for any Langchain ChatModel.
  • Volcengine Ark Support: Full support for ChatVolcEngine to interact with Volcengine's Ark API.
  • Standard Interface: Works with standard Langchain message types (SystemMessage, HumanMessage, AIMessage).

Installation

You can install the package directly from this repository:

pip install git+https://github.com/Hellozaq/langchain-custom-models.git

Additionally, ensure you have the Volcengine Ark SDK installed:

pip install "volcengine-python-sdk[ark]"

Usage

ChatVolcEngine

To use the ChatVolcEngine model, you need to provide your Volcengine Ark API key. The recommended approach is to use a .env file to manage your credentials securely.

1. Create a .env file

In your project's root directory, create a file named .env and add your API key:

VOLCANO_API_KEY="your-ark-api-key"
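If you would rather not add another dependency, the same lookup can be done with the standard library alone. The sketch below is a minimal .env loader; the load_env_file helper is hypothetical (not part of this package) and handles only simple KEY="value" lines:

```python
import os

def load_env_file(path=".env"):
    """Minimal .env loader: copies KEY="value" lines into os.environ.

    Skips blank lines and comments; does not support multi-line values
    or variable expansion (use python-dotenv for those).
    """
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Existing environment variables take precedence over the file.
            os.environ.setdefault(key.strip(), value.strip().strip('"').strip("'"))

if os.path.exists(".env"):
    load_env_file()
api_key = os.environ.get("VOLCANO_API_KEY")
```

This is only a convenience sketch; python-dotenv remains the more robust choice.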

2. Load Credentials and Use the Model

Now you can use the python-dotenv package (install it with pip install python-dotenv) to load the API key from the .env file into your environment.

Here is a basic example:

from dotenv import load_dotenv
from langchain_custom_models import ChatVolcEngine
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from .env file
load_dotenv()

# Initialize the chat model
# The API key is automatically read from the VOLCANO_API_KEY environment variable.
# Replace 'your-model-id' with the actual model ID from Volcengine Ark, e.g., 'deepseek-v3-250324'
llm = ChatVolcEngine(model="your-model-id")

# Prepare messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hello, who are you?"),
]

# Get a response
response = llm.invoke(messages)

print(response.content)

Parameters

  • model (str): Required. The model ID from Volcengine Ark (e.g., deepseek-v3-250324).
  • ark_api_key (Optional[str]): Your Volcengine Ark API key. If not provided, it will be read from the VOLCANO_API_KEY environment variable.
  • max_tokens (int): The maximum number of tokens to generate. Defaults to 4096.
  • temperature (float): Controls the randomness of the output. Defaults to 0.7.
  • top_p (float): Nucleus sampling parameter. Defaults to 1.0.
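To see how the defaults above interact with explicit settings, here is a small sketch that merges the documented defaults with overrides before constructing the model. The DEFAULTS and overrides names are illustrative, not part of the package API:

```python
# Documented defaults for ChatVolcEngine (see the parameter list above).
DEFAULTS = {"max_tokens": 4096, "temperature": 0.7, "top_p": 1.0}

# Example overrides for shorter, more deterministic output.
overrides = {"temperature": 0.0, "max_tokens": 1024}

# Explicit values win; unspecified parameters keep their defaults.
kwargs = {**DEFAULTS, **overrides}

# These keyword arguments would then be passed through, e.g.:
#   llm = ChatVolcEngine(model="your-model-id", **kwargs)
# where "your-model-id" is a placeholder for a real Ark model ID.
```

Passing ark_api_key is optional throughout: when omitted, the key is read from the VOLCANO_API_KEY environment variable as described above.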

Contributing

Contributions are welcome! If you would like to add support for a new LLM provider or improve existing integrations, please feel free to open a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.

Download files

Download the file for your platform.

Source Distribution

langchain_custom_models-0.1.2.tar.gz (5.1 kB)

Uploaded Source

Built Distribution


langchain_custom_models-0.1.2-py3-none-any.whl (5.8 kB)

Uploaded Python 3

File details

Details for the file langchain_custom_models-0.1.2.tar.gz.

File metadata

  • Download URL: langchain_custom_models-0.1.2.tar.gz
  • Upload date:
  • Size: 5.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.4

File hashes

Hashes for langchain_custom_models-0.1.2.tar.gz
  • SHA256: 9afd5ccdf72c080ab628280b6caa187e01fb57545139fe590367f861bece9072
  • MD5: 5f9853945c86b01d59970bc9983f79bd
  • BLAKE2b-256: 9097047e616946651f05d6273631a5954558c15c1bd1a69b78f0828c771ee42f


File details

Details for the file langchain_custom_models-0.1.2-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_custom_models-0.1.2-py3-none-any.whl
  • SHA256: 6b814521c7a2384270da95cad528c90d8bdedf816885c20840145aa7576daa3a
  • MD5: c9111ecd8f0c185d34e0a908d1c6bec4
  • BLAKE2b-256: 16399d323fc5f5ba11214d67a20ff147db96e1b23ce0bbdd7b73fe94139f7b9c

