
Custom Langchain chat models for various large language model (LLM) service providers

Project description

Langchain Custom Models

This repository provides a collection of custom ChatModel integrations for Langchain, enabling support for various Large Language Model (LLM) service providers. The goal is to offer a unified interface for models that are not yet officially supported by Langchain.

Currently, the following provider is supported:

  • Volcengine Ark

Features

  • Seamless Integration: Drop-in replacement for any Langchain ChatModel.
  • Volcengine Ark Support: Full support for ChatVolcEngine to interact with Volcengine's Ark API.
  • Standard Interface: Works with standard Langchain message types (SystemMessage, HumanMessage, AIMessage).

Installation

You can install the package directly from this repository:

pip install git+https://github.com/Hellozaq/langchain-custom-models.git

Additionally, ensure you have the Volcengine Ark SDK installed (the quotes keep shells such as zsh from interpreting the square brackets):

pip install "volcengine-python-sdk[ark]"

Usage

ChatVolcEngine

To use the ChatVolcEngine model, you need to provide your Volcengine Ark API key. The recommended approach is to use a .env file to manage your credentials securely.

1. Create a .env file

In your project's root directory, create a file named .env and add your API key:

VOLCANO_API_KEY="your-ark-api-key"
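As the Parameters section notes, an explicitly passed ark_api_key takes precedence and the VOLCANO_API_KEY environment variable is only a fallback. A minimal sketch of that resolution logic, using a hypothetical helper named resolve_api_key for illustration (not the library's actual internals):

```python
import os

def resolve_api_key(ark_api_key=None):
    """Hypothetical helper: an explicitly passed key wins,
    otherwise fall back to the VOLCANO_API_KEY environment variable."""
    key = ark_api_key or os.getenv("VOLCANO_API_KEY")
    if not key:
        raise ValueError("Provide ark_api_key or set VOLCANO_API_KEY")
    return key
```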

2. Load Credentials and Use the Model

Now you can use the python-dotenv package (pip install python-dotenv) to load the API key from the .env file into your environment.

Here is a basic example:

from dotenv import load_dotenv
from langchain_custom_models.ChatVolcEngine import ChatVolcEngine
from langchain_core.messages import HumanMessage, SystemMessage

# Load environment variables from .env file
load_dotenv()

# Initialize the chat model
# The API key is automatically read from the VOLCANO_API_KEY environment variable.
# Replace 'your-model-id' with the actual model ID from Volcengine Ark, e.g., 'deepseek-v3-250324'
llm = ChatVolcEngine(model="your-model-id")

# Prepare messages
messages = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Hello, who are you?"),
]

# Get a response
response = llm.invoke(messages)

print(response.content)

Parameters

  • model (str): Required. The model ID from Volcengine Ark (e.g., deepseek-v3-250324).
  • ark_api_key (Optional[str]): Your Volcengine Ark API key. If not provided, it will be read from the VOLCANO_API_KEY environment variable.
  • max_tokens (int): The maximum number of tokens to generate. Defaults to 4096.
  • temperature (float): Controls the randomness of the output. Defaults to 0.7.
  • top_p (float): Nucleus sampling parameter. Defaults to 1.0.
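To make the defaults above concrete, here is a sketch of how these parameters could be bundled into a chat-completion request body. The field names follow the common OpenAI-style schema that Ark's chat API resembles; build_chat_request is an illustrative stand-in, not the library's actual internals:

```python
def build_chat_request(model, messages, max_tokens=4096, temperature=0.7, top_p=1.0):
    """Illustrative only: bundle the documented parameters and their
    defaults into an OpenAI-style chat-completion payload."""
    return {
        "model": model,              # required, e.g. "deepseek-v3-250324"
        "messages": messages,        # e.g. [{"role": "user", "content": "Hi"}]
        "max_tokens": max_tokens,    # default 4096
        "temperature": temperature,  # default 0.7
        "top_p": top_p,              # default 1.0
    }
```

Lowering temperature (e.g. to 0.2) makes output more deterministic, while top_p restricts sampling to the smallest set of tokens whose cumulative probability reaches the given value; it is usually advisable to tune one of the two, not both.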

Contributing

Contributions are welcome! If you would like to add support for a new LLM provider or improve existing integrations, please feel free to open a pull request.

License

This project is licensed under the MIT License. See the LICENSE file for details.


Download files

Source Distribution

  • langchain_custom_models-0.1.1.tar.gz (5.1 kB)

Built Distribution

  • langchain_custom_models-0.1.1-py3-none-any.whl (5.8 kB)

File details: langchain_custom_models-0.1.1.tar.gz

File metadata

  • Size: 5.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.10.4

File hashes

  • SHA256: 516715f234409c2c4dd19784c7c86d81c83e126ecedcbdb365062d6f5bed1c46
  • MD5: 639fbb5597d9aca0eff4e749f3058a70
  • BLAKE2b-256: ffc29341dddf4a426a26fc60494a5f67c068da1d57b03d5049e6141691fde227

File details: langchain_custom_models-0.1.1-py3-none-any.whl

File hashes

  • SHA256: 42068f638e4af01b0d9273dd945319dab9f6e2f6d36619bb1409f29de911f4b4
  • MD5: 9e314deb898831695b8233380a20cee8
  • BLAKE2b-256: 87df23ca97e3a7c6914e23ad25e43bbaea4265b6bf20ac4447c774b884359892
