Yandex GPT Support for LangChain
yandex-chain - LangChain-compatible integrations with YandexGPT and YandexGPT Embeddings
This library is a community-maintained Python package that provides support for the Yandex GPT LLM and Embeddings within the LangChain framework.
Currently, Yandex GPT is in the preview stage, so this library may occasionally break. Please use it at your own risk!
What's Included
The library includes the following three main classes:
- YandexLLM is a class representing the YandexGPT text-generation model.
- ChatYandexGPT exposes the same model through a chat interface that expects a list of messages as input.
- YandexEmbeddings represents the YandexGPT Embeddings service.
Usage
You can use YandexLLM in the following manner:
from yandex_chain import YandexLLM
LLM = YandexLLM(folder_id="...", api_key="...")
print(LLM("How are you today?"))
You can use YandexEmbeddings to compute embedding vectors:
from yandex_chain import YandexEmbeddings
embeddings = YandexEmbeddings(...)
print(embeddings("How are you today?"))
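Because YandexEmbeddings is LangChain-compatible, you can also call the standard LangChain embeddings methods. Here is a minimal sketch, assuming the usual embed_query / embed_documents interface of a LangChain embeddings class:
from yandex_chain import YandexEmbeddings

embeddings = YandexEmbeddings(folder_id="...", api_key="...")

# One query string -> one embedding vector (a list of floats)
query_vector = embeddings.embed_query("How are you today?")

# A batch of documents -> a list of embedding vectors
doc_vectors = embeddings.embed_documents(["First document", "Second document"])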
Use ChatYandexGPT to execute a dialog with the model:
from yandex_chain import ChatYandexGPT
from langchain.schema import HumanMessage

gpt = ChatYandexGPT(...)
print(gpt(
    [
        # "Hi! Come up with 10 new words for a greeting."
        HumanMessage(content='Привет! Придумай 10 новых слов для приветствия.')
    ]))
Authentication
In order to use Yandex GPT, you need to provide one of the following authentication methods, which you can specify as parameters to the YandexLLM, ChatYandexGPT and YandexEmbeddings classes:
- A pair of folder_id and api_key
- A pair of folder_id and iam_token
- A path to a config.json file, which may in turn contain the parameters listed above in a convenient JSON format
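For example, any of the following ways of constructing YandexLLM should work (a minimal sketch; replace the placeholders with your own credentials):
from yandex_chain import YandexLLM

# Option 1: folder_id + api_key
llm = YandexLLM(folder_id="...", api_key="...")

# Option 2: folder_id + iam_token
llm = YandexLLM(folder_id="...", iam_token="...")

# Option 3: path to a config.json file containing the parameters above
llm = YandexLLM(config="config.json")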
Complete Example
An LLM and an embeddings model are a good combination for creating problem-oriented chatbots using Retrieval-Augmented Generation (RAG). Here is a short example of this approach, inspired by this LangChain tutorial.
To begin with, we have a set of documents docs (for simplicity, let's assume it is just a list of strings), which we store in a vector store. We can use YandexEmbeddings to compute embedding vectors:
from yandex_chain import YandexLLM, YandexEmbeddings
from langchain.vectorstores import FAISS
embeddings = YandexEmbeddings(config="config.json")
vectorstore = FAISS.from_texts(docs, embedding=embeddings)
retriever = vectorstore.as_retriever()
We can now retrieve a set of documents relevant to a query:
query = "Which library can be used to work with Yandex GPT?"
res = retriever.get_relevant_documents(query)
Now, to provide a full-text answer to the query, we can use an LLM. We will prompt the LLM with the retrieved documents as context together with the input query, and ask it to answer the question. This can be done using LangChain chains:
from langchain.prompts import ChatPromptTemplate
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnablePassthrough
template = """Answer the question based only on the following context:
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
model = YandexLLM(config="config.json")
chain = (
    {"context": retriever, "question": RunnablePassthrough()}
    | prompt
    | model
    | StrOutputParser()
)
This chain can now answer our questions:
chain.invoke(query)
Lite vs. Full Models
The YandexGPT model comes in several flavours: YandexGPT Lite (current and RC), YandexGPT Pro, and a Summarization model. By default, YandexGPT Lite is used. If you want to use a different model, please specify it in the constructor of the YandexLLM or ChatYandexGPT language model classes (see the sketch after the list below):
- Pro (based on Yandex GPT 3): model=YandexGPTModel.Pro
- Lite (based on Yandex GPT 2): model=YandexGPTModel.Lite
- Lite RC (based on Yandex GPT 3): model=YandexGPTModel.LiteRC
- Summarization (based on Yandex GPT 2): model=YandexGPTModel.Summarization
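For example, to use the Pro model, a minimal sketch (assuming YandexGPTModel is importable from yandex_chain, as the parameter values above suggest):
from yandex_chain import YandexLLM, YandexGPTModel

# Select the Pro flavour instead of the default Lite model
llm = YandexLLM(config="config.json", model=YandexGPTModel.Pro)
print(llm("How are you today?"))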
In previous versions, we used the use_lite flag to switch between the Lite and Pro models. This behaviour is still supported but deprecated, and it will be removed in the next version.
Testing
This repository contains some basic unit tests. To run them, you need to place a configuration file config.json with your credentials into the tests folder. Use config_sample.json as a reference. After that, please run the following at the repository root directory:
python -m unittest discover -s tests
Credits
- This library was originally developed by Dmitri Soshnikov.