LangChain integration for Nebius AI Studio


LangChain Nebius Integration

This package provides a LangChain integration for Nebius AI Studio, so its chat and embedding models can be used as standard LangChain components.

Installation

Install the package using pip:

pip install langchain-nebius
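The examples below pass `api_key` explicitly. As an assumption (this is the usual convention for LangChain partner packages), the key can likely also be supplied once via an environment variable instead of being passed to every constructor:

```shell
# Assumed environment variable name; set it once per shell session
export NEBIUS_API_KEY="your-api-key"
```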

Usage

Chat Models

from langchain_nebius import ChatNebius

chat = ChatNebius(api_key="your-api-key")
response = chat.invoke(
    [{"role": "user", "content": "What is 1 + 1?"}]
)
print(response.content)

Embeddings

from langchain_nebius import NebiusEmbeddings

embeddings = NebiusEmbeddings(api_key="your-api-key")
document_embeddings = embeddings.embed_documents(texts=["Hello, world!"])
query_embedding = embeddings.embed_query(text="Hello")
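Both methods return plain lists of floats, so the results can be compared directly. The vectors below are made-up toy values (real embeddings are much higher-dimensional); the helper itself is standard cosine similarity, not part of this package:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embed_query / embed_documents output
query_vec = [1.0, 0.0, 1.0]
doc_vecs = [[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]

scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
print(scores)  # the first document is an exact match (score 1.0)
```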

Retrievers

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever

# Create embeddings
embeddings = NebiusEmbeddings(api_key="your-api-key")

# Create documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    # Add more documents as needed
]

# Create retriever
retriever = NebiusRetriever(
    embeddings=embeddings,
    docs=docs,
    k=3  # Number of documents to return
)

# Retrieve relevant documents
query = "What is the capital of France?"
results = retriever.invoke(query)
for doc in results:
    print(doc.page_content)
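Conceptually, a similarity retriever embeds the query, scores it against each document embedding, and returns the `k` best matches. A simplified, self-contained sketch of that idea (not the actual NebiusRetriever implementation; the vectors are toy values):

```python
import math

def top_k(query_vec, doc_vecs, docs, k=3):
    """Return the k documents whose embeddings are most similar to the query."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))
    # Rank documents by cosine similarity to the query, highest first
    scored = sorted(zip(docs, doc_vecs), key=lambda p: cos(query_vec, p[1]), reverse=True)
    return [doc for doc, _ in scored[:k]]

docs = ["Paris is the capital of France", "Berlin is the capital of Germany"]
doc_vecs = [[0.9, 0.1], [0.1, 0.9]]  # toy embeddings
query_vec = [0.8, 0.2]               # toy embedding of the query

print(top_k(query_vec, doc_vecs, docs, k=1))  # ['Paris is the capital of France']
```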

Tools

The package provides tools that can be used with LangChain agents:

Using NebiusRetrievalTool (Class-based Tool)

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, NebiusRetrievalTool

# Prepare your documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]

# Create embeddings and retriever
embeddings = NebiusEmbeddings(api_key="your-api-key")
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Create the tool
tool = NebiusRetrievalTool(
    retriever=retriever,
    name="nebius_search",
    description="Search for information in the document collection"
)

# Use the tool
result = tool.invoke({"query": "What is the capital of France?", "k": 1})
print(result)

Using nebius_search (Decorator-based Tool)

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, nebius_search

# Prepare your documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]

# Create embeddings and retriever
embeddings = NebiusEmbeddings(api_key="your-api-key")
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Use the tool
result = nebius_search.invoke({
    "query": "What is the capital of France?",
    "retriever": retriever,
    "k": 1
})
print(result)

Building a RAG Application

from langchain_core.documents import Document
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_nebius import ChatNebius, NebiusEmbeddings, NebiusRetriever

# Create components
embeddings = NebiusEmbeddings(api_key="your-api-key")
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
]
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)
llm = ChatNebius(model="meta-llama/Llama-3.3-70B-Instruct-fast")

# Create prompt
prompt = ChatPromptTemplate.from_template("""
Answer the question based only on the following context:

Context:
{context}

Question: {question}
""")

# Format documents function
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Create RAG chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Run the chain
answer = rag_chain.invoke("What is the capital of France?")
print(answer)
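The dict at the start of the chain fans the same input string out to two branches: one retrieves and formats context, the other passes the question through unchanged. Stripped of the Runnable machinery, the data flow is roughly the following (plain functions standing in for the real retriever and LLM):

```python
def fake_retrieve(question: str) -> list[str]:
    # Stand-in for retriever.invoke(question)
    return ["Paris is the capital of France"]

def format_docs(docs: list[str]) -> str:
    return "\n\n".join(docs)

def run_chain(question: str) -> dict:
    # Equivalent of {"context": retriever | format_docs, "question": RunnablePassthrough()}
    prompt_inputs = {
        "context": format_docs(fake_retrieve(question)),
        "question": question,  # RunnablePassthrough leaves the input untouched
    }
    return prompt_inputs  # the real chain then feeds this into prompt | llm | parser

print(run_chain("What is the capital of France?"))
```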

Using Tools with an Agent

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, NebiusRetrievalTool

# Create documents and retriever
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]
embeddings = NebiusEmbeddings(api_key="your-api-key")
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Create the retrieval tool
retrieval_tool = NebiusRetrievalTool(
    retriever=retriever,
    name="document_search",
    description="Search for information in the document collection"
)

# Create an LLM (using OpenAI as an example)
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Create the system prompt
system_prompt = """You are an assistant that answers questions based on the available documents.
Use the document_search tool to find relevant information before answering."""

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("user", "{input}")
])

# Create the agent
tools = [retrieval_tool]
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the agent
response = agent_executor.invoke({"input": "What is the capital of France?"})
print(response["output"])

For more examples, see the examples directory.

Documentation

For more details, refer to the Nebius AI Studio API Documentation.

License

This project is licensed under the MIT License. See the LICENSE file for details.
