
LangChain Nebius Integration

This package provides a LangChain integration for Nebius AI Studio, enabling seamless use of its chat and embedding models within LangChain pipelines.

Installation

Install the package using pip:

pip install langchain-nebius
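Rather than passing `api_key` to every constructor, you can set the key once in the environment. `NEBIUS_API_KEY` is assumed here to be the variable the package reads when no `api_key` argument is given (the RAG example later in this README constructs `NebiusEmbeddings()` with no arguments):

```python
import os

# Set the API key once for the whole process. NEBIUS_API_KEY is assumed
# to be the environment variable the package reads when no api_key
# argument is passed.
os.environ["NEBIUS_API_KEY"] = "your-api-key"
```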

Usage

Chat Models

from langchain_nebius import ChatNebius

chat = ChatNebius(api_key="your-api-key")
response = chat.invoke(
    [{"role": "user", "content": "What is 1 + 1?"}]
)
print(response.content)

Embeddings

from langchain_nebius import NebiusEmbeddings

embeddings = NebiusEmbeddings(api_key="your-api-key")
document_embeddings = embeddings.embed_documents(texts=["Hello, world!"])
query_embedding = embeddings.embed_query(text="Hello")
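`embed_documents` returns one vector per input text and `embed_query` a single vector; relevance between a query and a document is typically scored with cosine similarity. A minimal stdlib sketch, using toy 3-dimensional vectors standing in for real (much higher-dimensional) embeddings:

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for embed_query / embed_documents output;
# real embeddings have many more dimensions.
query_vec = [0.1, 0.9, 0.2]
doc_vec = [0.2, 0.8, 0.1]
print(round(cosine_similarity(query_vec, doc_vec), 3))  # close to 1.0: very similar
```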

Retrievers

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever

# Create embeddings
embeddings = NebiusEmbeddings(api_key="your-api-key")

# Create documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    # Add more documents as needed
]

# Create retriever
retriever = NebiusRetriever(
    embeddings=embeddings,
    docs=docs,
    k=3  # Number of documents to return
)

# Retrieve relevant documents
query = "What is the capital of France?"
results = retriever.invoke(query)
for doc in results:
    print(doc.page_content)
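Under the hood, a similarity retriever scores every document against the query embedding and keeps the `k` highest-scoring ones. A toy sketch of that top-k step (the scores below are invented; `NebiusRetriever` derives real scores from the embeddings):

```python
# Invented relevance scores; NebiusRetriever computes real scores from
# the query and document embeddings.
scores = {
    "Paris is the capital of France": 0.91,
    "Berlin is the capital of Germany": 0.35,
    "Rome is the capital of Italy": 0.33,
}

k = 2
# Sort documents by score, descending, and keep the k best.
top_k = sorted(scores, key=scores.get, reverse=True)[:k]
print(top_k)
```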

Tools

The package provides tools that can be used with LangChain agents:

Using NebiusRetrievalTool (Class-based Tool)

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, NebiusRetrievalTool

# Prepare your documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]

# Create embeddings and retriever
embeddings = NebiusEmbeddings(api_key="your-api-key")
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Create the tool
tool = NebiusRetrievalTool(
    retriever=retriever,
    name="nebius_search",
    description="Search for information in the document collection"
)

# Use the tool
result = tool.invoke({"query": "What is the capital of France?", "k": 1})
print(result)

Using nebius_search (Decorator-based Tool)

from langchain_core.documents import Document
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, nebius_search

# Prepare your documents
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]

# Create embeddings and retriever
embeddings = NebiusEmbeddings(api_key="your-api-key")
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Use the tool
result = nebius_search.invoke({
    "query": "What is the capital of France?",
    "retriever": retriever,
    "k": 1
})
print(result)

Building a RAG Application

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_nebius import ChatNebius, NebiusEmbeddings, NebiusRetriever

# Create components
embeddings = NebiusEmbeddings()
retriever = NebiusRetriever(embeddings=embeddings, docs=documents)  # "documents" is your list of Document objects
llm = ChatNebius(model="meta-llama/Llama-3.3-70B-Instruct-fast")

# Create prompt
prompt = ChatPromptTemplate.from_template("""
Answer the question based only on the following context:

Context:
{context}

Question: {question}
""")

# Format documents function
def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

# Create RAG chain
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)

# Run the chain
answer = rag_chain.invoke("What is the capital of France?")
print(answer)
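The LCEL chain above runs retrieve → format → prompt → LLM → parse. The same data flow in plain Python with stand-in functions (no network calls; the real chain substitutes `NebiusRetriever` for `fake_retrieve` and `ChatNebius` for `fake_llm`):

```python
def fake_retrieve(question):
    # Stand-in for retriever.invoke(question)
    return ["Paris is the capital of France"]

def format_docs(docs):
    return "\n\n".join(docs)

def fake_llm(prompt):
    # Stand-in for llm.invoke(prompt); a real model answers from the
    # context embedded in the prompt.
    return "Paris"

def rag_answer(question):
    context = format_docs(fake_retrieve(question))   # retriever | format_docs
    prompt = (
        "Answer the question based only on the following context:\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )                                                # prompt template
    return fake_llm(prompt)                          # llm | StrOutputParser

print(rag_answer("What is the capital of France?"))
```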

Using Tools with an Agent

from langchain.agents import AgentExecutor, create_openai_functions_agent
from langchain_core.documents import Document
from langchain_core.prompts import ChatPromptTemplate
from langchain_nebius import NebiusEmbeddings, NebiusRetriever, NebiusRetrievalTool
from langchain_openai import ChatOpenAI

# Create documents and retriever
docs = [
    Document(page_content="Paris is the capital of France"),
    Document(page_content="Berlin is the capital of Germany"),
    Document(page_content="Rome is the capital of Italy"),
]
embeddings = NebiusEmbeddings()
retriever = NebiusRetriever(embeddings=embeddings, docs=docs)

# Create the retrieval tool
retrieval_tool = NebiusRetrievalTool(
    retriever=retriever,
    name="document_search",
    description="Search for information in the document collection"
)

# Create an LLM (using OpenAI as an example)
llm = ChatOpenAI(model="gpt-3.5-turbo")

# Create the system prompt
system_prompt = """You are an assistant that answers questions based on the available documents.
Use the document_search tool to find relevant information before answering."""

prompt = ChatPromptTemplate.from_messages([
    ("system", system_prompt),
    ("user", "{input}")
])

# Create the agent
tools = [retrieval_tool]
agent = create_openai_functions_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# Run the agent
response = agent_executor.invoke({"input": "What is the capital of France?"})
print(response["output"])
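Conceptually, one agent step works like this: the LLM emits a tool call, the executor runs the tool, and the observation is fed back for the final answer. A plain-Python sketch of that loop (every function here is a hypothetical placeholder, not a library API):

```python
# All functions below are hypothetical placeholders, not library APIs.
def fake_llm_decide(question):
    # The model chooses a tool and its arguments.
    return {"tool": "document_search", "args": {"query": question, "k": 1}}

def document_search(query, k):
    # Stand-in for NebiusRetrievalTool: return the k best-matching documents.
    return ["Paris is the capital of France"][:k]

def fake_llm_answer(question, observations):
    # The model writes the final answer from the tool output.
    return observations[0] + "."

def run_agent(question):
    call = fake_llm_decide(question)
    observations = document_search(**call["args"])
    return fake_llm_answer(question, observations)

print(run_agent("What is the capital of France?"))
```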

For more examples, see the examples directory.

Documentation

For more details, refer to the Nebius AI Studio API Documentation.

License

This project is licensed under the MIT License. See the LICENSE file for details.
