
An integration package connecting Sarvam AI and LangChain

Project description

langchain-sarvam

Integration package connecting Sarvam AI chat completions with LangChain.

Installation

With uv:

uv pip install langchain-sarvam

Setup

import os

# Read the Sarvam API key from the environment
sarvam_api_key = os.getenv("SARVAM_API_KEY")
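A quick guard can catch a missing key early, before any model call is made (a sketch; the warning text is illustrative):

```python
import os

# Read the key once at startup; this check is purely a convenience --
# the model call itself will fail later if the key is absent.
api_key = os.getenv("SARVAM_API_KEY")
if not api_key:
    print("Warning: SARVAM_API_KEY is not set; API calls will fail to authenticate.")
```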

Usage

Basic Usage

from langchain_sarvam import ChatSarvam

llm = ChatSarvam(model="sarvam-m", temperature=0.2, max_tokens=128)
resp = llm.invoke([("system", "You are helpful"), ("human", "Hello!")])
print(resp.content)

Language-Specific Usage

import os

from langchain_sarvam import ChatSarvam

llm = ChatSarvam(
    model="sarvam-m",
    temperature=0.7,
    sarvam_api_key=os.getenv("SARVAM_API_KEY")
)

response = llm.invoke([
    ("system", "talk in Hindi"),
    ("human", "what is color of sky?"),
])
print(response.content)  # Output: आसमान का रंग नीला होता है... ("The color of the sky is blue...")

Advanced Content Generation

from langchain_sarvam import ChatSarvam

llm = ChatSarvam(model="sarvam-m")

# Generate blog post outline
response = llm.invoke("Create an outline for a blog post on the topic: AI engineering.")
print(response.content)
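For reuse, a prompt like this can be built by a small helper that returns the (role, content) tuples `invoke` accepts; a minimal sketch (the helper name and prompt wording are illustrative):

```python
def outline_prompt(topic: str) -> list[tuple[str, str]]:
    # Build the (role, content) message list that ChatSarvam.invoke accepts
    return [
        ("system", "You are a technical writing assistant."),
        ("human", f"Create an outline for a blog post on the topic: {topic}"),
    ]

messages = outline_prompt("AI engineering")
# With a live model: response = llm.invoke(messages)
```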

Batch Processing

from langchain_sarvam import ChatSarvam
from langchain_core.messages import HumanMessage

chat = ChatSarvam(model="sarvam-m")

# Batch processing - use list of message lists
messages = [
    [HumanMessage(content="Tell me a joke")],
    [HumanMessage(content="What's the weather like?")]
]

responses = chat.batch(messages)
for response in responses:
    print(response.content)
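`batch` returns one response per input, aligned positionally with the inputs, so results can be zipped back to their prompts. A sketch, with a stand-in list replacing the live call:

```python
prompts = ["Tell me a joke", "What's the weather like?"]

# Live call: responses = chat.batch([[HumanMessage(content=p)] for p in prompts])
# Stand-in responses for illustration:
responses = [f"(reply to {p!r})" for p in prompts]

# batch preserves input order, so zip pairs each prompt with its reply
for prompt, reply in zip(prompts, responses):
    print(f"{prompt} -> {reply}")
```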

Using generate() Method

from langchain_sarvam import ChatSarvam
from langchain_core.messages import HumanMessage

chat = ChatSarvam(model="sarvam-m")

# generate() expects a list of message lists
inputs = [
    [HumanMessage(content="Tell me a joke with emojis only")],
    [HumanMessage(content="What's the weather like?")]
]

result = chat.generate(inputs)
for generation_list in result.generations:
    # generation_list is a list of ChatGeneration objects
    for generation in generation_list:
        print(generation.message.content)

Streaming

llm = ChatSarvam(model="sarvam-m", streaming=True)
for chunk in llm.stream("Tell me a joke"):
    print(chunk.content, end="")



Download files

Download the file for your platform.

Source Distribution

langchain_sarvam-0.1.1.tar.gz (85.6 kB)


Built Distribution


langchain_sarvam-0.1.1-py3-none-any.whl (7.6 kB)


File details

Details for the file langchain_sarvam-0.1.1.tar.gz.

File metadata

  • Download URL: langchain_sarvam-0.1.1.tar.gz
  • Size: 85.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.7

File hashes

Hashes for langchain_sarvam-0.1.1.tar.gz
Algorithm Hash digest
SHA256 a6075515bd50491aa847e984d6af4b2a49fe6a45426b0b72e626e230cdf1af17
MD5 0e7736ed9740f6cb7eff1e681974abbe
BLAKE2b-256 cced92ac1cfc30a3b52927b9010dbba8e87fe90b58dfc8d21c2283e56e382605


File details

Details for the file langchain_sarvam-0.1.1-py3-none-any.whl.

File metadata

File hashes

Hashes for langchain_sarvam-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 19e9853d7e81efb6b55624f315ebc384ebc10c0f2f6b1bfee4893d4361af6869
MD5 976b3ed2d42893b3dfa65ee3ed765885
BLAKE2b-256 c383679f3d183117090a81eee5080faf5e99022b6f972eb1c92afff5f747a392

