
langchain-gigachat

LangChain integration for GigaChat — a large language model.

This library is part of GigaChain and wraps the GigaChat Python SDK with LangChain-compatible interfaces.

Features

  • Chat completions — synchronous and asynchronous, with streaming
  • Embeddings — text vectorization via GigaChatEmbeddings
  • Tool calling — standard LangChain @tool with GigaChat metadata in extras
  • Structured output — Pydantic models and JSON mode
  • Reasoning models — reasoning_effort for thinking models
  • Attachments — images, audio, and documents via the Files API
  • File operations — upload, list, retrieve, and delete files
  • Configurable retry — exponential backoff via the underlying SDK
  • Environment-based configuration — all parameters configurable via GIGACHAT_ env vars
  • Fully typed — Pydantic V2 models with py.typed marker

Installation

pip install -U langchain-gigachat

Requirements: Python 3.10+

Note: In production, keep TLS verification enabled (default). See Authentication for certificate setup.

Authentication

Set environment variables and let the SDK pick them up:

export GIGACHAT_CREDENTIALS="your-authorization-key"
export GIGACHAT_SCOPE="GIGACHAT_API_PERS"  # GIGACHAT_API_B2B or GIGACHAT_API_CORP for enterprise

After this, GigaChat() works without any arguments in code.

If your environment requires a specific TLS certificate:

export GIGACHAT_CA_BUNDLE_FILE="/path/to/certs.pem"

Warning: Disabling TLS verification (verify_ssl_certs=False) is for local development only and is not recommended for production.

For detailed instructions on obtaining credentials and certificates, see the GigaChat SDK and API docs.

Usage Examples

The examples below assume authentication is configured via environment variables. See Authentication.

Chat

from langchain_gigachat import GigaChat

llm = GigaChat()  # credentials picked up from GIGACHAT_ env vars; or pass credentials="your-authorization-key"

msg = llm.invoke("Hello, GigaChat!")
print(msg.content)

Streaming

Receive tokens as they are generated:

from langchain_gigachat import GigaChat

llm = GigaChat()

for chunk in llm.stream("Write a short poem about programming"):
    print(chunk.content, end="", flush=True)
print()

Note: Wrapper-side local stop handling was removed in 0.5.x. The public methods still accept stop for LangChain signature compatibility, but langchain-gigachat no longer applies stop-sequence truncation itself. See MIGRATION.md before carrying stop=... call sites forward.

Async

Use async/await for non-blocking operations:

import asyncio

from langchain_gigachat import GigaChat


async def main():
    llm = GigaChat()
    msg = await llm.ainvoke("Explain quantum computing in simple terms.")
    print(msg.content)


asyncio.run(main())

Embeddings

Generate vector representations of text:

from langchain_gigachat import GigaChatEmbeddings

emb = GigaChatEmbeddings(model="Embeddings")

vector = emb.embed_query("Привет!")
print(len(vector))

Reasoning Models

Use reasoning_effort with reasoning-capable models:

from langchain_gigachat import GigaChat

llm = GigaChat(model="GigaChat-2-Reasoning", reasoning_effort="high")

msg = llm.invoke("How many r's are in the word 'strawberry'?")
print(msg.content)
print(msg.additional_kwargs.get("reasoning_content"))  # model's chain-of-thought

Note: reasoning_content is also available during streaming — each AIMessageChunk carries it in additional_kwargs.

Tool Calling

Use the standard LangChain @tool decorator. Pass GigaChat-specific metadata via extras:

from langchain_gigachat import GigaChat
from langchain_core.tools import tool


@tool(
    extras={
        "few_shot_examples": [{"request": "weather in Tokyo", "params": {"city": "Tokyo"}}]
    }
)
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"{city}: sunny, 22C"


llm = GigaChat()
llm_with_tools = llm.bind_tools([get_weather], tool_choice="auto")

msg = llm_with_tools.invoke("What's the weather in Tokyo?")
print(msg.tool_calls)

Note: tool_choice="any" is not supported by the GigaChat API. Use "auto", "none", or a specific tool name. If upstream code passes "any", set allow_any_tool_choice_fallback=True to silently convert it to "auto".

Note: The GigaChat API does not support parallel tool calls in a single assistant message. If an AIMessage contains more than one tool_calls entry, a ValueError is raised.

Legacy bind_functions()

For legacy LangChain function-calling flows, bind_functions() is still available:

from langchain_gigachat import GigaChat


def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"{city}: sunny, 22C"


llm = GigaChat()
llm_with_functions = llm.bind_functions(
    [get_weather],
    function_call="auto",
)

Use bind_tools() for new code. bind_functions() is kept as a compatibility layer over the provider's function_call transport and supports None, "auto", "none", or a specific function name.

Internally, the provider transport is still function-oriented. That is why ToolMessage results are serialized back as provider function messages when continuing a conversation.

Structured Output

Extract typed data from model responses:

from pydantic import BaseModel, Field

from langchain_gigachat import GigaChat


class Answer(BaseModel):
    """An answer with confidence score."""

    text: str = Field(description="Final answer")
    confidence: float = Field(ge=0, le=1, description="Confidence 0..1")


llm = GigaChat()
chain = llm.with_structured_output(Answer)

parsed = chain.invoke("What is the capital of France? Rate your confidence.")
print(parsed)

JSON mode is also available: llm.with_structured_output(Answer, method="json_mode").

Attachments

Upload a file via the Files API, then reference it in content_blocks:

from langchain_core.messages import HumanMessage

from langchain_gigachat import GigaChat

llm = GigaChat()

with open("image.png", "rb") as f:
    uploaded = llm.upload_file(("image.png", f.read()))

msg = HumanMessage(
    content_blocks=[
        {"type": "text", "text": "Describe the image."},
        {"type": "image", "file_id": uploaded.id_},
    ]
)

reply = llm.invoke([msg])
print(reply.content)

Note: Supported content_blocks types: image, audio, file. The pattern is identical for each — only the type field differs.

Note: Base64 data URLs in image_url / audio_url / document_url blocks can be auto-uploaded with auto_upload_attachments=True, but prefer explicit upload_file() in production.
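
A base64 data URL of the kind that auto_upload_attachments consumes can be built with the standard library. A minimal sketch — the data-URL layout is the conventional RFC 2397 form, and the nested block shape shown is an assumption based on the common LangChain image_url convention, not something this package defines:

```python
import base64

# Raw bytes to attach; in practice these would come from reading a real file.
image_bytes = b"\x89PNG\r\n\x1a\n"  # PNG magic bytes as a stand-in

# RFC 2397 data URL: "data:<mime>;base64,<payload>"
payload = base64.b64encode(image_bytes).decode("ascii")
data_url = f"data:image/png;base64,{payload}"

# Placed in an image_url block (shape follows the common LangChain convention):
block = {"type": "image_url", "image_url": {"url": data_url}}
print(data_url[:30])
```

With auto_upload_attachments=True the wrapper would upload such payloads for you; explicit upload_file() keeps the file id under your control.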

File Operations

Manage files via the Files API:

from langchain_gigachat import GigaChat

llm = GigaChat()

# Upload
with open("document.pdf", "rb") as f:
    uploaded = llm.upload_file(("document.pdf", f.read()))
print(f"Uploaded: {uploaded.id_}")

# List
files = llm.list_files()
for f in files.data:
    print(f"{f.id_}: {f.filename}")

# Delete
llm.delete_file(uploaded.id_)

get_file() returns file metadata, while get_file_content() downloads file content:

metadata = llm.get_file(uploaded.id_)          # UploadedFile
content = llm.get_file_content(uploaded.id_)   # Image with base64 payload
print(metadata.filename)
print(content.content[:20])

All file methods have async variants (aget_file, aget_file_content, alist_files, adelete_file, etc.).

Configuration

All parameters can be passed to GigaChat(...) / GigaChatEmbeddings(...) directly or via environment variables with the GIGACHAT_ prefix.

Constructor Parameters

Most commonly used parameters (all are optional):

  • model (str, default None) — Model name (e.g. "GigaChat-2-Max", "GigaChat-2-Pro")
  • temperature (float, default None) — Sampling temperature
  • max_tokens (int, default None) — Maximum number of tokens to generate
  • top_p (float, default None) — Nucleus sampling threshold (0.0–1.0)
  • repetition_penalty (float, default None) — Penalty applied to repeated tokens
  • reasoning_effort (str, default None) — Reasoning effort for reasoning models
  • credentials (str, default None) — OAuth authorization key
  • access_token (str, default None) — Pre-obtained JWT token (bypasses OAuth)
  • scope (str, default None) — API scope (GIGACHAT_API_PERS / _B2B / _CORP)
  • base_url (str, default None) — Custom API endpoint
  • verify_ssl_certs (bool, default None) — TLS certificate verification
  • ca_bundle_file (str, default None) — Path to CA certificate bundle
  • timeout (float, default None) — Request timeout in seconds
  • max_retries (int, default None) — Retry attempts for transient errors (SDK default: 0)
  • retry_backoff_factor (float, default None) — Exponential backoff multiplier (SDK default: 0.5)
  • profanity_check (bool, default None) — Enable profanity filtering
  • streaming (bool, default False) — Stream results by default
  • auto_upload_attachments (bool, default False) — Auto-upload base64 content from image_url / audio_url / document_url blocks
  • allow_any_tool_choice_fallback (bool, default False) — Silently convert tool_choice="any" to "auto"

For the full list of parameters (auth, SSL/mTLS, retry, flags, etc.), see the GigaChat SDK README — the LangChain wrapper accepts the same constructor arguments.

Environment Variables

All parameters can be configured via environment variables with the GIGACHAT_ prefix (e.g. GIGACHAT_CREDENTIALS, GIGACHAT_MODEL, GIGACHAT_BASE_URL). See the GigaChat SDK README for the full list.
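
For example, a deployment can pin the model and endpoint entirely through the environment. A sketch, assuming the prefix-plus-parameter naming described above (the endpoint URL is a placeholder, not the real API address):

```shell
# Credentials and scope (required for OAuth-based auth)
export GIGACHAT_CREDENTIALS="your-authorization-key"
export GIGACHAT_SCOPE="GIGACHAT_API_PERS"

# Optional overrides picked up by GigaChat() with no constructor arguments
export GIGACHAT_MODEL="GigaChat-2-Max"
export GIGACHAT_BASE_URL="https://your-gateway.example.com/api/v1"
```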

Note: Retries are handled by the underlying gigachat SDK. Don't combine them with LangChain .with_retry() — the attempts multiply:

llm = GigaChat(max_retries=3, retry_backoff_factor=0.5)  # delays: 0.5s, 1s, 2s
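
The delays in the comment above follow the standard exponential-backoff formula, delay = retry_backoff_factor * 2**attempt. A quick sketch of the nominal schedule (assuming the SDK uses this common scheme; any jitter or cap is SDK-internal):

```python
def backoff_delays(max_retries: int, backoff_factor: float) -> list[float]:
    """Nominal wait before each retry attempt: backoff_factor * 2**attempt."""
    return [backoff_factor * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays(3, 0.5))  # [0.5, 1.0, 2.0]
```

Stacking .with_retry() on top multiplies these attempts, which is why the note above recommends picking one layer.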

Error Handling

SDK exceptions propagate unchanged through the LangChain wrapper (aligned with the langchain-openai approach):

from gigachat.exceptions import AuthenticationError, RateLimitError, GigaChatException
from langchain_gigachat import GigaChat

llm = GigaChat()

try:
    llm.invoke("Hello!")
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except GigaChatException as e:
    print(f"GigaChat error: {e}")

For the full exception hierarchy and HTTP status code mapping, see the GigaChat SDK — Error Handling.

Tracing Metadata

When the provider returns tracing headers, the wrapper preserves them in both non-streaming and streaming flows:

  • AIMessage.id / AIMessageChunk.id carries x-request-id
  • non-streaming responses keep full headers in ChatResult.llm_output["x_headers"]
  • streaming responses expose full headers on the first chunk via generation_info["x_headers"]

This makes it possible to correlate LangChain runs with provider-side logs or support requests without parsing SDK responses directly.
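
As an illustration of that correlation step, reading the fields reduces to plain dict access. The dict below is illustrative — x-request-id is the only header name documented above; any others depend on what the provider actually returns:

```python
# Illustrative llm_output of a non-streaming ChatResult, shaped as described above.
llm_output = {"x_headers": {"x-request-id": "req-12345"}}

# Attach the provider request id to application logs or a support ticket.
request_id = llm_output.get("x_headers", {}).get("x-request-id")
print(f"provider request id: {request_id}")
```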

Related Projects

  • GigaChain — a set of solutions for developing LLM applications and multi-agent systems, with support for LangChain, LangGraph, LangChain4j, GigaChat and other LLMs
  • GigaChat Python SDK — the underlying Python SDK that powers this integration
  • GigaChat API docs

Contributing

See CONTRIBUTING.md. Development happens under libs/gigachat:

uv sync
make lint_package
make test

License

This project is licensed under the MIT License.

Copyright © 2026 GigaChain
