langchain-gigachat
LangChain integration for GigaChat — a large language model.
This library is part of GigaChain and wraps the GigaChat Python SDK with LangChain-compatible interfaces.
Table of Contents
- Features
- Installation
- Authentication
- Usage Examples
- Tool Calling
- Structured Output
- Attachments
- Configuration
- Error Handling
- Related Projects
- Contributing
- License
Features
- Chat completions — synchronous and asynchronous, with streaming
- Embeddings — text vectorization via `GigaChatEmbeddings`
- Tool calling — standard LangChain `@tool` with GigaChat metadata in `extras`
- Structured output — Pydantic models and JSON mode
- Reasoning models — `reasoning_effort` for thinking models
- Attachments — images, audio, and documents via the Files API
- File operations — upload, list, retrieve, and delete files
- Configurable retry — exponential backoff via the underlying SDK
- Environment-based configuration — all parameters configurable via `GIGACHAT_` env vars
- Fully typed — Pydantic V2 models with `py.typed` marker
Installation
```shell
pip install -U langchain-gigachat
```
Requirements: Python 3.10+
Note: In production, keep TLS verification enabled (default). See Authentication for certificate setup.
Authentication
Set environment variables and let the SDK pick them up:

```shell
export GIGACHAT_CREDENTIALS="your-authorization-key"
export GIGACHAT_SCOPE="GIGACHAT_API_PERS"  # GIGACHAT_API_B2B or GIGACHAT_API_CORP for enterprise
```

After this, `GigaChat()` works without any arguments in code.
If your environment requires a specific TLS certificate:

```shell
export GIGACHAT_CA_BUNDLE_FILE="/path/to/certs.pem"
```
Warning: Disabling TLS verification (`verify_ssl_certs=False`) is for local development only and is not recommended for production.
For detailed instructions on obtaining credentials and certificates, see the GigaChat SDK and API docs.
Usage Examples
The examples below assume authentication is configured via environment variables. See Authentication.
Chat
```python
from langchain_gigachat import GigaChat

llm = GigaChat(credentials="your-authorization-key")  # or GigaChat() with env vars set
msg = llm.invoke("Hello, GigaChat!")
print(msg.content)
```
Streaming
Receive tokens as they are generated:
```python
from langchain_gigachat import GigaChat

llm = GigaChat()
for chunk in llm.stream("Write a short poem about programming"):
    print(chunk.content, end="", flush=True)
print()
```
Note: Wrapper-side local `stop` handling was removed in 0.5.x. The public methods still accept `stop` for LangChain signature compatibility, but `langchain-gigachat` no longer applies stop-sequence truncation itself. See MIGRATION.md before carrying `stop=...` call sites forward.
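If you still need stop-sequence behavior, you can truncate on the client side. A minimal sketch of such a helper (this function is not part of the library):

```python
def truncate_at_stop(text: str, stop: list[str]) -> str:
    """Cut text at the earliest occurrence of any stop sequence.

    Client-side stand-in for the stop handling the wrapper used to apply.
    """
    cut = len(text)
    for seq in stop:
        idx = text.find(seq)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

print(truncate_at_stop("Hello\nWorld", ["\n"]))  # -> Hello
```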
Async
Use async/await for non-blocking operations:
```python
import asyncio

from langchain_gigachat import GigaChat

async def main():
    llm = GigaChat()
    msg = await llm.ainvoke("Explain quantum computing in simple terms.")
    print(msg.content)

asyncio.run(main())
```
Embeddings
Generate vector representations of text:
```python
from langchain_gigachat import GigaChatEmbeddings

emb = GigaChatEmbeddings(model="Embeddings")
vector = emb.embed_query("Привет!")
print(len(vector))
```
Reasoning Models
Use `reasoning_effort` with reasoning-capable models:
```python
from langchain_gigachat import GigaChat

llm = GigaChat(model="GigaChat-2-Reasoning", reasoning_effort="high")
msg = llm.invoke("How many r's are in the word 'strawberry'?")
print(msg.content)
print(msg.additional_kwargs.get("reasoning_content"))  # model's chain-of-thought
```
Note: `reasoning_content` is also available during streaming — each `AIMessageChunk` carries it in `additional_kwargs`.
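During streaming you would collect the answer and the reasoning separately. A rough sketch of that accumulation, using plain dicts as stand-ins for `AIMessageChunk` objects (the chunk payloads here are illustrative, not real model output):

```python
# Stand-in chunks mimicking AIMessageChunk.content / .additional_kwargs.
chunks = [
    {"content": "There", "additional_kwargs": {"reasoning_content": "Count the r's in 'strawberry'... "}},
    {"content": " are 3.", "additional_kwargs": {"reasoning_content": "st-r-awbe-r-r-y -> 3."}},
]

# Accumulate the visible answer and the chain-of-thought in parallel.
answer = "".join(c["content"] for c in chunks)
reasoning = "".join(c["additional_kwargs"].get("reasoning_content", "") for c in chunks)
print(answer)     # There are 3.
```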
Tool Calling
Use the standard LangChain `@tool` decorator. Pass GigaChat-specific metadata via `extras`:
```python
from langchain_core.tools import tool

from langchain_gigachat import GigaChat

@tool(
    extras={
        "few_shot_examples": [{"request": "weather in Tokyo", "params": {"city": "Tokyo"}}]
    }
)
def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"{city}: sunny, 22C"

llm = GigaChat()
llm_with_tools = llm.bind_tools([get_weather], tool_choice="auto")
msg = llm_with_tools.invoke("What's the weather in Tokyo?")
print(msg.tool_calls)
```
Note: `tool_choice="any"` is not supported by the GigaChat API. Use `"auto"`, `"none"`, or a specific tool name. If upstream code passes `"any"`, set `allow_any_tool_choice_fallback=True` to silently convert it to `"auto"`.
Note: The GigaChat API does not support parallel tool calls in a single assistant message. If an `AIMessage` contains more than one `tool_calls` entry, a `ValueError` is raised.
Legacy `bind_functions()`
For legacy LangChain function-calling flows, `bind_functions()` is still available:
```python
from langchain_gigachat import GigaChat

def get_weather(city: str) -> str:
    """Get current weather for a city."""
    return f"{city}: sunny, 22C"

llm = GigaChat()
llm_with_functions = llm.bind_functions(
    [get_weather],
    function_call="auto",
)
```
Use `bind_tools()` for new code. `bind_functions()` is kept as a compatibility layer over the provider's `function_call` transport and supports `None`, `"auto"`, `"none"`, or a specific function name.
Internally, the provider transport is still function-oriented, which is why `ToolMessage` results are serialized back as provider function messages when continuing a conversation.
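Conceptually, that serialization direction looks like the sketch below. The message shape is a simplified illustration of a function-role provider message, not the wrapper's actual internal representation:

```python
def tool_result_to_function_message(name: str, result: str) -> dict:
    """Illustrative sketch: a tool's result travels back to the provider
    as a function-role message, since the transport is function-oriented."""
    return {"role": "function", "name": name, "content": result}

msg = tool_result_to_function_message("get_weather", "Tokyo: sunny, 22C")
print(msg)  # {'role': 'function', 'name': 'get_weather', 'content': 'Tokyo: sunny, 22C'}
```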
Structured Output
Extract typed data from model responses:
```python
from pydantic import BaseModel, Field

from langchain_gigachat import GigaChat

class Answer(BaseModel):
    text: str = Field(description="Final answer")
    confidence: float = Field(ge=0, le=1, description="Confidence 0..1")

llm = GigaChat()
chain = llm.with_structured_output(Answer)
parsed = chain.invoke("What is the capital of France? Rate your confidence.")
print(parsed)
```
JSON mode is also available: `llm.with_structured_output(Answer, method="json_mode")`.
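In JSON mode, the model is steered to emit JSON text that you (or the output parser) decode into the target shape. A stdlib-only sketch of that decoding step; the `raw` string is a made-up example of what such output might look like, not captured model output:

```python
import json

# Hypothetical json-mode output for the Answer schema above.
raw = '{"text": "Paris", "confidence": 0.98}'

data = json.loads(raw)
# Manual validation of the constraints the Pydantic model would enforce.
assert isinstance(data["text"], str)
assert 0 <= data["confidence"] <= 1
print(data["text"])  # Paris
```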
Attachments
Upload a file via the Files API, then reference it in `content_blocks`:
```python
from langchain_core.messages import HumanMessage

from langchain_gigachat import GigaChat

llm = GigaChat()

with open("image.png", "rb") as f:
    uploaded = llm.upload_file(("image.png", f.read()))

msg = HumanMessage(
    content_blocks=[
        {"type": "text", "text": "Describe the image."},
        {"type": "image", "file_id": uploaded.id_},
    ]
)
reply = llm.invoke([msg])
print(reply.content)
```
Note: Supported `content_blocks` types: `image`, `audio`, `file`. The pattern is identical for each — only the `type` field differs.
Note: Base64 data URLs in `image_url` / `audio_url` / `document_url` blocks can be auto-uploaded with `auto_upload_attachments=True`, but prefer explicit `upload_file()` in production.
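If you want the explicit path, decoding a data URL into the `(filename, bytes)` pair that `upload_file()` accepts takes only the stdlib. A sketch of such a helper (the function itself is not part of the library):

```python
import base64

def data_url_to_upload(data_url: str, filename: str) -> tuple[str, bytes]:
    """Decode a base64 data URL into the (filename, bytes) pair
    that upload_file() accepts. Illustrative helper only."""
    header, payload = data_url.split(",", 1)
    if not (header.startswith("data:") and header.endswith(";base64")):
        raise ValueError("not a base64 data URL")
    return filename, base64.b64decode(payload)

name, blob = data_url_to_upload("data:image/png;base64,aGVsbG8=", "image.png")
print(name, blob)  # image.png b'hello'
```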
File Operations
Manage files via the Files API:
```python
from langchain_gigachat import GigaChat

llm = GigaChat()

# Upload
with open("document.pdf", "rb") as f:
    uploaded = llm.upload_file(("document.pdf", f.read()))
print(f"Uploaded: {uploaded.id_}")

# List
files = llm.list_files()
for f in files.data:
    print(f"{f.id_}: {f.filename}")

# Delete
llm.delete_file(uploaded.id_)
```
`get_file()` returns file metadata, while `get_file_content()` downloads file content:

```python
metadata = llm.get_file(uploaded.id_)         # UploadedFile
content = llm.get_file_content(uploaded.id_)  # Image with base64 payload
print(metadata.filename)
print(content.content[:20])
```
All file methods have async variants (`aget_file`, `aget_file_content`, `alist_files`, `adelete_file`, etc.).
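The async variants compose naturally with `asyncio.gather` for concurrent file operations. A self-contained sketch using a stub coroutine in place of `llm.aget_file()` (the stub and its return shape are illustrative):

```python
import asyncio

async def aget_file_stub(file_id: str) -> dict:
    """Stand-in for llm.aget_file(); returns fake metadata."""
    await asyncio.sleep(0)  # simulate an awaited network call
    return {"id_": file_id, "filename": f"{file_id}.pdf"}

async def main() -> list[dict]:
    # Fetch metadata for several files concurrently.
    return await asyncio.gather(*(aget_file_stub(fid) for fid in ["a", "b", "c"]))

results = asyncio.run(main())
print([r["filename"] for r in results])  # ['a.pdf', 'b.pdf', 'c.pdf']
```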
Configuration
All parameters can be passed to `GigaChat(...)` / `GigaChatEmbeddings(...)` directly or via environment variables with the `GIGACHAT_` prefix.
Constructor Parameters
Most commonly used parameters (all are optional):
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | `None` | Model name (e.g. `"GigaChat-2-Max"`, `"GigaChat-2-Pro"`) |
| `temperature` | `float` | `None` | Sampling temperature |
| `max_tokens` | `int` | `None` | Maximum number of tokens to generate |
| `top_p` | `float` | `None` | Nucleus sampling threshold (0.0–1.0) |
| `repetition_penalty` | `float` | `None` | Penalty applied to repeated tokens |
| `reasoning_effort` | `str` | `None` | Reasoning effort for reasoning models |
| `credentials` | `str` | `None` | OAuth authorization key |
| `access_token` | `str` | `None` | Pre-obtained JWT token (bypasses OAuth) |
| `scope` | `str` | `None` | API scope (`GIGACHAT_API_PERS` / `_B2B` / `_CORP`) |
| `base_url` | `str` | `None` | Custom API endpoint |
| `verify_ssl_certs` | `bool` | `None` | TLS certificate verification |
| `ca_bundle_file` | `str` | `None` | Path to CA certificate bundle |
| `timeout` | `float` | `None` | Request timeout in seconds |
| `max_retries` | `int` | `None` | Retry attempts for transient errors (SDK default: 0) |
| `retry_backoff_factor` | `float` | `None` | Exponential backoff multiplier (SDK default: 0.5) |
| `profanity_check` | `bool` | `None` | Enable profanity filtering |
| `streaming` | `bool` | `False` | Stream results by default |
| `auto_upload_attachments` | `bool` | `False` | Auto-upload base64 content from `image_url` / `audio_url` / `document_url` blocks |
| `allow_any_tool_choice_fallback` | `bool` | `False` | Silently convert `tool_choice="any"` to `"auto"` |
For the full list of parameters (auth, SSL/mTLS, retry, flags, etc.), see the GigaChat SDK README — the LangChain wrapper accepts the same constructor arguments.
Environment Variables
All parameters can be configured via environment variables with the `GIGACHAT_` prefix (e.g. `GIGACHAT_CREDENTIALS`, `GIGACHAT_MODEL`, `GIGACHAT_BASE_URL`). See the GigaChat SDK README for the full list.
Note: Retries are handled by the underlying `gigachat` SDK. Don't combine them with LangChain's `.with_retry()` — the attempts multiply.

```python
llm = GigaChat(max_retries=3, retry_backoff_factor=0.5)  # delays: 0.5s, 1s, 2s
```
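The delays in the comment above follow a standard exponential schedule. A sketch that reproduces them; the exact formula used inside the SDK is an assumption, chosen to match the documented 0.5s/1s/2s sequence:

```python
def backoff_delays(max_retries: int, factor: float) -> list[float]:
    """Exponential backoff schedule: factor * 2**attempt per retry.
    With max_retries=3 and factor=0.5 this yields 0.5s, 1s, 2s."""
    return [factor * (2 ** attempt) for attempt in range(max_retries)]

print(backoff_delays(3, 0.5))  # [0.5, 1.0, 2.0]
```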
Error Handling
SDK exceptions propagate unchanged through the LangChain wrapper (aligned with the `langchain-openai` approach):
```python
from gigachat.exceptions import AuthenticationError, GigaChatException, RateLimitError

from langchain_gigachat import GigaChat

llm = GigaChat()
try:
    llm.invoke("Hello!")
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except RateLimitError as e:
    print(f"Rate limited. Retry after {e.retry_after}s")
except GigaChatException as e:
    print(f"GigaChat error: {e}")
```
For the full exception hierarchy and HTTP status code mapping, see the GigaChat SDK — Error Handling.
Tracing Metadata
When the provider returns tracing headers, the wrapper preserves them in both non-streaming and streaming flows:
- `AIMessage.id` / `AIMessageChunk.id` carries `x-request-id`
- non-streaming responses keep full headers in `ChatResult.llm_output["x_headers"]`
- streaming responses expose full headers on the first chunk via `generation_info["x_headers"]`
This makes it possible to correlate LangChain runs with provider-side logs or support requests without parsing SDK responses directly.
Related Projects
- GigaChain — a set of solutions for developing LLM applications and multi-agent systems, with support for LangChain, LangGraph, LangChain4j, GigaChat and other LLMs
- GigaChat Python SDK — the underlying Python SDK that powers this integration
- GigaChat API docs
Contributing
See CONTRIBUTING.md. Development happens under `libs/gigachat`:

```shell
uv sync
make lint_package
make test
```
License
This project is licensed under the MIT License.
Copyright © 2026 GigaChain