langgraph-genai-bridge

Native Google GenAI SDK integration for LangGraph with context caching, tool auto-conversion, and bidirectional message translation.

By Pierre Samson (@darw007d) and Claude Opus (Anthropic)

Why?

LangGraph is the best orchestration framework for AI agents. Google's native GenAI SDK has features (context caching, native structured output) that LangChain's wrapper doesn't expose. This bridge gives you both.

Feature              | LangChain Wrapper          | This Bridge
Context Caching      | Not supported              | Built-in (5x cost reduction)
Structured Output    | Via wrapper (buggy)        | Native response_schema
Tool Calling         | Wrapped                    | Native FunctionDeclaration
Latency              | Higher (abstraction layer) | Lower (direct SDK)
LangGraph Compatible | Yes                        | Yes

Install

pip install langgraph-genai-bridge

Quick Start

from langgraph_genai_bridge import GenAIBridge

# Initialize
bridge = GenAIBridge(api_key="your-google-api-key", model="gemini-2.5-flash")

# Register your LangChain tools
bridge.set_tools(my_langchain_tools)

# Enable context caching (saves ~80% on input tokens)
bridge.enable_caching(ttl_seconds=3600)

# Use inside a LangGraph node — returns LangChain AIMessage
def orchestrator_node(state):
    response = bridge.invoke(
        state["messages"],
        system_prompt="You are a helpful trading agent."
    )
    return {"messages": [response]}

Features

Context Caching

Google's context caching lets you pay for your system prompt once per cache TTL instead of on every API call. For an agent running 12 cycles/hour with a 2000-token system prompt, that's roughly 24,000 input tokens per hour that no longer have to be re-sent at full price.

bridge.enable_caching(ttl_seconds=3600)  # Cache for 1 hour

# First call: creates cache (normal cost)
# Subsequent calls: reuse the cache (cached input tokens are heavily discounted)
response = bridge.invoke(messages, system_prompt=my_long_prompt)
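
For clarity, here is the arithmetic behind that figure, using the example numbers above:

system_prompt_tokens = 2000   # size of the system prompt in the example
cycles_per_hour = 12          # agent invocations per hour

# Without caching, the system prompt is re-sent as input tokens on every call
print(system_prompt_tokens * cycles_per_hour)  # 24000 tokens/hour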

Tool Auto-Conversion

Automatically converts LangChain @tool-decorated functions to Google GenAI's FunctionDeclaration format. No manual schema writing needed.

from langchain_core.tools import tool

@tool
def get_stock_price(ticker: str) -> str:
    """Get the current price for a stock ticker."""
    return f"{ticker}: $150.00"

bridge.set_tools([get_stock_price])  # Auto-converts
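
For reference, the target of that conversion is Google GenAI's FunctionDeclaration. A hand-written declaration for get_stock_price might look roughly like this (illustrative only; the bridge generates the equivalent for you):

from google.genai import types

# Roughly what get_stock_price becomes after auto-conversion (written by hand here)
get_stock_price_decl = types.FunctionDeclaration(
    name="get_stock_price",
    description="Get the current price for a stock ticker.",
    parameters=types.Schema(
        type=types.Type.OBJECT,
        properties={"ticker": types.Schema(type=types.Type.STRING)},
        required=["ticker"],
    ),
)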

Bidirectional Message Translation

Seamlessly converts between LangChain message types and Google GenAI Content objects:

LangChain                                | Direction | Google GenAI
SystemMessage                            | ->        | Context cache / system_instruction
HumanMessage                             | ->        | Content(role="user")
AIMessage (with tool_calls)              | ->        | Content(role="model") with FunctionCall
ToolMessage                              | ->        | Content with FunctionResponse
AIMessage(content=..., tool_calls=[...]) | <-        | Model response
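
As a minimal sketch of the user/model half of that mapping (the helper below is hypothetical; the bridge performs this translation internally):

from langchain_core.messages import AIMessage, HumanMessage
from google.genai import types

def to_genai_content(msg):
    # Translate plain-text LangChain messages into GenAI Content objects
    if isinstance(msg, HumanMessage):
        return types.Content(role="user", parts=[types.Part(text=msg.content)])
    if isinstance(msg, AIMessage):
        return types.Content(role="model", parts=[types.Part(text=msg.content)])
    raise NotImplementedError(f"Unhandled message type: {type(msg)}")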

Graceful Fallback

If the native SDK call fails, the bridge automatically falls back to your LangChain wrapper:

from langchain_google_genai import ChatGoogleGenerativeAI

# Set up fallback
langchain_llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
langchain_with_tools = langchain_llm.bind_tools(my_tools)
bridge.set_langchain_fallback(langchain_with_tools)

# If native SDK fails -> seamlessly falls back to LangChain
response = bridge.invoke(messages)
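
Conceptually, the behavior is equivalent to wrapping the native call in a try/except. A sketch of the idea (not the bridge's actual internals; the function below is hypothetical):

def invoke_with_fallback(native_call, fallback_llm, messages):
    # Try the direct google-genai path first; on any error, use the LangChain ChatModel
    try:
        return native_call(messages)
    except Exception:
        return fallback_llm.invoke(messages)  # a LangChain AIMessage either way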

Full LangGraph Example

from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import tools_condition
from langchain_core.tools import tool
from langgraph_genai_bridge import GenAIBridge

# Define tools
@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# Initialize bridge
bridge = GenAIBridge(api_key="...", model="gemini-2.5-flash")
bridge.set_tools([search_web])
bridge.enable_caching(ttl_seconds=3600)

# LangGraph nodes
def agent(state):
    return {"messages": [bridge.invoke(state["messages"], system_prompt="You are helpful.")]}

def tool_node(state):
    # Your existing tool execution logic
    ...

# Build graph (standard LangGraph pattern)
workflow = StateGraph(...)
workflow.add_node("agent", agent)
workflow.add_node("tools", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", tools_condition)
workflow.add_edge("tools", "agent")
app = workflow.compile()
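
One way to make the sketch above fully runnable is to use LangGraph's MessagesState for the graph state and the prebuilt ToolNode for tool execution; both are assumptions here, not requirements of the bridge (the agent node and search_web tool are reused from the example above):

from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

workflow = StateGraph(MessagesState)
workflow.add_node("agent", agent)
workflow.add_node("tools", ToolNode([search_web]))  # executes tool_calls from the AIMessage
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", tools_condition)  # routes to "tools" or ends the run
workflow.add_edge("tools", "agent")
app = workflow.compile()

result = app.invoke({"messages": [("user", "Search the web for LangGraph release notes")]})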

Cost Savings Benchmark

Measured on a trading agent with 35+ tools, 2000-token system prompt, 12 cycles/hour:

Metric                  | LangChain Wrapper | GenAI Bridge
Input tokens/hour       | ~120,000          | ~25,000
Cost/day (Gemini Flash) | ~5 EUR            | ~1 EUR
Latency per call        | ~800 ms           | ~500 ms
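
As a sanity check, the token ratio in this table is consistent with the 5x cost-reduction figure quoted earlier:

# Ratios taken directly from the table above
print(120_000 / 25_000)  # ~4.8x fewer input tokens per hour
print(5 / 1)             # ~5x lower daily cost (5 EUR -> 1 EUR)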

API Reference

GenAIBridge(api_key, model, temperature, max_output_tokens)

Main bridge class.

bridge.set_tools(langchain_tools)

Register LangChain @tool functions for native function calling.

bridge.enable_caching(ttl_seconds=3600)

Enable context caching for system prompts.

bridge.invoke(messages, system_prompt=None, max_tool_output=3000)

Call Gemini and return a LangChain AIMessage. Compatible with tools_condition.

bridge.set_langchain_fallback(langchain_llm)

Set a LangChain ChatModel to fall back to if the native SDK call fails.

bridge.invalidate_cache()

Force cache invalidation.
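
Putting the reference together, a typical call sequence looks like this (the tool, argument values, and prompts below are illustrative):

from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph_genai_bridge import GenAIBridge

@tool
def ping(host: str) -> str:
    """Check whether a host is reachable."""
    return f"{host} is up"

bridge = GenAIBridge(
    api_key="your-google-api-key",
    model="gemini-2.5-flash",
    temperature=0.2,
    max_output_tokens=2048,
)
bridge.set_tools([ping])
bridge.enable_caching(ttl_seconds=3600)

response = bridge.invoke(
    [HumanMessage(content="Is example.com reachable?")],
    system_prompt="You are a network assistant.",
    max_tool_output=3000,
)
bridge.invalidate_cache()  # e.g. after changing the system prompt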

License

MIT License. Co-authored by Pierre Samson and Claude Opus (Anthropic).

Sister to the Phase 19 PyPI library family — same "small, tested, publishable" ethos: phawkes (Hawkes processes) · fisherrao (information geometry) · tailcor (tail-contagion decomposition) · diebold-yilmaz (spillover index) · hodgex (Hodge Laplacians).

Download files

Download the file for your platform.

Source Distribution

langgraph_genai_bridge-0.1.3.tar.gz (11.9 kB)

Built Distribution

langgraph_genai_bridge-0.1.3-py3-none-any.whl (11.9 kB)

File details

Details for the file langgraph_genai_bridge-0.1.3.tar.gz.

File metadata

  • Download URL: langgraph_genai_bridge-0.1.3.tar.gz
  • Size: 11.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.3

File hashes

Hashes for langgraph_genai_bridge-0.1.3.tar.gz
Algorithm   | Hash digest
SHA256      | 4e46b43c2106294fa3f27231fba9925e246dbb87be06ec16fc42f8e757d1ba7b
MD5         | 8457026ce750b61b9245487f536f8e6c
BLAKE2b-256 | 4cdf4dc42e37a2c5ef865b2c4e2d8f298a9bbdac3b5d7cb99fde77c3132c95dd

File details

Details for the file langgraph_genai_bridge-0.1.3-py3-none-any.whl.

File hashes

Hashes for langgraph_genai_bridge-0.1.3-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | 59436d92ba834b89f1cc618bc231b645acf73c8aad91cd09f81cb6f12ce2ecb1
MD5         | 1dc8dbf69caddea9dbea2fdee127a81f
BLAKE2b-256 | 65a5ab298cb9c0d24986dae7fd26ca2ee20ffa0c5b167190ce9356928b874ce1
