
langgraph-genai-bridge

Native Google GenAI SDK integration for LangGraph with context caching, tool auto-conversion, and bidirectional message translation.

By Pierre Samson (@darw007d) and Claude Opus (Anthropic)

Why?

LangGraph is the best orchestration framework for AI agents. Google's native GenAI SDK has features (context caching, native structured output) that LangChain's wrapper doesn't expose. This bridge gives you both.

Feature | LangChain Wrapper | This Bridge
Context Caching | Not supported | Built-in (5x cost reduction)
Structured Output | Via wrapper (buggy) | Native response_schema
Tool Calling | Wrapped | Native FunctionDeclaration
Latency | Higher (abstraction layer) | Lower (direct SDK)
LangGraph Compatible | Yes | Yes
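
For context, "Native response_schema" refers to the Google GenAI SDK's built-in structured output: you pass a Pydantic model (or JSON schema) as response_schema and get a parsed object back. A minimal sketch using the plain google-genai SDK (the Trade model and prompt are made up for illustration; the bridge's own structured-output surface may differ):

from pydantic import BaseModel
from google import genai

class Trade(BaseModel):
    ticker: str
    action: str  # "buy" or "sell"

client = genai.Client(api_key="your-google-api-key")
resp = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Should I buy or sell NVDA today? Answer as a trade.",
    config={
        "response_mime_type": "application/json",
        "response_schema": Trade,
    },
)
print(resp.parsed)  # a Trade instance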

Install

pip install langgraph-genai-bridge

Quick Start

from langgraph_genai_bridge import GenAIBridge

# Initialize
bridge = GenAIBridge(api_key="your-google-api-key", model="gemini-2.5-flash")

# Register your LangChain tools
bridge.set_tools(my_langchain_tools)

# Enable context caching (saves ~80% on input tokens)
bridge.enable_caching(ttl_seconds=3600)

# Use inside a LangGraph node — returns LangChain AIMessage
def orchestrator_node(state):
    response = bridge.invoke(
        state["messages"],
        system_prompt="You are a helpful trading agent."
    )
    return {"messages": [response]}

Features

Context Caching

Google's context caching lets you pay for your system prompt once per cache TTL instead of on every API call. For an agent running 12 cycles/hour with a 2,000-token system prompt, that is roughly 24,000 tokens of prompt input per hour that no longer has to be resent at full price.

bridge.enable_caching(ttl_seconds=3600)  # Cache for 1 hour

# First call: creates cache (normal cost)
# Subsequent calls: uses cache (near-free input tokens)
response = bridge.invoke(messages, system_prompt=my_long_prompt)
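
The arithmetic behind that claim, as a back-of-the-envelope check (this ignores the discounted per-call billing of cached tokens and cache storage fees, so treat it as a rough upper bound):

prompt_tokens = 2_000        # system prompt size
calls_per_hour = 12          # agent cycles per hour

without_cache = prompt_tokens * calls_per_hour  # 24,000 prompt tokens/hour at full price
with_cache = prompt_tokens                      # ~2,000 tokens/hour: cache written once per 1h TTL
print(without_cache - with_cache)               # ~22,000 tokens/hour no longer billed at full price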

Tool Auto-Conversion

Automatically converts LangChain @tool-decorated functions to the Google GenAI FunctionDeclaration format. No manual schema writing needed.

from langchain_core.tools import tool

@tool
def get_stock_price(ticker: str) -> str:
    """Get the current price for a stock ticker."""
    return f"{ticker}: $150.00"

bridge.set_tools([get_stock_price])  # Auto-converts
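
Under the hood this amounts to mapping each tool's name, docstring, and argument schema onto the SDK's types. A simplified sketch of that mapping, assuming the google-genai SDK and a standard LangChain BaseTool (the real conversion handles richer argument types than the string-only shortcut taken here, and to_function_declaration is a made-up name):

from google.genai import types

def to_function_declaration(lc_tool):
    # lc_tool.args is the JSON-schema "properties" dict LangChain derives
    # from the function signature; every argument is treated as a string
    # here purely to keep the sketch short.
    return types.FunctionDeclaration(
        name=lc_tool.name,
        description=lc_tool.description,
        parameters=types.Schema(
            type=types.Type.OBJECT,
            properties={arg: types.Schema(type=types.Type.STRING) for arg in lc_tool.args},
            required=list(lc_tool.args),
        ),
    )

declaration = to_function_declaration(get_stock_price)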

Bidirectional Message Translation

Seamlessly converts between LangChain message types and Google GenAI Content objects:

LangChain | Direction | Google GenAI
SystemMessage | -> | Context Cache / system_instruction
HumanMessage | -> | Content(role="user")
AIMessage (with tool_calls) | -> | Content(role="model") with FunctionCall
ToolMessage | -> | Content with FunctionResponse
AIMessage(content=..., tool_calls=[...]) | <- | Model response
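
The table boils down to a per-message-type dispatch. A rough illustration of the LangChain-to-GenAI direction (the helper name is made up, and real code also has to handle AIMessage tool calls and multi-part content):

from google.genai import types
from langchain_core.messages import HumanMessage, ToolMessage

def lc_message_to_content(msg):
    if isinstance(msg, HumanMessage):
        return types.Content(role="user", parts=[types.Part(text=msg.content)])
    if isinstance(msg, ToolMessage):
        # Tool results go back to the model as a FunctionResponse part
        return types.Content(
            role="user",
            parts=[types.Part(function_response=types.FunctionResponse(
                name=msg.name, response={"result": msg.content}
            ))],
        )
    raise NotImplementedError(type(msg))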

Graceful Fallback

If the native SDK call fails, the bridge automatically falls back to your LangChain wrapper:

from langchain_google_genai import ChatGoogleGenerativeAI

# Set up fallback
langchain_llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
langchain_with_tools = langchain_llm.bind_tools(my_tools)
bridge.set_langchain_fallback(langchain_with_tools)

# If native SDK fails -> seamlessly falls back to LangChain
response = bridge.invoke(messages)
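
Conceptually the fallback is a try/except around the native call (illustrative only, not the library's actual implementation):

try:
    response = bridge.invoke(messages)                  # native google-genai path
except Exception:
    response = langchain_with_tools.invoke(messages)    # LangChain wrapper path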

Full LangGraph Example

from langgraph.graph import StateGraph, MessagesState, START, END
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_core.tools import tool
from langgraph_genai_bridge import GenAIBridge

# Define tools
@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    return f"Results for: {query}"

# Initialize bridge
bridge = GenAIBridge(api_key="...", model="gemini-2.5-flash")
bridge.set_tools([search_web])
bridge.enable_caching(ttl_seconds=3600)

# LangGraph nodes
def agent(state):
    return {"messages": [bridge.invoke(state["messages"], system_prompt="You are helpful.")]}

# Tool execution: the prebuilt ToolNode covers the standard case;
# swap in your own execution logic here if you need custom handling
tool_node = ToolNode([search_web])

# Build graph (standard LangGraph pattern)
workflow = StateGraph(MessagesState)
workflow.add_node("agent", agent)
workflow.add_node("tools", tool_node)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges("agent", tools_condition)
workflow.add_edge("tools", "agent")
app = workflow.compile()
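
Invoking the compiled graph then looks like any other LangGraph app (the question is just an example):

result = app.invoke({"messages": [("user", "Find recent news about NVDA")]})
print(result["messages"][-1].content)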

Cost Savings Benchmark

Measured on a trading agent with 35+ tools, 2000-token system prompt, 12 cycles/hour:

Metric | LangChain Wrapper | GenAI Bridge
Input tokens/hour | ~120,000 | ~25,000
Cost/day (Gemini Flash) | ~5 EUR | ~1 EUR
Latency per call | ~800 ms | ~500 ms

API Reference

GenAIBridge(api_key, model, temperature, max_output_tokens)

Main bridge class.

bridge.set_tools(langchain_tools)

Register LangChain @tool functions for native function calling.

bridge.enable_caching(ttl_seconds=3600)

Enable context caching for system prompts.

bridge.invoke(messages, system_prompt=None, max_tool_output=3000)

Call Gemini and return a LangChain AIMessage. Compatible with tools_condition.

bridge.set_langchain_fallback(langchain_llm)

Set a LangChain ChatModel as fallback.

bridge.invalidate_cache()

Force cache invalidation.
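
Putting the reference together in one place (values and the placeholder names my_langchain_tools, SYSTEM_PROMPT, and messages are illustrative):

bridge = GenAIBridge(
    api_key="your-google-api-key",
    model="gemini-2.5-flash",
    temperature=0.2,
    max_output_tokens=2048,
)
bridge.set_tools(my_langchain_tools)
bridge.enable_caching(ttl_seconds=1800)

response = bridge.invoke(messages, system_prompt=SYSTEM_PROMPT, max_tool_output=2000)

# After changing the system prompt or tool set, drop the old cache so the
# next call rebuilds it with the new content.
bridge.invalidate_cache()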

License

MIT License. Co-authored by Pierre Samson and Claude Opus (Anthropic).

Sister to the Phase 19 PyPI library family — same "small, tested, publishable" ethos: phawkes (Hawkes processes) · fisherrao (information geometry) · tailcor (tail-contagion decomposition) · diebold-yilmaz (spillover index) · hodgex (Hodge Laplacians).

