# Freeplay LangGraph Integration
Freeplay integration for LangGraph and LangChain, providing observability and prompt management for your AI applications.
## Installation

Requirements: Python 3.10 or higher

```bash
pip install freeplay-langgraph
```
## Features
- 🔍 Automatic Observability: OpenTelemetry instrumentation for LangChain and LangGraph applications
- 📝 Prompt Management: Call Freeplay-hosted prompts with version control and environment management
- 🤖 Auto-Model Instantiation: Automatically create LangChain models based on Freeplay's configuration
- 🤖 Full Agent Support: Create LangGraph agents with ReAct loops, tool calling, and state management
- ⚡ Complete Async Support: All methods support async/await (ainvoke, astream, abatch, etc.)
- 💬 Conversation History: Native support for multi-turn conversations with LangGraph MessagesState
- 🛠️ Tool Support: Seamless integration with LangChain tools
- 🎛️ Middleware: Support for custom middleware to extend agent behavior
- 📊 Structured Output: ToolStrategy and ProviderStrategy for formatted responses
- 🌊 Streaming: Stream agent execution step-by-step or token-by-token (both simple and agent modes)
- 🧪 Test Execution Tracking: Track test runs and test cases for evaluation workflows
- 🎯 Multi-Provider Support: Works with OpenAI, Anthropic, Vertex AI, and more
- 🔒 Type Safety: Full generic typing support with proper IDE autocomplete
## Quick Start

### Configuration

Set up your environment variables:

```bash
export FREEPLAY_API_URL="https://app.freeplay.ai/api"
export FREEPLAY_API_KEY="fp-..."
export FREEPLAY_PROJECT_ID="..."
```

Or pass them directly when initializing:

```python
from freeplay_langgraph import FreeplayLangGraph

freeplay = FreeplayLangGraph(
    freeplay_api_url="https://app.freeplay.ai/api",
    freeplay_api_key="fp-...",
    project_id="...",
)
```
### Bundled Prompts

By default, `FreeplayLangGraph` uses the API-based template resolver to fetch prompts from Freeplay. If you need to use bundled prompts or custom prompt resolution logic, you can provide your own template resolver:

```python
from pathlib import Path

from freeplay.resources.prompts import FilesystemTemplateResolver
from freeplay_langgraph import FreeplayLangGraph

# Use filesystem-based prompts (e.g., bundled with your app)
freeplay = FreeplayLangGraph(
    template_resolver=FilesystemTemplateResolver(Path("bundled_prompts"))
)
```
## Usage

### Creating Agents with `create_agent`

The recommended way to use Freeplay with LangGraph is the `create_agent` method. It uses Freeplay-hosted prompts via `prompt_name` and fully supports LangGraph's agent capabilities, including the ReAct loop, tool calling, middleware, structured output, and streaming.

```python
from freeplay_langgraph import FreeplayLangGraph
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"Weather in {city}: Sunny, 72°F"

freeplay = FreeplayLangGraph()

# Create the agent (no variables parameter)
agent = freeplay.create_agent(
    prompt_name="weather-assistant",
    tools=[get_weather],
    checkpointer=MemorySaver(),
    environment="production"
)

# Invoke with variables in the input dict
result = agent.invoke({
    "messages": [HumanMessage(content="What's the weather?")],
    "variables": {"location": "San Francisco", "company": "Acme Corp"}
})

# Template-only invocation (no messages key)
result = agent.invoke({
    "variables": {"location": "New York", "company": "Acme Corp"}
})

print(result["messages"][-1].content)
```

Note: The system prompt and template messages are re-rendered on each model call using the variables from your input dict. Variables persist in checkpoint state automatically.
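To make the re-rendering behavior concrete, here is a toy, dependency-free sketch of what "re-rendered on each model call" means. All names here (`render_system_prompt`, the template text, the `state` dict shape) are illustrative stand-ins, not part of the freeplay-langgraph API:

```python
# Illustrative sketch only - a toy version of per-call prompt re-rendering.
from string import Template

# Hypothetical prompt template with two variables, like a Freeplay-hosted prompt.
TEMPLATE = Template("You are a helpful assistant for $company in $location.")

def render_system_prompt(state: dict) -> str:
    """Re-render the system prompt from the variables held in agent state,
    as the SDK does before each model call."""
    return TEMPLATE.substitute(state["variables"])

# Variables live in state, so they persist across calls on the same thread.
state = {
    "messages": [],
    "variables": {"location": "San Francisco", "company": "Acme Corp"},
}
print(render_system_prompt(state))
# -> You are a helpful assistant for Acme Corp in San Francisco.

# Updating a variable changes what the *next* model call sees.
state["variables"]["location"] = "New York"
print(render_system_prompt(state))
# -> You are a helpful assistant for Acme Corp in New York.
```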
### Streaming Agent Execution

Stream agent steps in real time:

```python
agent = freeplay.create_agent(
    prompt_name="weather-assistant",
    tools=[get_weather]
)

# Stream with variables in the input dict
for chunk in agent.stream(
    {
        "messages": [HumanMessage(content="What's the weather?")],
        "variables": {"city": "Seattle", "company": "Acme"}
    },
    stream_mode="values"
):
    latest_message = chunk["messages"][-1]
    if hasattr(latest_message, "content") and latest_message.content:
        print(f"Agent: {latest_message.content}")
    elif hasattr(latest_message, "tool_calls") and latest_message.tool_calls:
        print(f"Calling tools: {[tc['name'] for tc in latest_message.tool_calls]}")
```
### Custom Middleware

Add custom behavior to your agent with middleware (requires LangChain 1.0+):

```python
from langchain.agents.middleware import AgentMiddleware

class LoggingMiddleware(AgentMiddleware):
    """Custom middleware that logs before model calls."""

    def before_model(self, state, runtime):
        message_count = len(state.get("messages", []))
        print(f"About to call model with {message_count} messages")
        return None

    def after_model(self, state, runtime):
        return None

    def wrap_tool_call(self, request, handler):
        return handler(request)

agent = freeplay.create_agent(
    prompt_name="weather-assistant",
    tools=[get_weather],
    middleware=[LoggingMiddleware()]
)

# Invoke with variables
result = agent.invoke({
    "messages": [HumanMessage("What's the weather?")],
    "variables": {"city": "Boston", "company": "Acme"}
})
```
### Structured Output

Get structured responses using `ToolStrategy` or `ProviderStrategy`:

```python
from pydantic import BaseModel
from langchain.agents.structured_output import ToolStrategy

class WeatherReport(BaseModel):
    city: str
    temperature: float
    conditions: str

agent = freeplay.create_agent(
    prompt_name="weather-assistant",
    tools=[get_weather],
    response_format=ToolStrategy(WeatherReport)
)

result = agent.invoke({
    "messages": [HumanMessage(content="Get weather")],
    "variables": {"city": "NYC", "company": "Acme"}
})

# Access the structured output
weather_report = result["structured_response"]
print(f"{weather_report.city}: {weather_report.temperature}°F, {weather_report.conditions}")
```
## Prompt Management with Auto-Model Instantiation

For simple use cases without the full agent loop, use the `invoke` method to call a Freeplay-hosted prompt and let the SDK automatically instantiate the correct model:

```python
from freeplay_langgraph import FreeplayLangGraph

freeplay = FreeplayLangGraph()

# Invoke a prompt - the model is created automatically from Freeplay's config
response = freeplay.invoke(
    prompt_name="weather-assistant",
    variables={"city": "San Francisco"},
    environment="production"
)
```
### Async Support

All methods support async/await for better performance in async applications:

```python
# Async invocation
response = await freeplay.ainvoke(
    prompt_name="weather-assistant",
    variables={"city": "San Francisco"}
)

# Async streaming
async for chunk in freeplay.astream(
    prompt_name="weather-assistant",
    variables={"city": "San Francisco"}
):
    print(chunk.content, end="", flush=True)
```
### Streaming Simple Invocations

Stream model responses without the full agent loop:

```python
# Synchronous streaming
for chunk in freeplay.stream(
    prompt_name="weather-assistant",
    variables={"city": "San Francisco"}
):
    print(chunk.content, end="", flush=True)

# Async streaming
async for chunk in freeplay.astream(
    prompt_name="weather-assistant",
    variables={"city": "San Francisco"}
):
    print(chunk.content, end="", flush=True)
```
### Using Custom Models

You can also provide your own pre-configured model:

```python
from langchain_openai import ChatOpenAI

from freeplay_langgraph import FreeplayLangGraph

freeplay = FreeplayLangGraph()
model = ChatOpenAI(model="gpt-4", temperature=0.7)

response = freeplay.invoke(
    prompt_name="weather-assistant",
    variables={"city": "New York"},
    model=model
)
```
### Conversation History (Multi-turn Chat)

Maintain conversation context with history:

```python
from langchain_core.messages import HumanMessage, AIMessage

from freeplay_langgraph import FreeplayLangGraph

freeplay = FreeplayLangGraph()

# Build conversation history
history = [
    HumanMessage(content="What's the weather in Paris?"),
    AIMessage(content="It's sunny and 22°C in Paris."),
    HumanMessage(content="What about in winter?")
]

response = freeplay.invoke(
    prompt_name="weather-assistant",
    variables={"city": "Paris"},
    history=history
)
```
### Tool Calling

Bind LangChain tools to your prompts:

```python
from langchain_core.tools import tool

from freeplay_langgraph import FreeplayLangGraph

@tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    # Your weather API logic here
    return f"Weather in {city}: Sunny, 22°C"

freeplay = FreeplayLangGraph()

response = freeplay.invoke(
    prompt_name="weather-assistant",
    variables={"city": "London"},
    tools=[get_weather]
)
```
## Test Execution Tracking

Track test runs for evaluation workflows by pulling test cases from Freeplay and executing them with automatic tracking.

### Creating Test Runs

```python
import os

from freeplay_langgraph import FreeplayLangGraph

freeplay = FreeplayLangGraph()

# Create a test run from a dataset
test_run = freeplay.client.test_runs.create(
    project_id=os.getenv("FREEPLAY_PROJECT_ID"),
    testlist="name of the dataset",
    name="name your test run",
)
print(f"Created test run: {test_run.id}")
```
### Executing Test Cases with Simple Invocations

For simple prompt invocations, use the test tracking parameters directly:

```python
# Execute each test case
for test_case in test_run.test_cases:
    response = freeplay.invoke(
        prompt_name="my-prompt",
        variables=test_case.variables,
        test_run_id=test_run.id,
        test_case_id=test_case.id
    )
    print(f"Test case {test_case.id}: {response.content}")
```
### Executing Test Cases with Agents

For LangGraph agents, pass test tracking metadata via `config` and use dynamic variables per test case:

```python
from langchain_core.messages import HumanMessage

# Create the agent once
agent = freeplay.create_agent(
    prompt_name="my-prompt",
    tools=[get_weather],
)

# Execute each test case with variables in the input
for test_case in test_run.trace_test_cases:
    result = agent.invoke(
        {
            "messages": [HumanMessage(content=test_case.input)],
            "variables": test_case.variables
        },
        config={
            "metadata": {
                "freeplay.test_run_id": test_run.id,
                "freeplay.test_case_id": test_case.id
            }
        }
    )
    print(f"Test case {test_case.id}: {result['messages'][-1].content}")
```
## API Reference

### create_agent()

Create a LangGraph agent with a Freeplay-hosted prompt and full observability.

Parameters:

- `prompt_name` (str): Name of the prompt in Freeplay
- `tools` (list, optional): List of tools for the agent to use
- `environment` (str, optional): Environment to use (default: `"latest"`)
- `model` (BaseChatModel, optional): Pre-instantiated model (auto-created if not provided)
- `state_schema` (type, optional): Custom state schema (TypedDict)
- `context_schema` (type, optional): Context schema for runtime context
- `middleware` (list, optional): List of middleware to apply (Freeplay middleware prepended automatically)
- `response_format` (optional): Structured output format (ToolStrategy or ProviderStrategy)
- `checkpointer` (BaseCheckpointSaver, optional): Checkpointer for state persistence
- `validate_tools` (bool, optional): Validate tools against the Freeplay schema (default: True)

Returns: `FreeplayAgent` - a wrapper around the compiled LangGraph agent that injects Freeplay metadata
Variables in Input Dict:

Pass variables in the input dict alongside messages. The Freeplay prompt is re-rendered on each model call:

```python
# With messages and variables
result = agent.invoke({
    "messages": [HumanMessage("Question")],
    "variables": {"location": "SF", "company": "Acme"}
})

# Template-only (no messages key)
result = agent.invoke({
    "variables": {"location": "NYC", "company": "Acme"}
})

# Streaming
for chunk in agent.stream(
    {
        "messages": [...],
        "variables": {...}
    },
    stream_mode="values"
):
    print(chunk)

# Batch (each input can have different variables)
results = agent.batch([
    {"messages": [...], "variables": {"location": "SF"}},
    {"messages": [...], "variables": {"location": "NYC"}}
])
```

Note: For state management methods, use `unwrap()` - see State Management below.
### invoke() / ainvoke() (Simple Invocations)

Invoke a model with a Freeplay-hosted prompt (simple use cases without the agent loop).

Parameters:

- `prompt_name` (str): Name of the prompt in Freeplay
- `variables` (dict): Variables to render the prompt template (re-rendered on each call)
- `environment` (str, optional): Environment to use (default: `"latest"`)
- `model` (BaseChatModel, optional): Pre-instantiated model
- `history` (list, optional): Conversation history
- `tools` (list, optional): Tools to bind to the model
- `test_run_id` (str, optional): Test run ID for tracking
- `test_case_id` (str, optional): Test case ID for tracking

Returns: The model's response message

Async: Use `ainvoke()` with the same parameters for async execution.
### stream() / astream()

Stream model responses with a Freeplay-hosted prompt (simple use cases).

Parameters: Same as `invoke()`

Yields: Chunks from the model's streaming response

Async: Use `astream()` with the same parameters for async streaming.
## State Management

When using agents with checkpointers, you can access LangGraph's state management features via the `unwrap()` method. This is necessary because `FreeplayAgent` extends `RunnableBindingBase` (LangChain's official wrapper pattern), which provides automatic metadata injection but doesn't directly expose `CompiledStateGraph`-specific methods.
### Core Invocation (Works Directly)

All standard invocation methods work without `unwrap()`:

```python
agent = freeplay.create_agent(
    prompt_name="assistant",
    checkpointer=MemorySaver()
)

# ✅ All of these work directly - no unwrap needed
result = agent.invoke({
    "messages": [...],
    "variables": {"location": "SF", "company": "Acme"}
})
stream = agent.stream({"messages": [...], "variables": {...}})
batched = agent.batch([{"messages": [...], "variables": {...}}])
graph = agent.get_graph()
```
### State Management (Requires unwrap())

For `CompiledStateGraph`-specific methods, use `unwrap()`:

#### Inspecting Agent State

```python
from langgraph.checkpoint.memory import MemorySaver

agent = freeplay.create_agent(
    prompt_name="assistant",
    checkpointer=MemorySaver()
)

config = {"configurable": {"thread_id": "user-123"}}

# Run the agent with variables in the input
agent.invoke(
    {
        "messages": [HumanMessage(content="Hello")],
        "variables": {"user_tier": "premium", "company": "Acme"}
    },
    config=config
)

# Inspect state via unwrap()
state = agent.unwrap().get_state(config)
print(f"Current messages: {state.values['messages']}")
print(f"Variables in state: {state.values.get('variables', {})}")
print(f"Next steps: {state.next}")
```
#### Human-in-the-Loop Workflows

```python
agent = freeplay.create_agent(
    prompt_name="booking-assistant",
    tools=[book_flight],
    checkpointer=MemorySaver()
)

config = {"configurable": {"thread_id": "booking-456"}}

# The agent runs and stops before booking (if configured with interrupt_before)
result = agent.invoke(
    {
        "messages": [HumanMessage(content="Book flight to Paris")],
        "variables": {"user_tier": "premium", "company": "Acme Travel"}
    },
    config={**config, "interrupt_before": ["book_flight"]}
)

# Review and approve
print("Agent wants to book flight. Approve? (y/n)")
if input() == "y":
    # Update state to continue
    agent.unwrap().update_state(
        config,
        {"approval": "granted"},
        as_node="human"
    )
    # Resume execution
    result = agent.invoke(None, config=config)
```
#### Multi-Agent Systems

```python
# For agents with nested subgraphs
coordinator_agent = freeplay.create_agent(
    prompt_name="coordinator"
)

# Access subgraph information (get_subgraphs yields (name, subgraph) pairs)
subgraphs = dict(coordinator_agent.unwrap().get_subgraphs(recurse=True))
print(f"Available sub-agents: {list(subgraphs.keys())}")
```
#### State History

```python
# View execution history
config = {"configurable": {"thread_id": "thread-123"}}

for state in agent.unwrap().get_state_history(config, limit=5):
    print(f"Checkpoint: {state.config['configurable']['checkpoint_id']}")
    print(f"Messages: {len(state.values['messages'])}")
```
### Methods Requiring unwrap()

State Access:

- `get_state(config)` / `aget_state(config)` - Get the current state snapshot
- `get_state_history(config)` / `aget_state_history(config)` - View history

State Modification:

- `update_state(config, values)` / `aupdate_state(config, values)` - Manual state updates
- `bulk_update_state(config, updates)` / `abulk_update_state(config, updates)` - Batch updates

Advanced Features:

- `get_subgraphs()` / `aget_subgraphs()` - Access nested agents
- `clear_cache()` / `aclear_cache()` - Clear the LLM response cache
### Type Safety with unwrap()

For full type hints when using state methods:

```python
from typing import cast

from langgraph.graph.state import CompiledStateGraph

agent = freeplay.create_agent(...)

# Option 1: Direct unwrap (works at runtime)
state = agent.unwrap().get_state(config)

# Option 2: Cast for full type hints
compiled = cast(CompiledStateGraph, agent.unwrap())
state = compiled.get_state(config)  # ✅ Full IDE autocomplete
```
## Observability

The SDK automatically instruments your LangChain and LangGraph applications with OpenTelemetry. All traces are sent to Freeplay with the following metadata:
- Input variables
- Prompt template version ID
- Environment name
- Test run and test case IDs (if provided)
All metadata is injected automatically without requiring extra configuration or manual instrumentation.
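As a rough mental model, each trace carries a metadata record shaped like the dict below. This is purely illustrative: the key names are descriptive placeholders, not the SDK's actual OpenTelemetry attribute keys, and you never build this dict yourself:

```python
# Illustrative only: the kind of metadata attached to each trace, per the
# list above. Key names are placeholders, not real attribute keys.
trace_metadata = {
    "input_variables": {"city": "San Francisco"},
    "prompt_template_version_id": "...",  # version of the prompt that was rendered
    "environment": "production",
    "test_run_id": None,   # populated when invoked with test_run_id
    "test_case_id": None,  # populated when invoked with test_case_id
}

print(sorted(trace_metadata))
```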
## Architecture

The library uses LangChain's official `RunnableBindingBase` pattern to inject Freeplay metadata into all agent invocations. This provides:

- LangChain-Idiomatic: Uses the same pattern as `.bind()`, `.with_config()`, and `.with_retry()` throughout LangChain
- Automatic Coverage: All Runnable methods work automatically (`invoke`, `ainvoke`, `stream`, `astream`, `batch`, `abatch`, `astream_events`, `transform`, `atransform`, etc.)
- Type Safety: Generic typing with proper IDE autocomplete for invocation methods
- No Config Mutation: User configurations are never modified
- Future-Proof: New LangChain methods are automatically supported via inheritance
- State Management via unwrap(): Access to `CompiledStateGraph`-specific methods for checkpointing and state operations

Key Points:

- `FreeplayAgent` extends `RunnableBindingBase` and uses `config_factories` for metadata injection
- Client methods (`invoke`, `stream`, etc.) use `.with_config()` to bind metadata (LangChain's official pattern)
- Both approaches follow LangChain's patterns used throughout the ecosystem
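The wrapper pattern described above can be sketched in plain Python, with no LangChain dependency. The class and function names below are invented for illustration; the real implementation lives in LangChain's `RunnableBindingBase`:

```python
# Conceptual sketch of metadata injection via a binding wrapper:
# merge wrapper-owned metadata into each call's config without
# mutating the caller's dict.
class MetadataBinding:
    """Toy stand-in for a RunnableBindingBase-style wrapper."""

    def __init__(self, inner, metadata):
        self.inner = inner          # the wrapped callable (toy "agent")
        self.metadata = metadata    # metadata to inject on every call

    def invoke(self, value, config=None):
        config = dict(config or {})  # copy: never mutate the user's config
        merged = {**self.metadata, **config.get("metadata", {})}
        return self.inner(value, {**config, "metadata": merged})

def model_call(value, config):
    # Toy inner runnable: echoes its input and the metadata it received.
    return value, config["metadata"]

bound = MetadataBinding(model_call, {"freeplay.environment": "production"})
result, meta = bound.invoke("hi", {"metadata": {"user": "abc"}})
print(meta)  # contains both the injected and the user-supplied metadata
```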
## Provider Support

The SDK supports automatic model instantiation for the following providers:

- OpenAI: requires the `langchain-openai` package
- Anthropic: requires the `langchain-anthropic` package
- Vertex AI: requires the `langchain-google-vertexai` package

Install the required provider package:

```bash
pip install langchain-openai
# or
pip install langchain-anthropic
# or
pip install langchain-google-vertexai
```
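Conceptually, auto-instantiation dispatches from the provider configured in Freeplay to its integration package. The mapping and function below are an illustrative sketch, not the SDK's actual resolution code:

```python
# Illustrative sketch of provider-to-package dispatch for auto-model
# instantiation; the real logic lives inside the SDK.
PROVIDER_PACKAGES = {
    "openai": "langchain_openai",
    "anthropic": "langchain_anthropic",
    "vertex_ai": "langchain_google_vertexai",
}

def resolve_provider_package(provider: str) -> str:
    """Return the integration package for a provider, or raise if unsupported."""
    try:
        return PROVIDER_PACKAGES[provider]
    except KeyError:
        raise ValueError(f"Unsupported provider: {provider!r}") from None

print(resolve_provider_package("openai"))  # langchain_openai
```

If the mapped package is not installed, importing it fails, which is why the matching `pip install` above is required.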