# youngjin-langchain-tools

LangGraph utilities for Streamlit - StreamlitLanggraphHandler and more.

youngjin-langchain-tools is a collection of LangGraph utilities designed to simplify AI application development with Streamlit and other frameworks.
## Features

- **StreamlitLanggraphHandler**: A drop-in replacement for the deprecated StreamlitCallbackHandler, designed for LangGraph agents
- **Real-time Streaming**: Stream agent responses with live token updates
- **Tool Visualization**: Display tool calls and results with expandable UI components
- **LangSmith Integration**: Built-in support for LangSmith feedback collection via run_id tracking
- **Configurable**: Customize display options, labels, and behavior
## Installation

```bash
pip install youngjin-langchain-tools
```

Or using uv:

```bash
uv add youngjin-langchain-tools
```

With Streamlit support:

```bash
pip install "youngjin-langchain-tools[streamlit]"
```
## Quick Start

### Basic Usage with LangGraph Agent

```python
import streamlit as st
from langgraph.checkpoint.memory import InMemorySaver
from langchain.agents import create_agent
from youngjin_langchain_tools import StreamlitLanggraphHandler

# Create your LangGraph agent
agent = create_agent(
    model=llm,
    tools=tools,
    checkpointer=InMemorySaver(),
)

# In your Streamlit app
with st.chat_message("assistant"):
    handler = StreamlitLanggraphHandler(
        container=st.container(),
        expand_new_thoughts=True,
    )
    response = handler.invoke(
        agent=agent,
        input={"messages": [{"role": "user", "content": prompt}]},
        config={"configurable": {"thread_id": thread_id}},
    )
    # response contains the final text
```
## Before & After Comparison

Before (LangChain < 1.0 with AgentExecutor):

```python
from langchain.callbacks import StreamlitCallbackHandler
from langchain_core.runnables import RunnableConfig

with st.chat_message("assistant"):
    st_cb = StreamlitCallbackHandler(st.container(), expand_new_thoughts=True)
    response = agent_executor.invoke(
        {"input": prompt},
        config=RunnableConfig(callbacks=[st_cb]),
    )
    st.write(response["output"])
```

After (LangGraph with StreamlitLanggraphHandler):

```python
from youngjin_langchain_tools import StreamlitLanggraphHandler

with st.chat_message("assistant"):
    handler = StreamlitLanggraphHandler(st.container(), expand_new_thoughts=True)
    response = handler.invoke(
        agent=langgraph_agent,
        input={"messages": [{"role": "user", "content": prompt}]},
        config={"configurable": {"thread_id": thread_id}},
    )
    # response is the final text directly
```
## Advanced Usage with Custom Configuration

```python
from youngjin_langchain_tools import (
    StreamlitLanggraphHandler,
    StreamlitLanggraphHandlerConfig,
)

# Create a custom configuration
config = StreamlitLanggraphHandlerConfig(
    expand_new_thoughts=True,
    max_tool_content_length=3000,
    show_tool_calls=True,
    show_tool_results=True,
    thinking_label="🧠 Processing...",
    complete_label="✨ Done!",
    tool_call_emoji="⚡",
    tool_complete_emoji="✓",
    cursor="█",
)

handler = StreamlitLanggraphHandler(
    container=st.container(),
    config=config,
)

# Use stream() for more control
for event in handler.stream(agent, input, config):
    if event["type"] == "tool_call":
        print(f"Tool called: {event['data']['name']}")
    elif event["type"] == "token":
        # Custom token handling
        pass

final_response = handler.get_response()
```
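When consuming `stream()` yourself, it can help to route events through a small pure-Python dispatcher. A minimal sketch, assuming only the `type` and `data` keys shown in the example above (the exact payload shape of token events is an assumption, not documented here):

```python
def summarize_events(events):
    """Collect tool names and concatenate streamed token text.

    Assumes each event is a dict with a "type" key; tool_call events
    carry the tool "name" under "data" (as in the loop above), and
    token events are assumed to carry their text under "data".
    """
    tools, tokens = [], []
    for event in events:
        if event["type"] == "tool_call":
            tools.append(event["data"]["name"])
        elif event["type"] == "token":
            tokens.append(event["data"])
    return {"tools": tools, "text": "".join(tokens)}

# Synthetic events for illustration:
events = [
    {"type": "tool_call", "data": {"name": "search"}},
    {"type": "token", "data": "Hel"},
    {"type": "token", "data": "lo"},
]
print(summarize_events(events))  # {'tools': ['search'], 'text': 'Hello'}
```

Keeping the dispatcher free of Streamlit calls makes it easy to unit-test the event handling separately from the UI.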
## LangSmith Feedback Integration

The handler automatically tracks run_id for LangSmith feedback collection. This enables users to provide feedback on agent responses.

```python
import streamlit as st
from langsmith import Client
from streamlit_feedback import streamlit_feedback
from youngjin_langchain_tools import StreamlitLanggraphHandler

# Create handler with LangSmith integration (enabled by default)
handler = StreamlitLanggraphHandler(
    container=st.container(),
    enable_langsmith=True,  # Default
    langsmith_run_name="customer_support_agent",  # Optional custom name
)

# Invoke the agent
response = handler.invoke(agent, input, config)

# After execution, run_id is available for feedback
if handler.run_id:
    st.session_state["run_id"] = handler.run_id
    print(f"Run ID: {handler.run_id}")

# Use run_id with LangSmith feedback
def add_feedback():
    run_id = st.session_state.get("run_id")
    if not run_id:
        st.info("No run_id available for feedback.")
        return
    feedback = streamlit_feedback(
        feedback_type="thumbs",
        optional_text_label="Leave a comment",
        key=f"feedback_{run_id}",
    )
    if feedback:
        langsmith_client = Client()
        score = 1 if feedback["score"] == "👍" else 0
        langsmith_client.create_feedback(
            run_id,
            f"thumbs {feedback['score']}",
            score=score,
            comment=feedback.get("text"),
        )
        st.success("Feedback submitted!")

add_feedback()
```

To disable LangSmith integration:

```python
handler = StreamlitLanggraphHandler(
    container=st.container(),
    enable_langsmith=False,  # Disable run_id tracking
)
```
## API Reference

### StreamlitLanggraphHandler

Main handler class for streaming LangGraph agents in Streamlit.

#### Constructor Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `container` | `Any` | required | Streamlit container to render in |
| `expand_new_thoughts` | `bool` | `True` | Expand status container for tool calls |
| `max_tool_content_length` | `int` | `2000` | Max chars of tool output to display |
| `show_tool_calls` | `bool` | `True` | Show tool call information |
| `show_tool_results` | `bool` | `True` | Show tool execution results |
| `thinking_label` | `str` | `"🤔 Thinking..."` | Label while processing |
| `complete_label` | `str` | `"✅ Complete!"` | Label when complete |
| `enable_langsmith` | `bool` | `True` | Enable LangSmith run_id tracking |
| `langsmith_project` | `str` | `None` | LangSmith project name |
| `langsmith_run_name` | `str` | `"streamlit_agent_run"` | Name for the LangSmith run |
| `config` | `Config` | `None` | Optional config object |
#### Methods

| Method | Description |
|---|---|
| `invoke(agent, input, config)` | Execute agent and return final response |
| `stream(agent, input, config)` | Generator yielding streaming events |
| `get_response()` | Get accumulated response text |
#### Properties

| Property | Type | Description |
|---|---|---|
| `run_id` | `Optional[str]` | LangSmith run ID for feedback collection |
| `config` | `StreamlitLanggraphHandlerConfig` | Handler configuration |
### StreamlitLanggraphHandlerConfig

Configuration dataclass for handler customization.
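The library's actual field set is not reproduced here, but a minimal sketch of such a dataclass, with names and defaults taken from the constructor table above (the real class may differ), looks like:

```python
from dataclasses import dataclass

@dataclass
class StreamlitLanggraphHandlerConfig:
    """Illustrative sketch only: fields mirror the documented
    constructor defaults; the shipped class may carry more fields."""
    expand_new_thoughts: bool = True
    max_tool_content_length: int = 2000
    show_tool_calls: bool = True
    show_tool_results: bool = True
    thinking_label: str = "🤔 Thinking..."
    complete_label: str = "✅ Complete!"
    enable_langsmith: bool = True
    langsmith_run_name: str = "streamlit_agent_run"

# Overriding one field keeps every other default intact:
cfg = StreamlitLanggraphHandlerConfig(max_tool_content_length=3000)
print(cfg.max_tool_content_length)  # 3000
```

Because it is a dataclass, any subset of fields can be overridden per handler while the rest fall back to the defaults.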
## Architecture

```
youngjin_langchain_tools/
├── __init__.py                        # Package exports
├── handlers/                          # UI framework handlers
│   ├── __init__.py
│   └── streamlit_langgraph_handler.py
└── utils/                             # Utility functions
    ├── __init__.py
    └── config.py
```
## Requirements

- Python 3.12+
- LangGraph 0.2+
- Streamlit 1.30+ (optional, for StreamlitLanggraphHandler)
- LangSmith (optional, for feedback integration)

## License

Apache License 2.0 - see LICENSE for details.

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
## File details

Details for the file youngjin_langchain_tools-0.3.4.tar.gz.

### File metadata

- Download URL: youngjin_langchain_tools-0.3.4.tar.gz
- Size: 18.5 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `2ef3c0380e1ea1f53baec69d04784575571b739ce22faab1d8a3fb03233592af` |
| MD5 | `cb792043d3a591a74fabaa4bd5260286` |
| BLAKE2b-256 | `9fb9419cc1c4b5f53c9454f2fa3849ec448bc3b3789c7e0bfafb2918a5c64c9c` |
### Provenance

The following attestation bundles were made for youngjin_langchain_tools-0.3.4.tar.gz:

Publisher: publish.yml on CocoRoF/youngjin-langchain-tools

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: youngjin_langchain_tools-0.3.4.tar.gz
- Subject digest: 2ef3c0380e1ea1f53baec69d04784575571b739ce22faab1d8a3fb03233592af
- Sigstore transparency entry: 1059446082
- Permalink: CocoRoF/youngjin-langchain-tools@ebdc44f3feeb894d17a32e0eff91f756d30ce362
- Branch / Tag: refs/heads/deploy
- Owner: https://github.com/CocoRoF
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ebdc44f3feeb894d17a32e0eff91f756d30ce362
- Trigger Event: push
## File details

Details for the file youngjin_langchain_tools-0.3.4-py3-none-any.whl.

### File metadata

- Download URL: youngjin_langchain_tools-0.3.4-py3-none-any.whl
- Size: 18.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.7

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `982fc12e3823de689c7f3d7c423f6f79401a6a08945640403b6e1e5a74845c41` |
| MD5 | `6d69ce99192321fa118da1b6245ab341` |
| BLAKE2b-256 | `e86eff932515ec87a633ba73f5cea2eb37ae2d9422a6c000f560cdfd0a38d4cf` |
### Provenance

The following attestation bundles were made for youngjin_langchain_tools-0.3.4-py3-none-any.whl:

Publisher: publish.yml on CocoRoF/youngjin-langchain-tools

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: youngjin_langchain_tools-0.3.4-py3-none-any.whl
- Subject digest: 982fc12e3823de689c7f3d7c423f6f79401a6a08945640403b6e1e5a74845c41
- Sigstore transparency entry: 1059446088
- Permalink: CocoRoF/youngjin-langchain-tools@ebdc44f3feeb894d17a32e0eff91f756d30ce362
- Branch / Tag: refs/heads/deploy
- Owner: https://github.com/CocoRoF
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@ebdc44f3feeb894d17a32e0eff91f756d30ce362
- Trigger Event: push