# LiveKit LangGraph Plugin
An enhanced LiveKit plugin for seamless integration with LangGraph workflows. This plugin provides universal filtering capabilities to ensure only user-facing responses are spoken by voice agents, while filtering out tool calls and intermediate workflow outputs.
## 🌟 Key Features
- Universal Tool Filtering: Automatically filters out tool calls, tool messages, and tool-related metadata
- Node-Based Filtering: Selectively allow messages from specific LangGraph nodes (optional)
- Real-Time Streaming: Preserves streaming tokens for smooth, real-time voice synthesis
- Workflow Support: Works seamlessly with any LangGraph workflow
- RemoteGraph & CompiledStateGraph: Supports both local and remote LangGraph execution
## 🚀 Quick Start

### Installation

```bash
pip install livekit-plugins-langgraph
```

For development with examples:

```bash
pip install livekit-plugins-langgraph[examples]
```

### Basic Usage
```python
from livekit.agents import Agent, AgentSession
from livekit.plugins.langgraph import LLMAdapter
from langgraph.pregel.remote import RemoteGraph

# Connect to your LangGraph workflow
graph = RemoteGraph("your_workflow", url="http://localhost:2024")

# Create adapter with filtering
llm_adapter = LLMAdapter(
    graph,
    config={"configurable": {"thread_id": "unique_session_id"}},
    langgraph_node="final_node",  # Only allow specific node responses (optional)
)

# Use in LiveKit agent
agent = Agent(
    instructions="You are a helpful AI assistant.",
    llm=llm_adapter,
)
```
## 📖 Detailed Usage

### Node-Based Filtering
Control which nodes in your LangGraph workflow can speak:
```python
# Only a specific node's responses
LLMAdapter(graph, langgraph_node="my_node")

# Multiple specific nodes
LLMAdapter(graph, langgraph_node=["node_a", "node_b"])

# No node filtering (only tool filtering)
LLMAdapter(graph, langgraph_node=None)
```
Note: Node names like "supervisor", "my_node", etc. are just examples. Use the actual node names from your specific LangGraph workflow.
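When LangGraph streams with `stream_mode="messages"`, each event carries a metadata dict whose `langgraph_node` key names the node that produced the chunk. The node check can be sketched roughly like this (illustrative only, not the plugin's actual code):

```python
from typing import List, Optional, Union


def passes_node_filter(metadata: dict,
                       allowed: Union[str, List[str], None]) -> bool:
    """Return True if a chunk's producing node is allowed to speak.

    `metadata` stands in for the per-chunk metadata dict from
    LangGraph's message stream; "langgraph_node" names the emitter.
    """
    if allowed is None:  # no node filtering configured
        return True
    names = [allowed] if isinstance(allowed, str) else list(allowed)
    return metadata.get("langgraph_node") in names
```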
### Complete Voice Agent Example
```python
import asyncio

from livekit.agents import Agent, AgentSession, JobContext
from livekit.plugins import deepgram, silero
from livekit.plugins.langgraph import LLMAdapter
from livekit.plugins.turn_detector.multilingual import MultilingualModel
from langgraph.pregel.remote import RemoteGraph


async def voice_agent_example(ctx: JobContext):
    # Connect to LangGraph
    graph = RemoteGraph("my_workflow", url="http://localhost:2024")

    # Configure with filtering
    llm_adapter = LLMAdapter(
        graph,
        config={"configurable": {"thread_id": "session_123"}},
        langgraph_node="final_node",  # Replace with your actual node name
    )

    # Create voice agent
    agent = Agent(
        instructions="You are a helpful AI assistant.",
        llm=llm_adapter,
    )

    # Set up voice session
    session = AgentSession(
        vad=silero.VAD.load(),
        stt=deepgram.STT(model="nova-2", language="en"),
        tts=deepgram.TTS(model="aura-asteria-en"),
        turn_detection=MultilingualModel(),
    )

    # Start voice interaction
    await session.start(agent=agent, room=ctx.room)
```
## 🔧 Configuration Options

### LLMAdapter Parameters
| Parameter | Type | Description |
|---|---|---|
| `graph` | `PregelProtocol` | LangGraph instance (`RemoteGraph` or `CompiledStateGraph`) |
| `config` | `RunnableConfig` | Optional configuration for graph execution |
| `langgraph_node` | `str \| list[str] \| None` | Node name(s) to filter by; only messages from these nodes will be spoken |
### Filtering Behavior
The plugin automatically filters out:
- ✅ Tool Calls: Function calls between workflow nodes
- ✅ Tool Messages: Responses from tools and external services
- ✅ Tool Metadata: Tool-related metadata and chunks
- ✅ Intermediate Outputs: Internal workflow communications
While preserving:
- ✅ Streaming Tokens: Real-time response building
- ✅ User-Facing Content: Final conversational responses
- ✅ Allowed Nodes: Messages from specified nodes only (if configured)
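As an illustration of the idea (a sketch, not the plugin's actual implementation), the universal filter amounts to a predicate like the following over each streamed message chunk; the dict keys mirror LangChain message fields (`type`, `tool_calls`, `content`):

```python
def is_speakable(chunk: dict) -> bool:
    """Sketch of the universal filter: block tool traffic, keep text.

    `chunk` stands in for a streamed LangGraph message chunk.
    """
    if chunk.get("type") == "tool":  # tool result messages
        return False
    if chunk.get("tool_calls") or chunk.get("tool_call_chunks"):
        return False  # tool invocation (or partial invocation) chunks
    return bool(chunk.get("content"))  # keep only non-empty text
```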
## 📁 Examples

### Voice Assistant with LangGraph

See `examples/multi_agent_voice.py` for a complete example of:
- Setting up a voice-enabled LangGraph workflow
- Configuring node-based filtering
- Handling real-time voice interaction
- Integration with LangGraph RemoteGraph
Note: The example uses "supervisor" as a node name, but this should be replaced with the actual node names from your specific workflow.
### Running the Example

1. Set up a LangGraph server with your workflow
2. Configure the environment:
   ```bash
   # .env file
   OPENAI_API_KEY=your_openai_key
   LANGGRAPH_SERVER_URL=http://localhost:2024
   ```
3. Run the voice agent:
   ```bash
   python examples/multi_agent_voice.py dev
   ```
## 🏗️ Architecture

### How It Works
1. Message Processing: Converts LiveKit chat context to LangGraph format
2. Streaming: Processes real-time message chunks from LangGraph
3. Universal Filtering: Applies comprehensive filtering rules
4. Node Filtering: Optionally filters by LangGraph node metadata
5. Speech Synthesis: Passes clean content to LiveKit TTS
### Filtering Pipeline

```
LangGraph Stream → Universal Filter → Node Filter → LiveKit TTS
                         ↓                 ↓
                    Blocks tools,     Allows only
                      metadata      specified nodes
```
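The pipeline above can be pictured as a generator that consumes `(chunk, metadata)` pairs from the LangGraph stream and yields only the text that should reach TTS. This is a conceptual sketch; the field names mirror LangChain/LangGraph streaming conventions, not the plugin's internals:

```python
def filtered_stream(events, allowed_nodes=None):
    """Yield speakable text from (chunk, metadata) stream events."""
    for chunk, metadata in events:
        # Universal filter: drop tool calls and tool results
        if chunk.get("type") == "tool" or chunk.get("tool_calls"):
            continue
        # Node filter: drop chunks from non-allowed nodes (if configured)
        if allowed_nodes is not None:
            if metadata.get("langgraph_node") not in allowed_nodes:
                continue
        if chunk.get("content"):
            yield chunk["content"]  # clean text for LiveKit TTS
```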
## 🧪 Testing

The package includes comprehensive filtering tests:

```bash
# Run filtering verification
python -m pytest tests/

# Manual testing with your workflow
python examples/multi_agent_voice.py dev
```
## 🤝 Contributing
Contributions are welcome! Please:
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
### Development Setup

```bash
git clone https://github.com/tim-yung/livekit-plugins-langgraph
cd livekit-plugins-langgraph
pip install -e .[dev]
```
## 📋 Requirements
- Python: 3.9+
- LiveKit Agents: 0.8.0+
- LangChain Core: 0.3.0+
- LangGraph: 0.2.0+
- LangGraph SDK: 0.1.0+
## 🐛 Troubleshooting

### Common Issues
**Agent is silent / no speech output**
- Check that your LangGraph workflow is running
- Verify node names match your workflow (use `langgraph_node=None` to test)
- Ensure your OpenAI API key is configured

**Tool calls are being spoken**
- This should not happen with the plugin; please file an issue
### Debug Mode

Enable detailed logging:

```python
import logging

logging.getLogger("livekit.plugins.langgraph").setLevel(logging.DEBUG)
```
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments
- LiveKit for the excellent real-time communication platform
- LangGraph for powerful multi-agent workflows
- The open-source community for inspiration and feedback
## 📞 Support
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: README
Made with ❤️ for the voice AI community