
An adapter to integrate OpenAI Agents as LLM providers for LiveKit.


LiveKit OpenAI Agents Adapter

This library provides an adapter to integrate agents built with the openai-agents library (specifically, its agents module) as LLM providers within the LiveKit Agents framework.

It allows you to plug in your openai-agents based orchestrators, leveraging their capabilities like conversation history management, tool usage, and handoffs, while still utilizing the real-time audio and agent lifecycle management features of LiveKit Agents.

Features

  • Integrates agents built with the openai-agents library into LiveKit.
  • Facilitates the use of openai-agents features (like handoffs, structured output if designed in your agent) within LiveKit.
  • Configurable streaming: Choose between the original non-streaming approach (default) or real-time streaming responses.
  • Includes a utility to extract the last user message from the chat context for the agent.
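The last bullet describes a utility that pulls the most recent user message out of the chat context before handing it to the agent. The library's actual helper is not shown here, but the idea can be sketched in plain Python over a hypothetical chat history of `{"role", "content"}` dicts (the real LiveKit `ChatContext` type differs):

```python
def extract_last_user_message(history: list[dict]) -> str:
    """Return the content of the most recent user turn, or "" if none exists."""
    for msg in reversed(history):
        if msg.get("role") == "user":
            return msg.get("content", "")
    return ""

history = [
    {"role": "system", "content": "You are helpful."},
    {"role": "user", "content": "What is 2+2?"},
    {"role": "assistant", "content": "4."},
    {"role": "user", "content": "And 3+3?"},
]
print(extract_last_user_message(history))  # And 3+3?
```

This is only an illustration of the extraction semantics; consult the package source for the real helper's name and signature.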

Installation

pip install livekit-openai-agents

Alternatively, if you have this project cloned:

pip install .

Dependencies

Based on pyproject.toml:

  • livekit-agents[openai,elevenlabs,silero,turn-detector] == 1.0.17
  • livekit-plugins-noise-cancellation~=0.2
  • openai-agents >= 0.0.14
  • pyee >= 9.0.0

Please refer to your pyproject.toml for the most up-to-date list of dependencies.

Usage

1. Define Your openai-agents Agent

First, create your agent(s) using the agents library. For example:

# your_openai_agents.py
from agents import Agent
from pydantic import BaseModel

class MathResponse(BaseModel):
    explanation: str
    answer: float

math_tutor_agent = Agent(
    name="MathTutor",
    handoff_description="A specialized agent that helps with math problems and provides explanations.",
    instructions="You are a math tutor. Explain your reasoning step-by-step and provide the final answer.",
    output_type=MathResponse,  # Example of structured output
    # You can add handoffs, tools, etc. as per the openai-agents documentation
)

# You might have other agents or a more complex setup, e.g., a triage agent
# from agents import Runner
# result = await Runner.run(math_tutor_agent, "What is 2+2?")
# print(result.final_output) # Output: MathResponse(explanation='...', answer=4.0)
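Because `output_type=MathResponse` is set, `result.final_output` arrives as a typed object rather than raw text. To make that shape concrete without calling the SDK, here is a plain-dataclass stand-in (not the real SDK types) showing how a structured payload maps onto the model's fields:

```python
from dataclasses import dataclass

@dataclass
class MathResponse:
    # Mirrors the pydantic model above, as a dependency-free stand-in
    explanation: str
    answer: float

def parse_final_output(payload: dict) -> MathResponse:
    """Coerce a raw dict (e.g. decoded model JSON) into the typed result."""
    return MathResponse(
        explanation=str(payload["explanation"]),
        answer=float(payload["answer"]),
    )

result = parse_final_output({"explanation": "2 + 2 adds two pairs.", "answer": 4})
print(result.answer)  # 4.0
```

In the real flow, the openai-agents Runner performs this validation for you via the pydantic model.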

2. Use the Adapter in Your LiveKit Agent

Import OpenAIAgentAdapter from this library and your openai-agents agent. Then, initialize the adapter and use it in your LiveKit AgentSession.

# your_livekit_app.py
import asyncio
from dotenv import load_dotenv, find_dotenv

from livekit import agents
from livekit.agents import Agent as LiveKitAgent, AgentSession, RoomInputOptions # Renamed to avoid clash
from livekit.plugins import openai, silero # or your preferred STT/TTS/VAD

# Import the adapter and your openai-agent
from livekit_openai_agents.adapter import OpenAIAgentAdapter # Assuming 'livekit_openai_agents' is the package name
from your_openai_agents import math_tutor_agent # The agent defined in step 1

# Load .env for API keys if necessary
# load_dotenv(find_dotenv())

class MyLiveKitAssistant(LiveKitAgent): # Renamed to avoid clash with openai-agents' Agent
    def __init__(self) -> None:
        super().__init__(instructions="You are a helpful voice AI assistant that can call specialized tutors.")

async def entrypoint(ctx: agents.JobContext):
    await ctx.connect()

    # 1. Initialize your openai-agent (it's already defined, just use the instance)
    # math_tutor_agent is already an instance

    # 2. Initialize the adapter with your openai-agent instance
    # streaming=False (default): Uses the original non-streaming approach
    # streaming=True: Enables real-time streaming responses
    openai_agent_llm_adapter = OpenAIAgentAdapter(
        orchestrator=math_tutor_agent,
        streaming=False  # Default behavior - uses original non-streaming approach
    )

    # To enable streaming, you can set streaming=True:
    # openai_agent_llm_adapter = OpenAIAgentAdapter(
    #     orchestrator=math_tutor_agent,
    #     streaming=True  # Enable streaming for real-time responses
    # )

    # You can also change streaming mode dynamically:
    # openai_agent_llm_adapter.set_streaming(True)   # Enable streaming
    # openai_agent_llm_adapter.set_streaming(False)  # Disable streaming (back to original approach)
    
    # Check current streaming status:
    # is_streaming = openai_agent_llm_adapter.is_streaming_enabled()

    # 3. Set up the AgentSession
    session = AgentSession(
        stt=openai.STT(), # Replace with your STT
        llm=openai_agent_llm_adapter, # Use the adapter here
        tts=openai.TTS(), # Replace with your TTS
        vad=silero.VAD.load(),
    )

    await session.start(
        room=ctx.room,
        agent=MyLiveKitAssistant(),
        # ... other options
    )

    # Example: Generate an initial greeting
    await session.generate_reply(
        instructions="Greet the user and ask how you can help."
    )

    print("Agent is ready and listening.")

if __name__ == "__main__":
    # Load .env for API keys (e.g., OPENAI_API_KEY, LIVEKIT_URL, LIVEKIT_API_KEY)
    dotenv_path = find_dotenv()
    if dotenv_path:
        print(f"Loading .env file from: {dotenv_path}")
        load_dotenv(dotenv_path)
    else:
        print("No .env file found. Ensure API keys and LiveKit connection info are set as environment variables.")
    
    agents.cli.run_app(agents.WorkerOptions(entrypoint_fnc=entrypoint))

3. Streaming Configuration

The OpenAIAgentAdapter supports both the original non-streaming approach and an optional streaming mode:

Non-Streaming Mode (Default)

# Use original non-streaming approach (default behavior)
adapter = OpenAIAgentAdapter(orchestrator=your_agent, streaming=False)
  • Uses the original implementation approach
  • Responses are delivered as complete messages
  • Maintains backward compatibility
  • Uses Runner.run() internally

Streaming Mode (Optional)

# Enable streaming responses
adapter = OpenAIAgentAdapter(orchestrator=your_agent, streaming=True)
  • Responses are delivered in real-time as they are generated
  • Users see text appearing progressively
  • Better user experience for longer responses
  • Uses Runner.run_streamed() internally

Dynamic Streaming Control

adapter = OpenAIAgentAdapter(orchestrator=your_agent)  # Default: streaming=False

# Change streaming mode at runtime
adapter.set_streaming(True)   # Enable streaming
adapter.set_streaming(False)  # Disable streaming (back to original approach)

# Check current streaming status
if adapter.is_streaming_enabled():
    print("Streaming is enabled")
else:
    print("Using original non-streaming approach")

See the examples/ directory (e.g., examples/tutors/adapter_example.py) for a more detailed, runnable example.

API Reference

OpenAIAgentAdapter

Constructor Parameters

  • orchestrator: The OpenAI Agents Agent instance to adapt
  • guardrail_handler: Optional function to handle guardrail trips
  • context: Optional context to provide to the agent
  • streaming: Whether to enable streaming responses (default: False)

Methods

  • set_streaming(streaming: bool): Sets the streaming mode for future chat calls
  • is_streaming_enabled() -> bool: Returns whether streaming is currently enabled
  • chat(...): Creates a chat stream (uses streaming setting)
  • generate(...): Generates a response string (always non-streaming)

Development

To set up for development:

git clone https://github.com/anilaltuner/livekit-openai-agents.git
cd livekit-openai-agents
pip install -e .

License

This project is licensed under the MIT License - see the LICENSE file for details.
