
Google Gemini LLM integration for Vision Agents


Gemini Live Speech-to-Speech Plugin

The Google Gemini Live Speech-to-Speech (STS) plugin for GetStream connects a realtime Gemini Live session to a Stream video call, so your assistant can listen and speak in the same call.

Installation

uv add "vision-agents[gemini]"

Requirements

  • Python: 3.10+
  • Dependencies: getstream[webrtc], getstream-plugins-common, google-genai>=1.51.0
  • API key: GOOGLE_API_KEY or GEMINI_API_KEY set in your environment

Quick Start

Below is a minimal example that attaches the Gemini Live output audio track to a Stream call and streams microphone audio into Gemini. The assistant will speak back into the call, and you can also send text messages to the assistant.

import asyncio
import os

from getstream import Stream
from getstream.plugins.gemini.live import GeminiLive
from getstream.video import rtc
from getstream.video.rtc.track_util import PcmData


async def main():
    # Ensure your key is set: export GOOGLE_API_KEY=... (or GEMINI_API_KEY)
    gemini = GeminiLive(
        api_key=os.getenv("GOOGLE_API_KEY"),
        model="gemini-live-2.5-flash-preview",
    )

    client = Stream.from_env()
    call = client.video.call("default", "your-call-id")

    async with await rtc.join(call, user_id="assistant-bot") as connection:
        # Route Gemini's synthesized speech back into the call
        await connection.add_tracks(audio=gemini.output_track)

        # Forward microphone PCM frames to Gemini in realtime
        @connection.on("audio")
        async def on_audio(pcm: PcmData):
            await gemini.send_audio_pcm(pcm, target_rate=48000)

        # Optionally send a kick-off text message
        await gemini.send_text("Give a short greeting to the participants.")

        # Keep the session running
        while True:
            await asyncio.sleep(1)


if __name__ == "__main__":
    asyncio.run(main())

Optional: forward remote participant video frames to Gemini for multimodal context:

# Forward remote video frames to Gemini (optional)
@connection.on("track_added")
async def _on_track_added(track_id, kind, user):
    if kind == "video" and connection.subscriber_pc:
        track = connection.subscriber_pc.add_track_subscriber(track_id)
        if track:
            await gemini.watch_video_track(track)

For a full runnable example, see examples/gemini_live/main.py.

Gemini Vision (VLM)

Use Gemini 3 vision models with the Agent API (video frames are forwarded automatically when the call has active video).

from vision_agents.core import Agent, Runner, User
from vision_agents.core.agents import AgentLauncher
from vision_agents.plugins import deepgram, elevenlabs, gemini, getstream

async def create_agent(**kwargs) -> Agent:
    vlm = gemini.VLM(model="gemini-3-flash-preview")
    return Agent(
        edge=getstream.Edge(),
        agent_user=User(name="Gemini Vision Agent", id="gemini-vision-agent"),
        instructions="Describe what you see in one sentence.",
        llm=vlm,
        stt=deepgram.STT(),
        tts=elevenlabs.TTS(),
    )

async def join_call(agent: Agent, call_type: str, call_id: str, **kwargs) -> None:
    call = await agent.create_call(call_type, call_id)
    async with agent.join(call):
        await agent.finish()

if __name__ == "__main__":
    Runner(AgentLauncher(create_agent=create_agent, join_call=join_call)).cli()

Key configuration knobs for GeminiVLM: fps, frame_buffer_seconds, thinking_level, media_resolution. For a full example, see plugins/gemini/example/gemini_vlm_agent_example.py.
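The knobs above interact: fps and frame_buffer_seconds together bound how many frames accompany each prompt. A hedged configuration sketch (the keyword names come from the list above; the values, and the assumption that the buffer simply holds fps × frame_buffer_seconds frames, are illustrative rather than documented guarantees):

```python
# Illustrative GeminiVLM settings; values are suggestions, not defaults.
vlm_settings = dict(
    model="gemini-3-flash-preview",
    fps=2,                     # sample two frames per second
    frame_buffer_seconds=5,    # keep roughly 5 s of recent frames
    thinking_level="high",     # deeper reasoning (Gemini 3)
    media_resolution="high",   # denser frame/document detail
)
# Under this assumption the buffer holds at most fps * frame_buffer_seconds frames:
print(vlm_settings["fps"] * vlm_settings["frame_buffer_seconds"])  # 10
```

Raising fps or frame_buffer_seconds improves temporal coverage at the cost of more tokens per request.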

Features

  • Bidirectional audio: Streams microphone PCM to Gemini, and plays Gemini speech into the call using output_track.
  • Video frame forwarding: Sends remote participant video frames to Gemini Live for multimodal understanding. Use start_video_sender with a remote MediaStreamTrack.
  • Text messages: Use send_text to add text turns directly to the conversation.
  • Barge-in (interruptions): When the user starts speaking, current playback is interrupted so Gemini can focus on the new input. Playback automatically resumes after a brief period of silence.
  • Auto resampling: send_audio_pcm will resample input frames to the target rate when needed.
  • Events: Subscribe to "audio" for synthesized audio chunks and "text" for assistant text.
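The auto-resampling step can be pictured with a minimal linear-interpolation resampler. This is a sketch of the idea only; the plugin's actual conversion happens inside send_audio_pcm and may use a higher-quality filter:

```python
def resample_linear(samples: list[float], src_rate: int, dst_rate: int) -> list[float]:
    """Naive linear-interpolation resampler (illustration only)."""
    if src_rate == dst_rate or not samples:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate          # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        # Blend the two nearest source samples
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Upsampling 16 kHz -> 48 kHz triples the frame length
frame = [0.0, 1.0, 0.0, -1.0]
print(len(resample_linear(frame, 16_000, 48_000)))  # 12
```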

API Overview

  • GeminiLive(api_key: str | None = None, model: str = "gemini-live-2.5-flash-preview", config: LiveConnectConfigDict | None = None): Create a new Gemini Live session. If api_key is not provided, the plugin reads GOOGLE_API_KEY or GEMINI_API_KEY from the environment.
  • GeminiVLM(model: str = "gemini-3-flash-preview", fps: int = 1, frame_buffer_seconds: int = 10, ...): Vision-language model that buffers video frames and sends them with prompts.
  • output_track: An AudioStreamTrack you can publish in your call via add_tracks(audio=...).
  • await send_text(text: str): Send a user text message to the current turn.
  • await send_audio_pcm(pcm: PcmData, target_rate: int = 48000): Stream PCM frames to Gemini. Frames are converted to the required format and resampled if necessary.
  • await wait_until_ready(timeout: float | None = None) -> bool: Wait until the underlying live session is connected.
  • await interrupt_playback() / resume_playback(): Manually stop or resume synthesized audio playback. Useful if you want to manage barge-in behavior yourself.
  • await start_video_sender(track: MediaStreamTrack, fps: int = 1): Start forwarding video frames from a remote MediaStreamTrack to Gemini Live at the given frame rate.
  • await stop_video_sender(): Stop the background video sender task, if running.
  • await close(): Close the session and background tasks.

Environment Variables

  • GOOGLE_API_KEY / GEMINI_API_KEY: Gemini API key. One must be set.
  • GEMINI_LIVE_MODEL: Optional override for the model name if you need a different variant.

Notes on Interruptions

  • How it works: The plugin detects user speech activity in incoming PCM and interrupts any ongoing playback. After a short period of silence, playback is enabled again so the assistant can speak.
  • Why it matters: This enables natural barge-in experiences, where users can cut off the assistant mid-sentence and ask follow-up questions.
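The policy above can be sketched as a tiny energy-based state machine: playback pauses while speech energy is detected and resumes after a run of quiet frames. The threshold and silence window here are illustrative, not the plugin's actual values:

```python
def playback_states(frame_energies, speech_threshold=0.1, silence_frames=3):
    """Sketch of the barge-in policy: pause playback while the user is
    speaking, resume after `silence_frames` consecutive quiet frames."""
    states, quiet = [], silence_frames  # start with playback enabled
    for energy in frame_energies:
        if energy >= speech_threshold:
            quiet = 0                   # user speech: interrupt playback
        else:
            quiet = min(quiet + 1, silence_frames)
        states.append("paused" if quiet < silence_frames else "playing")
    return states


print(playback_states([0.0, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0]))
# -> ['playing', 'paused', 'paused', 'paused', 'paused', 'playing', 'playing']
```

If you prefer to run this policy yourself, disable the built-in behavior by driving interrupt_playback() and resume_playback() from your own detector.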

Troubleshooting

  • No audio playback: Ensure you publish output_track to your call and the call is subscribed to the assistant's audio.
  • No responses: Verify GOOGLE_API_KEY/GEMINI_API_KEY is set and has access to the chosen model. Try a different model via model=.
  • Sample-rate issues: Use send_audio_pcm(..., target_rate=48000) to normalize input frames.

Migration from Gemini 2.5

When migrating to Gemini 3:

  • Thinking: If you were using complex prompt engineering (like Chain-of-thought) with Gemini 2.5, try Gemini 3 with thinking_level="high" and simplified prompts.
  • Temperature: If your code explicitly sets temperature to low values, consider removing it and using the Gemini 3 default (1.0) to avoid potential looping issues.
  • PDF & Document Understanding: Default OCR resolution for PDFs has changed. Test with media_resolution="high" if you need dense document parsing.
  • Token Consumption: Gemini 3 defaults may increase token usage for PDFs but decrease for video. If requests exceed context limits, explicitly reduce media_resolution.
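The bullets above boil down to a small set of overrides. A hedged sketch, with keyword names taken from this section and values that are suggestions rather than requirements:

```python
# Gemini 3 migration overrides (illustrative values)
gemini3_overrides = {
    "thinking_level": "high",    # replaces heavy chain-of-thought prompting
    "media_resolution": "high",  # only if dense PDF parsing regresses
}
# Leave temperature at the Gemini 3 default (1.0) rather than pinning it low,
# which this section notes can cause looping:
print("temperature" in gemini3_overrides)  # False
```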
