AWS Plugin for Vision Agents

AWS Bedrock integration for the Vision Agents framework, with support for standard LLMs, realtime speech-to-speech via Nova Sonic, and text-to-speech, including automatic session resumption.

Installation

uv add "vision-agents[aws]"

Usage

Standard LLM Usage

The AWS plugin supports a range of Bedrock models, including Qwen and Claude. Claude models also accept vision/image inputs.

from vision_agents.core import Agent, User
from vision_agents.plugins import aws, getstream, cartesia, deepgram, smart_turn

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Friendly AI"),
    instructions="Be nice to the user",
    llm=aws.LLM(
        model="qwen.qwen3-32b-v1:0",
        region_name="us-east-1"
    ),
    tts=cartesia.TTS(),
    stt=deepgram.STT(),
    turn_detection=smart_turn.TurnDetection(buffer_duration=2.0, confidence_threshold=0.5),
)

For vision-capable models like Claude:

llm = aws.LLM(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    region_name="us-east-1"
)

# Send image with text
response = await llm.converse(
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "What do you see in this image?"}
        ]
    }]
)
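The `image_bytes` value above is the raw binary content of an image file. One way to produce it, using only the Python standard library (this is independent of the plugin API):

```python
from pathlib import Path

# For demonstration, create a tiny placeholder file and read it back as raw
# bytes. In real use you would read an actual image, e.g.
# Path("photo.png").read_bytes().
Path("demo.png").write_bytes(b"\x89PNG\r\n\x1a\n")  # PNG magic bytes only
image_bytes = Path("demo.png").read_bytes()

# Bedrock's Converse API expects the "format" field ("png", "jpeg", "gif",
# or "webp") to match the actual encoding of these bytes.
assert image_bytes.startswith(b"\x89PNG")
```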

Realtime Audio Usage

AWS Nova 2 Sonic provides realtime speech-to-speech capabilities, and the plugin adds automatic reconnection logic on top. The default model is amazon.nova-2-sonic-v1:0.

from vision_agents.core import Agent, User
from vision_agents.plugins import aws, getstream

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Story Teller AI"),
    instructions="Tell a story suitable for a 7 year old about a dragon and a princess",
    llm=aws.Realtime(
        model="amazon.nova-2-sonic-v1:0",
        region_name="us-east-1",
        voice_id="matthew"  # See available voices in AWS Nova documentation
    ),
)

The Realtime implementation includes automatic reconnection logic that reconnects after periods of silence or when approaching connection time limits.

See example/aws_realtime_nova_example.py for a complete example.

Text-to-Speech (TTS)

AWS Polly TTS is available for converting text to speech:

from vision_agents.plugins import aws

tts = aws.TTS(
    region_name="us-east-1",
    voice_id="Joanna",  # AWS Polly voice ID
    engine="neural",  # 'standard' or 'neural'
    text_type="text",  # 'text' or 'ssml'
    language_code="en-US"
)

# Use in agent
agent = Agent(
    llm=aws.LLM(model="qwen.qwen3-32b-v1:0"),
    tts=tts,
    # ... other components
)
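Since `text_type` accepts `'ssml'`, input can be marked up with Polly-supported SSML tags. A minimal sketch of such a payload (plain string construction, independent of the plugin):

```python
# Minimal SSML payload for use with text_type="ssml". <speak>, <break>,
# and <prosody> are standard SSML tags supported by Amazon Polly.
ssml = (
    "<speak>"
    "Hello!<break time='300ms'/>"
    "<prosody rate='slow'>This part is spoken slowly.</prosody>"
    "</speak>"
)
```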

Function Calling

Standard LLM (aws.LLM)

The standard LLM implementation fully supports function calling. Register functions using the @llm.register_function decorator:

from vision_agents.plugins import aws

llm = aws.LLM(
    model="qwen.qwen3-32b-v1:0",
    region_name="us-east-1"
)

@llm.register_function(
    name="get_weather",
    description="Get the current weather for a given city"
)
def get_weather(city: str) -> dict:
    """Get weather information for a city."""
    return {
        "city": city,
        "temperature": 72,
        "condition": "Sunny"
    }
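Because the handler is a plain Python function, it can be unit-tested directly without going through the model. Shown here as a standalone stub mirroring the example above:

```python
def get_weather(city: str) -> dict:
    """Stub weather lookup matching the registered handler above."""
    return {
        "city": city,
        "temperature": 72,
        "condition": "Sunny",
    }

# Invoke the handler directly, exactly as the tool-call path would.
result = get_weather("Boston")
assert result == {"city": "Boston", "temperature": 72, "condition": "Sunny"}
```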

Realtime (aws.Realtime)

The Realtime implementation fully supports function calling with AWS Nova 2 Sonic. Register functions using the @llm.register_function decorator:

from vision_agents.plugins import aws

llm = aws.Realtime(
    model="amazon.nova-2-sonic-v1:0",
    region_name="us-east-1",
    voice_id="matthew"
)

@llm.register_function(
    name="get_weather",
    description="Get the current weather for a given city"
)
def get_weather(city: str) -> dict:
    """Get weather information for a city."""
    return {
        "city": city,
        "temperature": 72,
        "condition": "Sunny"
    }

# The function will be automatically called when the model decides to use it

See example/aws_realtime_function_calling_example.py for a complete example.

Configuration

Environment Variables

Create a .env file with the following variables:

STREAM_API_KEY=your_stream_api_key_here
STREAM_API_SECRET=your_stream_api_secret_here

AWS_BEDROCK_API_KEY=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=us-east-1

CARTESIA_API_KEY=
DEEPGRAM_API_KEY=

Make sure your .env file is configured before running the examples.
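A small stdlib-only preflight check can surface missing configuration before the agent starts. This is a sketch: the variable names match the `.env` template above, and the list should be trimmed to the providers you actually use:

```python
import os

# Variables required by the examples above; adjust for your setup.
REQUIRED = [
    "STREAM_API_KEY",
    "STREAM_API_SECRET",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "AWS_REGION",
]

missing = [name for name in REQUIRED if not os.environ.get(name)]
if missing:
    print("Missing environment variables:", ", ".join(missing))
else:
    print("Environment configuration looks complete.")
```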
