
AWS (Bedrock) LLM integration for Vision Agents


AWS Plugin for Vision Agents

AWS (Bedrock) integration for the Vision Agents framework, with support for standard LLM calls, realtime speech-to-speech with Nova Sonic, text-to-speech via Polly, and automatic session resumption.

Installation

uv add "vision-agents[aws]"

Usage

Standard LLM Usage

The AWS plugin supports a range of Bedrock models, including Qwen and Claude. Claude models also accept vision (image) inputs.

from vision_agents.core import Agent, User
from vision_agents.plugins import aws, getstream, cartesia, deepgram, smart_turn

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Friendly AI"),
    instructions="Be nice to the user",
    llm=aws.LLM(
        model="qwen.qwen3-32b-v1:0",
        region_name="us-east-1"
    ),
    tts=cartesia.TTS(),
    stt=deepgram.STT(),
    turn_detection=smart_turn.TurnDetection(buffer_duration=2.0, confidence_threshold=0.5),
)

For vision-capable models like Claude:

llm = aws.LLM(
    model="anthropic.claude-3-haiku-20240307-v1:0",
    region_name="us-east-1"
)

# Send image with text
response = await llm.converse(
    messages=[{
        "role": "user",
        "content": [
            {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            {"text": "What do you see in this image?"}
        ]
    }]
)
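The `image_bytes` value above is simply the raw contents of the image file. A small helper (illustrative, not part of the plugin) to load them:

```python
from pathlib import Path

def load_image_bytes(path: str) -> bytes:
    """Read raw image bytes for a Bedrock Converse image content block."""
    return Path(path).read_bytes()

# e.g. image_bytes = load_image_bytes("screenshot.png")
```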

Realtime Audio Usage

Amazon Nova 2 Sonic provides realtime speech-to-speech capabilities with automatic reconnection logic. The default model is amazon.nova-2-sonic-v1:0.

from vision_agents.core import Agent, User
from vision_agents.plugins import aws, getstream

agent = Agent(
    edge=getstream.Edge(),
    agent_user=User(name="Story Teller AI"),
    instructions="Tell a story suitable for a 7 year old about a dragon and a princess",
    llm=aws.Realtime(
        model="amazon.nova-2-sonic-v1:0",
        region_name="us-east-1",
        voice_id="matthew"  # See available voices in AWS Nova documentation
    ),
)

The Realtime implementation includes automatic reconnection logic that reconnects after periods of silence or when approaching connection time limits.
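The plugin handles reconnection internally; the general pattern it describes is retry-with-backoff. A generic sketch (not the plugin's actual implementation; `connect` stands in for any coroutine that opens a session):

```python
import asyncio

async def with_reconnect(connect, max_retries: int = 5, base_delay: float = 0.5):
    """Retry an async connect callable with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return await connect()
        except ConnectionError:
            if attempt == max_retries - 1:
                raise
            # Back off 0.5s, 1s, 2s, ... before the next attempt.
            await asyncio.sleep(base_delay * (2 ** attempt))
```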

See example/aws_realtime_nova_example.py for a complete example.

Text-to-Speech (TTS)

AWS Polly TTS is available for converting text to speech:

from vision_agents.plugins import aws

tts = aws.TTS(
    region_name="us-east-1",
    voice_id="Joanna",  # AWS Polly voice ID
    engine="neural",  # 'standard' or 'neural'
    text_type="text",  # 'text' or 'ssml'
    language_code="en-US"
)

# Use in agent
agent = Agent(
    llm=aws.LLM(model="qwen.qwen3-32b-v1:0"),
    tts=tts,
    # ... other components
)

Function Calling

Standard LLM (aws.LLM)

The standard LLM implementation fully supports function calling. Register functions using the @llm.register_function decorator:

from vision_agents.plugins import aws

llm = aws.LLM(
    model="qwen.qwen3-32b-v1:0",
    region_name="us-east-1"
)

@llm.register_function(
    name="get_weather",
    description="Get the current weather for a given city"
)
def get_weather(city: str) -> dict:
    """Get weather information for a city."""
    return {
        "city": city,
        "temperature": 72,
        "condition": "Sunny"
    }
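Under Bedrock's Converse API, registered functions are exposed as tool specifications. A sketch of how a Python signature could map to a Converse `toolSpec` dict (illustrative only; the plugin's actual schema generation may differ):

```python
import inspect

# Simplified mapping from Python annotations to JSON Schema types.
_JSON_TYPES = {str: "string", int: "integer", float: "number", bool: "boolean"}

def to_tool_spec(fn, description: str) -> dict:
    """Build a Bedrock Converse toolSpec dict from a function signature."""
    props = {}
    for name, param in inspect.signature(fn).parameters.items():
        props[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
    return {
        "toolSpec": {
            "name": fn.__name__,
            "description": description,
            "inputSchema": {"json": {
                "type": "object",
                "properties": props,
                "required": list(props),
            }},
        }
    }
```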

Realtime (aws.Realtime)

The Realtime implementation fully supports function calling with AWS Nova 2 Sonic. Register functions using the @llm.register_function decorator:

from vision_agents.plugins import aws

llm = aws.Realtime(
    model="amazon.nova-2-sonic-v1:0",
    region_name="us-east-1",
    voice_id="matthew"
)

@llm.register_function(
    name="get_weather",
    description="Get the current weather for a given city"
)
def get_weather(city: str) -> dict:
    """Get weather information for a city."""
    return {
        "city": city,
        "temperature": 72,
        "condition": "Sunny"
    }

# The function will be automatically called when the model decides to use it

See example/aws_realtime_function_calling_example.py for a complete example.

Configuration

Environment Variables

Create a .env file with the following variables:

STREAM_API_KEY=your_stream_api_key_here
STREAM_API_SECRET=your_stream_api_secret_here

AWS_BEDROCK_API_KEY=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
AWS_REGION=us-east-1

CARTESIA_API_KEY=
DEEPGRAM_API_KEY=

Make sure your .env file is configured before running the examples.
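Most projects load these with a library such as python-dotenv; for reference, a minimal stdlib-only loader (illustrative, ignores quoting and export syntax) looks like:

```python
import os

def load_env(path: str = ".env") -> None:
    """Minimal .env loader: put KEY=value lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault so real environment variables win over .env values
            os.environ.setdefault(key.strip(), value.strip())
```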
