Project description

Strands Agents

A model-driven approach to building AI agents in just a few lines of code.

Documentation | Samples | Python SDK | Tools | Agent Builder | MCP Server

Strands Agents is a simple yet powerful SDK that takes a model-driven approach to building and running AI agents. From simple conversational assistants to complex autonomous workflows, from local development to production deployment, Strands Agents scales with your needs.

Feature Overview

  • Lightweight & Flexible: Simple agent loop that just works and is fully customizable
  • Model Agnostic: Support for Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama, Ollama, OpenAI, Writer, and custom providers
  • Advanced Capabilities: Multi-agent systems, autonomous agents, and streaming support (a minimal multi-agent sketch follows this list)
  • Built-in MCP: Native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools
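
As a small taste of the multi-agent support, here is a minimal sketch of the agents-as-tools pattern, built only from the Agent and @tool APIs shown later in this README; the tool name, system prompt, and queries are illustrative assumptions, not SDK identifiers:

from strands import Agent, tool

# Agents-as-tools sketch: wrap a specialist agent in a @tool function so an
# orchestrating agent can delegate sub-tasks to it. The tool name, system
# prompt, and queries below are illustrative, not part of the SDK.
@tool
def research_assistant(query: str) -> str:
    """Answer research questions by delegating to a specialist agent."""
    specialist = Agent(system_prompt="You are a careful research assistant.")
    return str(specialist(query))

orchestrator = Agent(tools=[research_assistant])
orchestrator("Compare single-agent and multi-agent designs")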

Quick Start

# Install Strands Agents
pip install strands-agents strands-agents-tools

from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")

Note: For the default Amazon Bedrock model provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region. See the Quickstart Guide for details on configuring other model providers.
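
For example, credentials can come from the standard AWS environment variables; since the default Bedrock provider uses the regular AWS SDK credential chain, shared config files, SSO profiles, and IAM roles work as well. A minimal sketch with placeholder values:

import os

# One way to supply AWS credentials for the default Bedrock provider:
# standard AWS environment variables, read by the underlying AWS SDK.
# Replace the placeholders; any other standard credential mechanism
# (shared config file, SSO, IAM role) works too.
os.environ["AWS_ACCESS_KEY_ID"] = "your_access_key_id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your_secret_access_key"
os.environ["AWS_DEFAULT_REGION"] = "us-west-2"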

Installation

Ensure you have Python 3.10+ installed, then:

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate

# Install Strands and tools
pip install strands-agents strands-agents-tools
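
To confirm the install, you can read the installed version straight from the package metadata using only the standard library:

from importlib.metadata import version

# Sanity-check the installation by printing the installed package version
print(version("strands-agents"))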

Features at a Glance

Python-Based Tools

Easily build tools using Python decorators:

from strands import Agent, tool

@tool
def word_count(text: str) -> int:
    """Count words in text.

    This docstring is used by the LLM to understand the tool's purpose.
    """
    return len(text.split())

agent = Agent(tools=[word_count])
response = agent("How many words are in this sentence?")

Hot Reloading from Directory: Enable automatic tool loading and reloading from the ./tools/ directory:

from strands import Agent

# Agent will watch ./tools/ directory for changes
agent = Agent(load_tools_from_directory=True)
response = agent("Use any tools you find in the tools directory")

MCP Support

Seamlessly integrate Model Context Protocol (MCP) servers:

from strands import Agent
from strands.tools.mcp import MCPClient
from mcp import stdio_client, StdioServerParameters

aws_docs_client = MCPClient(
    lambda: stdio_client(StdioServerParameters(command="uvx", args=["awslabs.aws-documentation-mcp-server@latest"]))
)

with aws_docs_client:
    agent = Agent(tools=aws_docs_client.list_tools_sync())
    response = agent("Tell me about Amazon Bedrock and how to use it with Python")

Multiple Model Providers

Support for various model providers:

from strands import Agent
from strands.models import BedrockModel
from strands.models.ollama import OllamaModel
from strands.models.llamaapi import LlamaAPIModel
from strands.models.gemini import GeminiModel
from strands.models.llamacpp import LlamaCppModel

# Bedrock
bedrock_model = BedrockModel(
    model_id="us.amazon.nova-pro-v1:0",
    temperature=0.3,
    streaming=True,  # Enable/disable streaming
)
agent = Agent(model=bedrock_model)
agent("Tell me about Agentic AI")

# Google Gemini
gemini_model = GeminiModel(
    client_args={
        "api_key": "your_gemini_api_key",
    },
    model_id="gemini-2.5-flash",
    params={"temperature": 0.7}
)
agent = Agent(model=gemini_model)
agent("Tell me about Agentic AI")

# Ollama
ollama_model = OllamaModel(
    host="http://localhost:11434",
    model_id="llama3"
)
agent = Agent(model=ollama_model)
agent("Tell me about Agentic AI")

# Llama API
llama_model = LlamaAPIModel(
    model_id="Llama-4-Maverick-17B-128E-Instruct-FP8",
)
agent = Agent(model=llama_model)
response = agent("Tell me about Agentic AI")

Built-in providers include Amazon Bedrock, Anthropic, Gemini, LiteLLM, Llama API, llama.cpp, Ollama, OpenAI, and Writer.

Custom providers can be implemented by following the Custom Providers guide.

Example tools

Strands offers an optional strands-agents-tools package with pre-built tools for quick experimentation:

from strands import Agent
from strands_tools import calculator

agent = Agent(tools=[calculator])
agent("What is the square root of 1764")

It's also available on GitHub via strands-agents/tools.

Bidirectional Streaming

⚠️ Experimental Feature: Bidirectional streaming is currently in experimental status. APIs may change in future releases as we refine the feature based on user feedback and evolving model capabilities.

Build real-time voice and audio conversations with persistent streaming connections. Unlike traditional request-response patterns, bidirectional streaming maintains long-running conversations where users can interrupt, provide continuous input, and receive real-time audio responses. Get started with your first BidiAgent by following the Quickstart guide.

Supported Model Providers:

  • Amazon Nova Sonic (amazon.nova-sonic-v1:0)
  • Google Gemini Live (gemini-2.5-flash-native-audio-preview-09-2025)
  • OpenAI Realtime API (gpt-realtime)

Quick Example:

import asyncio
from strands.experimental.bidi import BidiAgent
from strands.experimental.bidi.models import BidiNovaSonicModel
from strands.experimental.bidi.io import BidiAudioIO, BidiTextIO
from strands.experimental.bidi.tools import stop_conversation
from strands_tools import calculator

async def main():
    # Create bidirectional agent with audio model
    model = BidiNovaSonicModel()
    agent = BidiAgent(model=model, tools=[calculator, stop_conversation])

    # Setup audio and text I/O
    audio_io = BidiAudioIO()
    text_io = BidiTextIO()

    # Run with real-time audio streaming
    # Say "stop conversation" to gracefully end the conversation
    await agent.run(
        inputs=[audio_io.input()],
        outputs=[audio_io.output(), text_io.output()]
    )

if __name__ == "__main__":
    asyncio.run(main())

Configuration Options:

# Configure audio settings
model = BidiNovaSonicModel(
    provider_config={
        "audio": {
            "input_rate": 16000,
            "output_rate": 16000,
            "voice": "matthew"
        },
        "inference": {
            "max_tokens": 2048,
            "temperature": 0.7
        }
    }
)

# Configure I/O devices
audio_io = BidiAudioIO(
    input_device_index=0,  # Specific microphone
    output_device_index=1,  # Specific speaker
    input_buffer_size=10,
    output_buffer_size=10
)

Documentation

For detailed guidance & examples, explore our documentation.

Contributing ❤️

We welcome contributions! See our Contributing Guide for details on:

  • Reporting bugs & requesting features
  • Development setup
  • Contributing via Pull Requests
  • Code of Conduct
  • Reporting security issues

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.

Security

See CONTRIBUTING for more information.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

strands_agents-1.22.0.tar.gz (684.9 kB)

Built Distribution

If you're not sure about the file name format, learn more about wheel file names.

strands_agents-1.22.0-py3-none-any.whl (328.6 kB)

File details

Details for the file strands_agents-1.22.0.tar.gz.

File metadata

  • Download URL: strands_agents-1.22.0.tar.gz
  • Size: 684.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for strands_agents-1.22.0.tar.gz

  • SHA256: 73d1eda459236d5e5c753cb12ea8dc49d7ac18bac70245ef905e86b60b452ce0
  • MD5: 34fe9110068d8bf65f24c324e2cc2205
  • BLAKE2b-256: d072da0344b0a26b45c33c38078cf14fd5142bdc6ae67bb43b8c37fe57937c22

See more details on using hashes here.
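
As an illustration, the SHA256 digest above can be checked locally once the sdist has been downloaded from PyPI; a minimal standard-library sketch (assumes the file sits in the current directory):

import hashlib

# Compare the downloaded sdist against the SHA256 digest listed above
expected = "73d1eda459236d5e5c753cb12ea8dc49d7ac18bac70245ef905e86b60b452ce0"
with open("strands_agents-1.22.0.tar.gz", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()
print("OK" if actual == expected else "MISMATCH")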

Provenance

The following attestation bundles were made for strands_agents-1.22.0.tar.gz:

Publisher: pypi-publish-on-release.yml on strands-agents/sdk-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.

File details

Details for the file strands_agents-1.22.0-py3-none-any.whl.

File metadata

  • Download URL: strands_agents-1.22.0-py3-none-any.whl
  • Size: 328.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? Yes
  • Uploaded via: twine/6.1.0 CPython/3.13.7

File hashes

Hashes for strands_agents-1.22.0-py3-none-any.whl

  • SHA256: 6b2737fff9bf5811a0d1c01f05d2351394879f874eaa23873bef70b17e9b4d9e
  • MD5: 2638eb27b73fb8deee8cdbf28b6b539b
  • BLAKE2b-256: 50ed9c287441432692d79a8b98b26e1017787596e7981d330a341153513c0a6c

See more details on using hashes here.

Provenance

The following attestation bundles were made for strands_agents-1.22.0-py3-none-any.whl:

Publisher: pypi-publish-on-release.yml on strands-agents/sdk-python

Attestations: Values shown here reflect the state when the release was signed and may no longer be current.
