Sample Research Agent powered by Strands Agents SDK

Strands Research Agent

An autonomous research agent that demonstrates advanced Strands Agents patterns: hot-reloading tools, multi-agent coordination, and persistent learning systems for enterprise research automation.

Feature Overview

  • Hot-Reloading Development: Create and modify tools without restarting - save .py files in ./tools/ for instant availability
  • Multi-Agent Orchestration: Background tasks, parallel processing, and model coordination across different providers
  • Persistent Learning: Cross-session knowledge accumulation via AWS Bedrock Knowledge Base and SQLite memory
  • Self-Modifying Systems: Dynamic behavior adaptation through the system_prompt tool and continuous improvement loops

graph LR
    subgraph TRADITIONAL["Traditional Development (Minutes/Hours)"]
        A["Modify Tool"] --> B["Restart Agent"]
        B --> C["Test Change"]
        C --> D["Debug Issues"]
        D --> A
    end

    subgraph HOTRELOAD["Hot-Reload Development (Seconds)"]
        E["Save .py to ./tools/"] --> F["Instant Loading"]
        F --> G["Agent Uses Tool"]
        G --> H["Refine & Test"]
        H --> E
    end

    TRADITIONAL -.->|"Strands Research Agent"| HOTRELOAD
    
    style A fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
    style B fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
    style C fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
    style D fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
    
    style E fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
    style F fill:#81c784,stroke:#388e3c,stroke-width:3px,color:#000
    style G fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
    style H fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
    
    style TRADITIONAL fill:#ffebee,stroke:#d32f2f,stroke-width:2px
    style HOTRELOAD fill:#e8f5e8,stroke:#388e3c,stroke-width:2px

Quick Start

# Install the research agent
pip install "strands-research-agent[all]"

# Configure your model (Bedrock recommended)
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"

# Start interactive research
research-agent

# Inside the session, the agent creates its own tools and uses them immediately:
agent("Create tools for competitive intelligence analysis and start researching AI agent frameworks")

# What happens behind the scenes:
# 1. Agent recognizes it needs specialized capabilities
# 2. Creates competitive_intel.py in ./tools/ (hot-loaded instantly)
# 3. Tool becomes available as agent.tool.competitive_intel()
# 4. Agent begins research using its newly created tool
# 5. Stores findings in knowledge base for future sessions
# 
# This is tool creation at the speed of thought - no restart, no manual coding

Installation

Ensure you have Python 3.10+ installed, then:

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows use: .venv\Scripts\activate

# Install from PyPI
pip install "strands-research-agent[all]"

# Or clone for development
git clone https://github.com/strands-agents/samples.git
cd samples/02-samples/14-research-agent
pip install -e ".[dev]"

Configuration:

# Core configuration
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"

# Optional - for persistent learning
export STRANDS_KNOWLEDGE_BASE_ID="your_kb_id"
export AWS_REGION="us-west-2"

Recommended Settings for Optimal Performance:

# Maximum performance settings for production research workloads
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export STRANDS_ADDITIONAL_REQUEST_FIELDS='{"anthropic_beta": ["interleaved-thinking-2025-05-14", "context-1m-2025-08-07"], "thinking": {"type": "enabled", "budget_tokens": 2048}}'
export STRANDS_MAX_TOKENS="65536"

What these settings provide:

  • Enhanced Model: Claude 4 Sonnet with latest capabilities
  • Interleaved Thinking: Real-time reasoning during responses for better analysis
  • Extended Context: 1M token context window for complex research sessions
  • Thinking Budget: 2048 tokens for advanced reasoning cycles
  • Maximum Output: 65536 tokens for comprehensive research reports
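
For reference, a client can recover these settings at runtime by reading the environment and parsing the JSON payload. This is an illustrative sketch of the parsing only; the variable names match the exports above, and the defaults mirror them.

```python
import json
import os

# Defaults mirror the recommended exports above (illustrative sketch).
model_id = os.environ.get(
    "STRANDS_MODEL_ID", "us.anthropic.claude-sonnet-4-20250514-v1:0"
)
max_tokens = int(os.environ.get("STRANDS_MAX_TOKENS", "65536"))

# The additional request fields are a single JSON object passed through
# to the model provider; pull out the thinking budget as an example.
extra = json.loads(os.environ.get("STRANDS_ADDITIONAL_REQUEST_FIELDS", "{}"))
thinking_budget = extra.get("thinking", {}).get("budget_tokens", 0)
```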

Note: For the default Amazon Bedrock provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region.

Features at a Glance

Hot-Reloading Tool Development

Automatically create and load tools from the ./tools/ directory:

# ./tools/competitive_intel.py
from strands import tool

@tool
def competitive_intel(company: str, domain: str = "ai-agents") -> dict:
    """Gather competitive intelligence on companies in specific domains.
    
    This docstring is used by the LLM to understand the tool's purpose.
    """
    # Tool implementation here - the agent wrote this code itself
    return {"status": "success", "analysis": f"Intelligence for {company} in {domain}"}

# The breakthrough: Save this file and it's instantly available
# No imports, no registration, no restart needed
# Just save → agent.tool.competitive_intel() exists immediately
#
# Traditional AI: Fixed capabilities, human-coded tools
# Research Agent: Self-expanding capabilities, AI-created tools
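
The mechanism behind hot-reloading can be sketched with the standard library: scan the ./tools/ directory and import each .py file with importlib. This is a minimal sketch, not the SDK's actual loader (which also watches for file changes); `load_tools_from` and its return shape are hypothetical.

```python
import importlib.util
import sys
from pathlib import Path

def load_tools_from(directory: str) -> dict:
    """Hypothetical sketch: import every .py file in `directory` and
    collect its public functions as callable tools. The real SDK loader
    is more sophisticated (file watching, re-import on change)."""
    tools = {}
    for path in Path(directory).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        sys.modules[path.stem] = module
        spec.loader.exec_module(module)  # (re)execute the file on each scan
        for name, obj in vars(module).items():
            if callable(obj) and not name.startswith("_"):
                tools[name] = obj
    return tools
```

Rerunning the scan after a file is saved is what makes a newly written tool available without a restart.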

Multi-Agent Task Orchestration

Create background tasks with different models and specialized capabilities:

graph TD
    A["Research Query"] --> B{"Complexity Assessment"}

    B -->|"Simple"| C["Direct Processing"]
    B -->|"Complex"| D["Multi-Agent Coordination"]

    subgraph COORDINATION["Coordination Strategies"]
        D --> E["tasks: Background Processing"]
        D --> F["use_agent: Model Switching"]
        D --> G["swarm: Parallel Teams"]
        D --> H["think: Multi-Cycle Reasoning"]
    end

    subgraph SPECIALISTS["Specialist Agents"]
        E --> I["Market Research Agent"]
        F --> J["Technical Analysis Agent"]
        G --> K["Specialist Team A"]
        G --> L["Specialist Team B"]
        H --> M["Deep Reasoning Cycles"]
    end

    I --> N["Coordinated Results"]
    J --> N
    K --> N
    L --> N
    M --> N

    N --> O["Knowledge Integration"]
    O --> P["system_prompt: Self-Adaptation"]
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    style B fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000
    style C fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
    style D fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    
    style E fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
    style F fill:#f1f8e9,stroke:#558b2f,stroke-width:2px,color:#000
    style G fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000
    style H fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
    
    style I fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
    style J fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
    style K fill:#fce4ec,stroke:#ad1457,stroke-width:2px,color:#000
    style L fill:#fce4ec,stroke:#ad1457,stroke-width:2px,color:#000
    style M fill:#fff3e0,stroke:#ef6c00,stroke-width:2px,color:#000
    
    style N fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    style O fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style P fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    
    style COORDINATION fill:#fafafa,stroke:#424242,stroke-width:2px
    style SPECIALISTS fill:#f5f5f5,stroke:#616161,stroke-width:2px

from strands_research_agent.agent import create_agent

agent, mcp_client = create_agent()

with mcp_client:
    # The orchestration story: One brain, multiple specialists
    # Think of it like a research team where the lead researcher
    # (main agent) coordinates different experts working in parallel
    
    # Expert 1: Market Research Specialist (background task)
    agent.tool.tasks(
        action="create",
        task_id="market_research",
        prompt="Research AI agent market trends and competitive landscape",
        system_prompt="You are a market research analyst specializing in AI technologies.",
        tools=["scraper", "http_request", "store_in_kb"]
    )
    # This agent works independently, reports back when done

    # Expert 2: Technical Architect (different model, specialized brain)
    technical_analysis = agent.tool.use_agent(
        prompt="Analyze technical capabilities of top 5 AI agent frameworks",
        system_prompt="You are a senior software architect",
        model_provider="openai",  # Different AI model = different thinking style
        model_settings={"model_id": "gpt-4", "temperature": 0.2}
    )
    # Lower temperature = more analytical, precise thinking

    # The coordination: Experts share knowledge
    agent.tool.tasks(
        action="add_message",
        task_id="market_research", 
        message="Integrate technical analysis findings into market research"
    )
    # Knowledge flows between specialists, compound intelligence emerges

Dynamic Self-Modification

The agent can modify its own behavior at runtime:

# The evolution story: Agent learns and adapts its personality
# Like a researcher who gets better at research through experience

# Agent reflects: "I've learned something important about competitive analysis"
agent.tool.system_prompt(
    action="update",
    prompt="You are now a competitive intelligence specialist with deep knowledge of AI agent frameworks. Focus on technical differentiation and market positioning."
)
# The agent literally rewrites its own identity based on expertise gained

# The memory formation: Insights become institutional knowledge
agent.tool.store_in_kb(
    content="Key findings from competitive analysis research session...",
    title="AI Agent Framework Analysis - Q4 2024"
)
# Today's breakthrough becomes tomorrow's context
# This is how AI systems develop expertise over time

Meta-Agent Cascading Orchestration

The research agent demonstrates unique emergent intelligence patterns through recursive meta-tool usage:

graph TD
    subgraph LEVEL1["LEVEL 1: Primary Agent"]
        A["Primary Agent<br/>Research Coordinator"]
    end

    A --> B{"Complex Research Task<br/>Assessment"}

    subgraph LEVEL2["LEVEL 2: Sub-Agents"]
        B --> C["use_agent: Create Sub-Agent<br/>Market Analyst"]
        C --> D["Sub-Agent Processing<br/>Market Analysis"]
    end

    D --> E{"Sub-Task Complexity?<br/>Need Deeper Analysis"}

    subgraph LEVEL3["LEVEL 3: Sub-Sub-Agents"]
        E -->|"High Complexity"| F["use_agent: Create Sub-Sub-Agent<br/>Technical Specialist"]
        E -->|"Medium Complexity"| G["tasks: Background Processing<br/>Data Collection"]
        E -->|"Simple Tasks"| H["Direct Processing<br/>Basic Analysis"]
    end

    subgraph LEVEL4["LEVEL 4: Micro-Specialists"]
        F --> I["Sub-Sub-Agent Analysis<br/>Code Architecture Review"]
        G --> J["Background Task Spawns<br/>More Specialized Tasks"]
    end

    subgraph RESULTS["Intelligence Compound Effect"]
        I --> K["Results Flow Up Chain<br/>Technical Insights"]
        J --> K
        H --> K

        K --> L["Compound Intelligence<br/>Synthesis & Integration"]
        L --> M["Emergent Research Insights<br/>Beyond Sum of Parts"]
    end
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
    style B fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    
    style C fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style D fill:#f8bbd9,stroke:#7b1fa2,stroke-width:2px,color:#000
    style E fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
    
    style F fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
    style G fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
    style H fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
    
    style I fill:#fff8e1,stroke:#f57c00,stroke-width:2px,color:#000
    style J fill:#e0f7fa,stroke:#00838f,stroke-width:2px,color:#000
    
    style K fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    style L fill:#e8f5e8,stroke:#1b5e20,stroke-width:4px,color:#000
    style M fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
    
    style LEVEL1 fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style LEVEL2 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
    style LEVEL3 fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
    style LEVEL4 fill:#e0f7fa,stroke:#00838f,stroke-width:3px
    style RESULTS fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px

# Example: Cascading orchestration in action
# Primary agent recognizes complex research need
result = agent.tool.use_agent(
    prompt="Analyze AI agent market landscape comprehensively",
    system_prompt="You are a research coordinator with meta-cognitive capabilities",
    tools=["use_agent", "tasks", "retrieve", "store_in_kb"]
)

# What happens behind the scenes:
# 1. Research Coordinator Agent (Level 1) breaks down the task
# 2. Creates Technical Analysis Specialist via use_agent (Level 2)
# 3. Technical Specialist recognizes need for deeper analysis
# 4. Creates Code Analysis Sub-Agent via use_agent (Level 3) 
# 5. Meanwhile, creates background tasks for parallel processing
# 6. Each level can spawn additional agents or tasks as needed
#
# This creates exponential intelligence scaling:
# 1 Agent → 3 Agents → 9+ Specialist Agents → Emergent insights
#
# The breakthrough: Intelligence scales with compute through coordination

Relay Chain Intelligence Pattern

Agents create successor agents while still running, forming continuous intelligence chains:

graph LR
    subgraph TIMELINE["Temporal Flow: Parallel Intelligence Chain"]
        subgraph T1["Time T1: Agent A Starts"]
            A["Agent A<br/>Market Analysis"]
        end

        subgraph T2["Time T2: A Creates B (A Still Running)"]
            A1["Agent A Processing...<br/>Market Research"]
            B["A creates Agent B<br/>Technical Analysis"]
        end

        subgraph T3["Time T3: B Creates C (A & B Running)"]
            B1["Agent B Processing...<br/>Technical Research"]
            C["B creates Agent C<br/>Code Analysis"]
        end

        subgraph T4["Time T4: C Creates D (All Running)"]
            C1["Agent C Processing...<br/>Code Review"]
            D["C creates Agent D<br/>Implementation"]
        end

        subgraph T5["Time T5: Parallel Completion"]
            D1["Agent D Processing...<br/>Implementation Details"]

            E["Agent A Completes<br/>Market Insights"]
            F["Agent B Completes<br/>Technical Insights"]
            G["Agent C Completes<br/>Code Insights"]
            H["Agent D Completes<br/>Implementation Plan"]
        end
    end

    subgraph SYNTHESIS["Intelligence Synthesis"]
        E --> I["Results Chain Integration"]
        F --> I
        G --> I
        H --> I

        I --> J["Enhanced Final Analysis<br/>Beyond Individual Capabilities"]
    end
    
    A --> A1
    A1 --> B
    B --> B1
    B1 --> C
    C --> C1
    C1 --> D
    D --> D1
    
    A1 --> E
    B1 --> F
    C1 --> G
    D1 --> H
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    style A1 fill:#e1f5fe,stroke:#0288d1,stroke-width:2px,color:#000
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style B1 fill:#f8bbd9,stroke:#8e24aa,stroke-width:2px,color:#000
    style C fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    style C1 fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
    style D fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    style D1 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
    
    style E fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px,color:#000
    style F fill:#e0f2f1,stroke:#00695c,stroke-width:3px,color:#000
    style G fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
    style H fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    
    style I fill:#ffebee,stroke:#c62828,stroke-width:4px,color:#000
    style J fill:#ffcdd2,stroke:#d32f2f,stroke-width:4px,color:#000
    
    style T1 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
    style T2 fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
    style T3 fill:#fff8e1,stroke:#ff8f00,stroke-width:2px
    style T4 fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
    style T5 fill:#e0f2f1,stroke:#00695c,stroke-width:2px
    style TIMELINE fill:#fafafa,stroke:#424242,stroke-width:3px
    style SYNTHESIS fill:#ffebee,stroke:#c62828,stroke-width:3px

# Example: Intelligence relay chain in action
# Agent A starts and immediately creates Agent B while continuing its work
result_a = agent.tool.use_agent(
    prompt="Analyze AI market trends and spawn technical analysis specialist",
    system_prompt="Create specialized agents for deeper analysis while you continue market research",
    tools=["use_agent", "scraper", "store_in_kb"]
)

# Behind the scenes relay pattern:
# 1. Agent A: Starts market analysis
# 2. Agent A: Creates Agent B for technical analysis (Agent A still running)
# 3. Agent B: Starts technical work, creates Agent C for code analysis
# 4. Agent C: Starts code work, creates Agent D for implementation details  
# 5. All agents work in parallel, each enhancing the research depth
# 6. Results compound as each agent contributes specialized intelligence
#
# This creates continuous intelligence amplification:
# Each agent both contributes AND spawns the next level of expertise
# The original goal evolves and deepens through the intelligence relay

Background Task Spawning Patterns

Background tasks can autonomously create additional tasks for distributed processing:

graph LR
    subgraph MAIN["Main Agent Process"]
        A["Main Agent<br/>Research Coordinator"]
    end

    A --> B["tasks: Create Background Task<br/>Market Research Analysis"]

    subgraph BACKGROUND["Background Agent Autonomous Processing"]
        B --> C["Background Agent Running<br/>Independent Processing"]

        C --> D{"Task Complexity Assessment<br/>Do I need help?"}

        subgraph SPAWN_LOGIC["Autonomous Spawning Logic"]
            D -->|"High Complexity"| E["tasks: Spawn Sub-Task 1<br/>Technical Analysis"]
            D -->|"High Complexity"| F["tasks: Spawn Sub-Task 2<br/>Market Intelligence"]
            D -->|"Simple Task"| G["Direct Processing<br/>Handle Myself"]
        end
    end

    subgraph SUBTASKS["Sub-Agent Network"]
        E --> H["Sub-Agent 1 Processing<br/>Technical Research"]
        F --> I["Sub-Agent 2 Processing<br/>Market Analysis"]

        H --> J{"Need More Specialization?<br/>Complexity Check"}
        I --> J

        subgraph MICRO_SPAWN["Micro-Task Generation"]
            J -->|"Yes, Too Complex"| K["tasks: Create Micro-Tasks<br/>Company-Specific Analysis"]
            J -->|"No, Manageable"| L["Results Aggregation<br/>Compile Findings"]
        end
    end

    subgraph NETWORK["Distributed Processing Network"]
        K --> M["Micro-Agent Network<br/>Specialized Researchers"]
        M --> L
        G --> L
    end

    subgraph RESULTS["Intelligence Synthesis"]
        L --> N["Compound Results<br/>Multi-Level Analysis"]
        N --> O["Background Task Complete<br/>Report to Main Agent"]
    end
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    style C fill:#f8bbd9,stroke:#8e24aa,stroke-width:3px,color:#000
    style D fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
    
    style E fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
    style F fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
    style G fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
    
    style H fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
    style I fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
    style J fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
    
    style K fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
    style L fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    style M fill:#fff8e1,stroke:#f57c00,stroke-width:2px,color:#000
    
    style N fill:#e8f5e8,stroke:#1b5e20,stroke-width:4px,color:#000
    style O fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
    
    style MAIN fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style BACKGROUND fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
    style SUBTASKS fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px
    style NETWORK fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
    style RESULTS fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px
    style SPAWN_LOGIC fill:#fafafa,stroke:#616161,stroke-width:2px
    style MICRO_SPAWN fill:#fafafa,stroke:#616161,stroke-width:2px

# Example: Self-spawning background research network
agent.tool.tasks(
    action="create",
    task_id="market_research",
    prompt="Research AI agent frameworks and create specialized analysis teams as needed",
    system_prompt="You are a research coordinator. Use tasks and use_agent tools to spawn specialized teams when complexity requires it.",
    tools=["tasks", "use_agent", "scraper", "store_in_kb", "retrieve"]
)

# The spawned background agent autonomously:
# 1. Assesses research complexity
# 2. Creates sub-tasks for technical analysis, market analysis, competitive intelligence
# 3. Each sub-task can spawn micro-tasks for specific companies/frameworks
# 4. Results flow back up the hierarchy for synthesis
# 5. Final comprehensive analysis stored in knowledge base
#
# This pattern enables:
# - Autonomous research team scaling based on complexity
# - Parallel processing without manual orchestration  
# - Exponential research capability through recursive delegation
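
The recursive-delegation idea reduces to a small control loop: assess complexity, either handle the topic directly or split it and delegate each piece, then synthesize results upward with a depth limit to stop unbounded spawning. The sketch below is purely illustrative; `assess`, `decompose`, `analyze`, and `synthesize` are hypothetical stand-ins for what would be LLM and tool calls in the real agent.

```python
def research(topic: str, depth: int = 0, max_depth: int = 2) -> dict:
    """Depth-limited recursive delegation: each level either handles the
    topic itself or spawns sub-research for each subtopic."""
    if depth >= max_depth or assess(topic) == "simple":
        return {"topic": topic, "findings": analyze(topic)}
    children = [research(sub, depth + 1, max_depth) for sub in decompose(topic)]
    return {"topic": topic, "findings": synthesize(children)}

# Hypothetical stand-ins for LLM/tool calls in the real agent:
def assess(topic):      return "simple" if " vs " not in topic else "complex"
def decompose(topic):   return topic.split(" vs ")
def analyze(topic):     return f"notes on {topic}"
def synthesize(parts):  return [p["findings"] for p in parts]
```

The `max_depth` guard is the important design choice: it bounds how far the task hierarchy can grow even when every level decides it "needs help".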

Persistent Learning System

Cross-session knowledge accumulation and context awareness:

graph LR
    subgraph SESSION["Research Session Cycle"]
        A["Research Session<br/>New Query"]
    end

    A --> B["retrieve: Past Context<br/>What do I know?"]

    subgraph RETRIEVAL["Knowledge Retrieval"]
        B --> B1["SQLite Memory<br/>Recent Sessions"]
        B --> B2["Bedrock KB<br/>Long-term Knowledge"]
        B --> B3["S3 Vectors<br/>Semantic Search"]
    end

    subgraph PROCESSING["Agent Processing"]
        B1 --> C["Agent Processing<br/>Enhanced by Past Context"]
        B2 --> C
        B3 --> C

        C --> D["New Insights Generated<br/>Novel Discoveries"]
    end

    subgraph STORAGE["Knowledge Storage & Growth"]
        D --> E1["store_in_kb: Knowledge Storage<br/>Permanent Learning"]
        D --> E2["SQLite: Session Memory<br/>Conversation Context"]
        D --> E3["S3 Vectors: Semantic Memory<br/>Similarity Patterns"]
    end

    subgraph KNOWLEDGE["Knowledge Infrastructure"]
        E1 --> F1["Knowledge Base<br/>Enterprise Memory"]
        E2 --> F2["Local SQLite<br/>Session Context"]
        E3 --> F3["S3 Vectors<br/>Semantic Network"]

        F1 --> G["Cross-Session Memory<br/>Persistent Intelligence"]
        F2 --> G
        F3 --> G
    end

    subgraph EVOLUTION["Self-Evolution"]
        D --> I["system_prompt: Behavior Adaptation<br/>I've learned something new"]
        I --> J["Improved Capabilities<br/>Enhanced Research Patterns"]
        J --> K["Better Research Quality<br/>Exponential Growth"]
    end

    subgraph CONTINUITY["Continuous Learning Loop"]
        G --> H["Future Sessions<br/>Start Smarter"]
        K --> H
        H --> A
    end
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
    style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
    
    style B1 fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
    style B2 fill:#e0f7fa,stroke:#00838f,stroke-width:2px,color:#000
    style B3 fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px,color:#000
    
    style C fill:#fff8e1,stroke:#ff8f00,stroke-width:3px,color:#000
    style D fill:#fff3e0,stroke:#ef6c00,stroke-width:4px,color:#000
    
    style E1 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000
    style E2 fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#000
    style E3 fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
    
    style F1 fill:#f8bbd9,stroke:#8e24aa,stroke-width:2px,color:#000
    style F2 fill:#bbdefb,stroke:#1976d2,stroke-width:2px,color:#000
    style F3 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
    
    style G fill:#e0f2f1,stroke:#00695c,stroke-width:4px,color:#000
    style H fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px,color:#000
    
    style I fill:#fff8e1,stroke:#ff8f00,stroke-width:3px,color:#000
    style J fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
    style K fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
    
    style SESSION fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
    style RETRIEVAL fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
    style PROCESSING fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
    style STORAGE fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px
    style KNOWLEDGE fill:#e0f2f1,stroke:#00695c,stroke-width:3px
    style EVOLUTION fill:#fff3e0,stroke:#ef6c00,stroke-width:3px
    style CONTINUITY fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px

# The continuity story: Every session builds on previous discoveries
# Like a scientist's lab notebook that gets smarter over time

# Agent wakes up: "What did I learn before about this topic?"
context = agent.tool.retrieve(
    text="AI agent framework competitive analysis",
    knowledgeBaseId="your_kb_id",
    numberOfResults=5
)
# The agent queries its own past insights, building on previous work

# This happens automatically:
# - Every conversation gets stored in SQLite (session memory)
# - Important insights get stored in Bedrock Knowledge Base (long-term memory)
# - Future sessions start with accumulated knowledge, not a blank slate
# 
# This creates exponential learning: each research session
# becomes more sophisticated than the last
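
A minimal version of the SQLite session-memory layer can be sketched with the standard library. The table name and schema below are assumptions for illustration, not the agent's actual schema, and a LIKE filter stands in for the real tool's full-text search.

```python
import sqlite3

# Illustrative session-memory store (schema is an assumption).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (session_id TEXT, role TEXT, content TEXT, "
    "ts DATETIME DEFAULT CURRENT_TIMESTAMP)"
)

def remember(session_id: str, role: str, content: str) -> None:
    """Append one conversation turn to session memory."""
    conn.execute(
        "INSERT INTO messages (session_id, role, content) VALUES (?, ?, ?)",
        (session_id, role, content),
    )

def recall(session_id: str, like: str) -> list:
    """Fetch past turns matching a keyword (real tool uses full-text search)."""
    rows = conn.execute(
        "SELECT content FROM messages WHERE session_id = ? AND content LIKE ?",
        (session_id, f"%{like}%"),
    )
    return [r[0] for r in rows]
```

Each new session would call `recall` first, so the agent starts from accumulated context rather than zero.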

Core Tools

The research agent includes specialized tools for advanced research patterns:

Hot-Reloading & Development

  • load_tool - Dynamic tool loading at runtime
  • editor - Create/modify tool files
  • system_prompt - Dynamic behavior modification

Multi-Agent Coordination

  • tasks - Background task management with persistence
  • use_agent - Model switching and delegation
  • swarm - Self-organizing agent teams
  • think - Multi-cycle reasoning

Learning & Memory

  • store_in_kb - Asynchronous knowledge base storage
  • retrieve - Semantic search across stored knowledge
  • sqlite_memory - Session memory with full-text search
  • s3_memory - Vector-based semantic memory

Research & Analysis

  • scraper - Web scraping and parsing
  • http_request - API integrations with authentication
  • graphql - GraphQL queries
  • python_repl - Data analysis and computation

Multiple Model Providers

Support for various model providers with intelligent coordination:

# The specialization story: Different brains for different tasks
# Like having a team of experts, each with unique strengths

# AWS Bedrock (Production recommended) - The strategist
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"

# OpenAI for code analysis - The technical architect
agent.tool.use_agent(
    prompt="Analyze technical architecture", 
    model_provider="openai",  # GPT-4 excels at code understanding
    model_settings={"model_id": "gpt-4", "temperature": 0.2}
)
# Low temperature = precise, analytical thinking

# Anthropic for strategic analysis - The creative strategist  
agent.tool.use_agent(
    prompt="Market positioning analysis",
    model_provider="anthropic",  # Claude excels at nuanced reasoning
    model_settings={"model_id": "claude-3-5-sonnet-20241022"}
)

# Local Ollama for high-volume processing - The workhorse
agent.tool.use_agent(
    prompt="Process large dataset",
    model_provider="ollama",  # Local model for cost-effective bulk work
    model_settings={"model_id": "qwen3:4b", "host": "http://localhost:11434"}
)
# The agent automatically picks the right brain for each job
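
The "right brain for each job" idea can be made explicit with a small routing table that maps task categories to providers. The mapping below simply restates the examples above and is illustrative, not the agent's actual selection policy.

```python
# Hypothetical routing table: task category -> (provider, model settings).
PROVIDER_ROUTES = {
    "code_analysis": ("openai",    {"model_id": "gpt-4", "temperature": 0.2}),
    "strategy":      ("anthropic", {"model_id": "claude-3-5-sonnet-20241022"}),
    "bulk":          ("ollama",    {"model_id": "qwen3:4b",
                                    "host": "http://localhost:11434"}),
}

def route(task_category: str):
    """Pick a provider for a task, falling back to the default Bedrock model."""
    default = ("bedrock",
               {"model_id": "us.anthropic.claude-sonnet-4-20250514-v1:0"})
    return PROVIDER_ROUTES.get(task_category, default)
```

The returned pair matches the `model_provider` / `model_settings` arguments that `use_agent` takes in the examples above.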

Built-in model providers:

  • Amazon Bedrock (default; recommended for production)
  • Anthropic
  • OpenAI
  • Ollama (local models)

Architecture

The research agent demonstrates advanced Strands Agents patterns with a modular, extensible architecture:

graph TB
    subgraph HOTRELOAD["Hot-Reload Engine (Zero Restart Development)"]
        A["./tools/ Directory<br/>Developer Workspace"]
        B["File Watcher<br/>Real-time Monitoring"]
        C["Dynamic Tool Loading<br/>Instant Availability"]
        D["Agent Tool Registry<br/>Live Tool Catalog"]

        A --> B
        B --> C
        C --> D
    end

    subgraph ORCHESTRATION["Multi-Agent Orchestration (Coordination Intelligence)"]
        E["tasks.py<br/>Background Processing"]
        G["use_agent.py<br/>Model Switching"]
        I["swarm.py<br/>Parallel Teams"]
        K["think.py<br/>Multi-Cycle Reasoning"]

        E --> F["Background Processing<br/>Independent Execution"]
        G --> H["Model Switching<br/>Specialized Intelligence"]
        I --> J["Parallel Teams<br/>Collaborative Processing"]
        K --> L["Multi-Cycle Reasoning<br/>Deep Analysis"]
    end

    subgraph LEARNING["Persistent Learning (Compound Intelligence)"]
        M["store_in_kb.py<br/>Knowledge Ingestion"]
        O["retrieve.py<br/>Knowledge Retrieval"]
        Q["sqlite_memory.py<br/>Session Context"]
        S["system_prompt.py<br/>Behavior Adaptation"]

        M --> N["Bedrock Knowledge Base<br/>Enterprise Memory"]
        O --> P["Semantic Search<br/>Context Discovery"]
        Q --> R["Session Context<br/>Local Memory"]
        S --> T["Behavior Adaptation<br/>Dynamic Evolution"]
    end

    subgraph INFRASTRUCTURE["Cloud Infrastructure (AWS Foundation)"]
        U["AWS Bedrock<br/>Model Hosting"]
        W["EventBridge<br/>Distributed Events"]
        Y["S3 Vectors<br/>Semantic Storage"]

        U --> V["Claude Models<br/>Advanced Reasoning"]
        W --> X["Distributed Coordination<br/>Cross-Instance Sync"]
        Y --> Z["Vector Storage<br/>Similarity Search"]
    end

    subgraph CONNECTIONS["System Integration Flow"]
        D --> E
        D --> G
        D --> I
        D --> K
        D --> M
        D --> O
        D --> Q
        D --> S

        F --> U
        H --> U
        J --> U
        L --> U
        N --> U
        P --> U
        R --> Y
        T --> D
    end
    
    style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
    style B fill:#e1f5fe,stroke:#0288d1,stroke-width:2px,color:#000
    style C fill:#81c784,stroke:#388e3c,stroke-width:4px,color:#000
    style D fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px,color:#000
    
    style E fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000
    style F fill:#f8bbd9,stroke:#8e24aa,stroke-width:3px,color:#000
    style G fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
    style H fill:#c8e6c9,stroke:#4caf50,stroke-width:3px,color:#000
    style I fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000
    style J fill:#f8bbd9,stroke:#e91e63,stroke-width:3px,color:#000
    style K fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
    style L fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
    
    style M fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
    style N fill:#c5cae9,stroke:#3f51b5,stroke-width:4px,color:#000
    style O fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
    style P fill:#b2dfdb,stroke:#00695c,stroke-width:3px,color:#000
    style Q fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#000
    style R fill:#bbdefb,stroke:#1976d2,stroke-width:3px,color:#000
    style S fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000
    style T fill:#ffe0b2,stroke:#f57c00,stroke-width:4px,color:#000
    
    style U fill:#ffecb3,stroke:#ffa000,stroke-width:4px,color:#000
    style V fill:#fff8e1,stroke:#ff8f00,stroke-width:4px,color:#000
    style W fill:#e0f7fa,stroke:#00838f,stroke-width:3px,color:#000
    style X fill:#b2ebf2,stroke:#00838f,stroke-width:3px,color:#000
    style Y fill:#f1f8e9,stroke:#558b2f,stroke-width:3px,color:#000
    style Z fill:#c8e6c9,stroke:#558b2f,stroke-width:3px,color:#000
    
    style HOTRELOAD fill:#e8f5e8,stroke:#2e7d32,stroke-width:4px
    style ORCHESTRATION fill:#f3e5f5,stroke:#7b1fa2,stroke-width:4px
    style LEARNING fill:#e8eaf6,stroke:#3f51b5,stroke-width:4px
    style INFRASTRUCTURE fill:#fff8e1,stroke:#ff8f00,stroke-width:4px
    style CONNECTIONS fill:#fafafa,stroke:#424242,stroke-width:2px
๐Ÿ“ฆ strands-research-agent/
โ”œโ”€โ”€ src/strands_research_agent/
โ”‚   โ”œโ”€โ”€ agent.py                 # Main agent with MCP integration
โ”‚   โ”œโ”€โ”€ tools/                   # Specialized tools
โ”‚   โ”‚   โ”œโ”€โ”€ tasks.py             # Background task orchestration
โ”‚   โ”‚   โ”œโ”€โ”€ system_prompt.py     # Dynamic behavior adaptation
โ”‚   โ”‚   โ”œโ”€โ”€ store_in_kb.py       # Knowledge base integration
โ”‚   โ”‚   โ”œโ”€โ”€ scraper.py           # Web research capabilities  
โ”‚   โ”‚   โ””โ”€โ”€ ...                  # Additional research tools
โ”‚   โ””โ”€โ”€ handlers/
โ”‚       โ””โ”€โ”€ callback_handler.py  # Event handling and notifications
โ”œโ”€โ”€ tools/                       # Hot-reloadable tools (auto-created)
โ”œโ”€โ”€ tasks/                       # Task state and results (auto-created)
โ””โ”€โ”€ pyproject.toml              # Package configuration

Documentation

For detailed guidance and examples, explore the project documentation.

Contributing

We welcome contributions! Here's how to get started:

  1. Fork the repository - Click the fork button on GitHub
  2. Set up the development environment:
    # Clone your fork and install in editable mode with dev dependencies
    git clone https://github.com/your-username/samples.git
    cd samples/02-samples/14-research-agent
    pip install -e ".[dev]"
    
    # Code changes hot-reload instantly - no restart between edit and test
    
  3. Create new tools - save .py files in ./tools/; they auto-load instantly
  4. Test your changes - Run research-agent to test new capabilities
  5. Submit pull request - Include examples and documentation
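The auto-loading in step 3 can be approximated with `importlib`. This sketch (the loader function is illustrative, not the SDK's actual mechanism) shows how .py files saved to ./tools/ can be discovered and loaded at runtime without a restart:

```python
import importlib.util
import sys
from pathlib import Path

def load_tools(tools_dir: str = "./tools") -> dict:
    """Load every .py file in tools_dir as a module; returns {name: module}.

    Re-running this picks up newly saved or modified files, which is the
    essence of hot-reloading (the real SDK loader is more sophisticated).
    """
    tools = {}
    for path in Path(tools_dir).glob("*.py"):
        name = path.stem
        spec = importlib.util.spec_from_file_location(name, path)
        module = importlib.util.module_from_spec(spec)
        sys.modules[name] = module
        spec.loader.exec_module(module)
        tools[name] = module
    return tools

# Demo: write a tool file, then load and call it - no restart required
Path("./tools").mkdir(exist_ok=True)
Path("./tools/word_count.py").write_text(
    "def word_count(text):\n"
    "    '''Count whitespace-separated words.'''\n"
    "    return len(text.split())\n"
)
tools = load_tools()
print(tools["word_count"].word_count("hot reload in seconds"))  # 4
```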

Development Areas:

  • Meta-cognitive tools for advanced coordination
  • Research methodologies and analysis patterns
  • Learning systems and knowledge persistence
  • Distributed intelligence and cross-instance coordination

Production Usage

The research agent demonstrates patterns used in production AI systems at AWS:

  • Amazon Q Developer - Uses Strands Agents for intelligent code assistance
  • AWS Glue - Automated data analysis and pipeline optimization
  • VPC Reachability Analyzer - Network intelligence and troubleshooting

Enterprise Features:

  • Cross-session knowledge persistence via AWS Bedrock Knowledge Base
  • Distributed coordination through AWS EventBridge
  • Background task processing with filesystem persistence
  • Multi-model orchestration for specialized intelligence

License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.


Built with Strands Agents SDK | Part of Strands Agents Samples

Download files

Source Distribution

strands_research_agent-0.1.2.tar.gz (59.3 kB)


Built Distribution

strands_research_agent-0.1.2-py3-none-any.whl (53.5 kB)


File details

Details for the file strands_research_agent-0.1.2.tar.gz.

File metadata

  • Download URL: strands_research_agent-0.1.2.tar.gz
  • Size: 59.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.13.5

File hashes

Hashes for strands_research_agent-0.1.2.tar.gz

  • SHA256: a9d732d3e79028e7602637e93d7634e61f7bac035a6bcf6dad4bdee0fbf8d6aa
  • MD5: 897e148395c1534a9c06268f34e7e114
  • BLAKE2b-256: 493c843d57c08fba4ab9b80467cd0ad6f1bd660833b796ae7bc55f56e2b0e0f2

File details

Details for the file strands_research_agent-0.1.2-py3-none-any.whl.

File metadata

File hashes

Hashes for strands_research_agent-0.1.2-py3-none-any.whl

  • SHA256: 42b1ced94c15691c95db5d307e24e479fddd3c7972544cc33fe1d032f1860713
  • MD5: 8892b99b75f755eca5d868898375db42
  • BLAKE2b-256: 97b4ea5f158bbe289ab2fb30c6e1c16ed84d0a3d67ea16a9268e17572c9fbb35
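Downloaded artifacts can be verified against the published digests; a minimal check using Python's hashlib, demonstrated here on an in-memory byte string rather than the actual wheel:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA256 digest of data as lowercase hex."""
    return hashlib.sha256(data).hexdigest()

# For a real download, read the file and compare with the published value:
#   with open("strands_research_agent-0.1.2-py3-none-any.whl", "rb") as f:
#       assert sha256_hex(f.read()) == "42b1ced94c15691c95db5d307e24e479fddd3c7972544cc33fe1d032f1860713"
print(sha256_hex(b"example"))
```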
