Sample Research Agent powered by Strands Agents SDK
Strands Research Agent
Autonomous research agent demonstrating advanced Strands Agents patterns including hot-reloading tools, multi-agent coordination, and persistent learning systems for enterprise research automation.
Feature Overview
- Hot-Reloading Development: Create and modify tools without restarting - save .py files in ./tools/ for instant availability
- Multi-Agent Orchestration: Background tasks, parallel processing, and model coordination across different providers
- Persistent Learning: Cross-session knowledge accumulation via AWS Bedrock Knowledge Base and SQLite memory
- Self-Modifying Systems: Dynamic behavior adaptation through the system_prompt tool and continuous improvement loops
graph LR
    subgraph TRADITIONAL["Traditional Development (Minutes/Hours)"]
        A["Modify Tool"] --> B["Restart Agent"]
        B --> C["Test Change"]
        C --> D["Debug Issues"]
        D --> A
    end
    subgraph HOTRELOAD["Hot-Reload Development (Seconds)"]
        E["Save .py to ./tools/"] --> F["Instant Loading"]
        F --> G["Agent Uses Tool"]
        G --> H["Refine & Test"]
        H --> E
    end
    TRADITIONAL -.->|"Strands Research Agent"| HOTRELOAD
style A fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
style B fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
style C fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
style D fill:#ffcdd2,stroke:#d32f2f,stroke-width:2px,color:#000
style E fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
style F fill:#81c784,stroke:#388e3c,stroke-width:3px,color:#000
style G fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
style H fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
style TRADITIONAL fill:#ffebee,stroke:#d32f2f,stroke-width:2px
style HOTRELOAD fill:#e8f5e8,stroke:#388e3c,stroke-width:2px
Quick Start
# Install the research agent
pip install strands-research-agent[all]
# Configure your model (Bedrock recommended)
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"
# Start interactive research
research-agent
# Agent creates its own tools and uses them immediately
agent("Create tools for competitive intelligence analysis and start researching AI agent frameworks")
# What happens behind the scenes:
# 1. Agent recognizes it needs specialized capabilities
# 2. Creates competitive_intel.py in ./tools/ (hot-loaded instantly)
# 3. Tool becomes available as agent.tool.competitive_intel()
# 4. Agent begins research using its newly created tool
# 5. Stores findings in knowledge base for future sessions
#
# This is tool creation at the speed of thought - no restart, no manual coding
Installation
Ensure you have Python 3.10+ installed, then:
# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate # On Windows use: .venv\Scripts\activate
# Install from PyPI
pip install strands-research-agent[all]
# Or clone for development
git clone https://github.com/strands-agents/samples.git
cd samples/02-samples/14-research-agent
pip install -e .[dev]
Configuration:
# Core configuration
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"
# Optional - for persistent learning
export STRANDS_KNOWLEDGE_BASE_ID="your_kb_id"
export AWS_REGION="us-west-2"
Recommended Settings for Optimal Performance:
# Maximum performance settings for production research workloads
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export STRANDS_ADDITIONAL_REQUEST_FIELDS='{"anthropic_beta": ["interleaved-thinking-2025-05-14", "context-1m-2025-08-07"], "thinking": {"type": "enabled", "budget_tokens": 2048}}'
export STRANDS_MAX_TOKENS="65536"
What these settings provide:
- Enhanced Model: Claude 4 Sonnet with latest capabilities
- Interleaved Thinking: Real-time reasoning during responses for better analysis
- Extended Context: 1M token context window for complex research sessions
- Thinking Budget: 2048 tokens for advanced reasoning cycles
- Maximum Output: 65536 tokens for comprehensive research reports
Note: For the default Amazon Bedrock provider, you'll need AWS credentials configured and model access enabled for Claude 4 Sonnet in the us-west-2 region.
Features at a Glance
Hot-Reloading Tool Development
Automatically create and load tools from the ./tools/ directory:
# ./tools/competitive_intel.py
from strands import tool

@tool
def competitive_intel(company: str, domain: str = "ai-agents") -> dict:
    """Gather competitive intelligence on companies in specific domains.

    This docstring is used by the LLM to understand the tool's purpose.
    """
    # Tool implementation here - the agent wrote this code itself
    return {"status": "success", "analysis": f"Intelligence for {company} in {domain}"}

# The breakthrough: Save this file and it's instantly available
# No imports, no registration, no restart needed
# Just save -> agent.tool.competitive_intel() exists immediately
#
# Traditional AI: Fixed capabilities, human-coded tools
# Research Agent: Self-expanding capabilities, AI-created tools
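Conceptually, hot-reloading is just runtime module import from a watched directory. Below is a minimal sketch of that idea in plain Python; the SDK's actual watcher and tool registry are more sophisticated, and discovering a function by its module name is an illustrative convention here, not the SDK's API:

```python
# Minimal sketch: import every .py file in a tools directory at runtime
# and collect the callable matching each module's name. (Illustrative
# only - the Strands SDK handles discovery and registration itself.)
import importlib.util
from pathlib import Path


def load_tools(tools_dir: str = "./tools") -> dict:
    """Import each .py file in tools_dir; register same-named callables."""
    registry = {}
    for path in Path(tools_dir).glob("*.py"):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)  # run the freshly saved file
        fn = getattr(module, path.stem, None)
        if callable(fn):
            registry[path.stem] = fn
    return registry
```

Calling `load_tools()` again after saving a file picks up the new code with no restart, which is the essence of the hot-reload loop shown above.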
Multi-Agent Task Orchestration
Create background tasks with different models and specialized capabilities:
graph TD
    A["Research Query"] --> B{"Complexity Assessment"}
    B -->|"Simple"| C["Direct Processing"]
    B -->|"Complex"| D["Multi-Agent Coordination"]
    subgraph COORDINATION["Coordination Strategies"]
        D --> E["tasks: Background Processing"]
        D --> F["use_agent: Model Switching"]
        D --> G["swarm: Parallel Teams"]
        D --> H["think: Multi-Cycle Reasoning"]
    end
    subgraph SPECIALISTS["Specialist Agents"]
        E --> I["Market Research Agent"]
        F --> J["Technical Analysis Agent"]
        G --> K["Specialist Team A"]
        G --> L["Specialist Team B"]
        H --> M["Deep Reasoning Cycles"]
    end
    I --> N["Coordinated Results"]
    J --> N
    K --> N
    L --> N
    M --> N
    N --> O["Knowledge Integration"]
    O --> P["system_prompt: Self-Adaptation"]
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
style B fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000
style C fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
style D fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style E fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
style F fill:#f1f8e9,stroke:#558b2f,stroke-width:2px,color:#000
style G fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000
style H fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
style I fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
style J fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
style K fill:#fce4ec,stroke:#ad1457,stroke-width:2px,color:#000
style L fill:#fce4ec,stroke:#ad1457,stroke-width:2px,color:#000
style M fill:#fff3e0,stroke:#ef6c00,stroke-width:2px,color:#000
style N fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style O fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style P fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style COORDINATION fill:#fafafa,stroke:#424242,stroke-width:2px
style SPECIALISTS fill:#f5f5f5,stroke:#616161,stroke-width:2px
from strands_research_agent.agent import create_agent

agent, mcp_client = create_agent()

with mcp_client:
    # The orchestration story: One brain, multiple specialists
    # Think of it like a research team where the lead researcher
    # (main agent) coordinates different experts working in parallel

    # Expert 1: Market Research Specialist (background task)
    agent.tool.tasks(
        action="create",
        task_id="market_research",
        prompt="Research AI agent market trends and competitive landscape",
        system_prompt="You are a market research analyst specializing in AI technologies.",
        tools=["scraper", "http_request", "store_in_kb"]
    )
    # This agent works independently, reports back when done

    # Expert 2: Technical Architect (different model, specialized brain)
    technical_analysis = agent.tool.use_agent(
        prompt="Analyze technical capabilities of top 5 AI agent frameworks",
        system_prompt="You are a senior software architect",
        model_provider="openai",  # Different AI model = different thinking style
        model_settings={"model_id": "gpt-4", "temperature": 0.2}
    )
    # Lower temperature = more analytical, precise thinking

    # The coordination: Experts share knowledge
    agent.tool.tasks(
        action="add_message",
        task_id="market_research",
        message="Integrate technical analysis findings into market research"
    )
    # Knowledge flows between specialists, compound intelligence emerges
Dynamic Self-Modification
The agent can modify its own behavior during runtime:
# The evolution story: Agent learns and adapts its personality
# Like a researcher who gets better at research through experience

# Agent reflects: "I've learned something important about competitive analysis"
agent.tool.system_prompt(
    action="update",
    prompt="You are now a competitive intelligence specialist with deep knowledge of AI agent frameworks. Focus on technical differentiation and market positioning."
)
# The agent literally rewrites its own identity based on expertise gained

# The memory formation: Insights become institutional knowledge
agent.tool.store_in_kb(
    content="Key findings from competitive analysis research session...",
    title="AI Agent Framework Analysis - Q4 2024"
)
# Today's breakthrough becomes tomorrow's context
# This is how AI systems develop expertise over time
Meta-Agent Cascading Orchestration
The research agent demonstrates unique emergent intelligence patterns through recursive meta-tool usage:
graph TD
    subgraph LEVEL1["LEVEL 1: Primary Agent"]
        A["Primary Agent<br/>Research Coordinator"]
    end
    A --> B{"Complex Research Task<br/>Assessment"}
    subgraph LEVEL2["LEVEL 2: Sub-Agents"]
        B --> C["use_agent: Create Sub-Agent<br/>Market Analyst"]
        C --> D["Sub-Agent Processing<br/>Market Analysis"]
    end
    D --> E{"Sub-Task Complexity?<br/>Need Deeper Analysis"}
    subgraph LEVEL3["LEVEL 3: Sub-Sub-Agents"]
        E -->|"High Complexity"| F["use_agent: Create Sub-Sub-Agent<br/>Technical Specialist"]
        E -->|"Medium Complexity"| G["tasks: Background Processing<br/>Data Collection"]
        E -->|"Simple Tasks"| H["Direct Processing<br/>Basic Analysis"]
    end
    subgraph LEVEL4["LEVEL 4: Micro-Specialists"]
        F --> I["Sub-Sub-Agent Analysis<br/>Code Architecture Review"]
        G --> J["Background Task Spawns<br/>More Specialized Tasks"]
    end
    subgraph RESULTS["Intelligence Compound Effect"]
        I --> K["Results Flow Up Chain<br/>Technical Insights"]
        J --> K
        H --> K
        K --> L["Compound Intelligence<br/>Synthesis & Integration"]
        L --> M["Emergent Research Insights<br/>Beyond Sum of Parts"]
    end
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
style B fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style C fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style D fill:#f8bbd9,stroke:#7b1fa2,stroke-width:2px,color:#000
style E fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
style F fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
style G fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
style H fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
style I fill:#fff8e1,stroke:#f57c00,stroke-width:2px,color:#000
style J fill:#e0f7fa,stroke:#00838f,stroke-width:2px,color:#000
style K fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style L fill:#e8f5e8,stroke:#1b5e20,stroke-width:4px,color:#000
style M fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
style LEVEL1 fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
style LEVEL2 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
style LEVEL3 fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
style LEVEL4 fill:#e0f7fa,stroke:#00838f,stroke-width:3px
style RESULTS fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px
# Example: Cascading orchestration in action
# Primary agent recognizes complex research need
result = agent.tool.use_agent(
    prompt="Analyze AI agent market landscape comprehensively",
    system_prompt="You are a research coordinator with meta-cognitive capabilities",
    tools=["use_agent", "tasks", "retrieve", "store_in_kb"]
)
# What happens behind the scenes:
# 1. Research Coordinator Agent (Level 1) breaks down the task
# 2. Creates Technical Analysis Specialist via use_agent (Level 2)
# 3. Technical Specialist recognizes need for deeper analysis
# 4. Creates Code Analysis Sub-Agent via use_agent (Level 3)
# 5. Meanwhile, creates background tasks for parallel processing
# 6. Each level can spawn additional agents or tasks as needed
#
# This creates exponential intelligence scaling:
# 1 Agent -> 3 Agents -> 9+ Specialist Agents -> Emergent insights
#
# The breakthrough: Intelligence scales with compute through coordination
Relay Chain Intelligence Pattern
Agents create successor agents while still running, forming continuous intelligence chains:
graph LR
    subgraph TIMELINE["Temporal Flow: Parallel Intelligence Chain"]
        subgraph T1["Time T1: Agent A Starts"]
            A["Agent A<br/>Market Analysis"]
        end
        subgraph T2["Time T2: A Creates B (A Still Running)"]
            A1["Agent A Processing...<br/>Market Research"]
            B["A creates Agent B<br/>Technical Analysis"]
        end
        subgraph T3["Time T3: B Creates C (A & B Running)"]
            B1["Agent B Processing...<br/>Technical Research"]
            C["B creates Agent C<br/>Code Analysis"]
        end
        subgraph T4["Time T4: C Creates D (All Running)"]
            C1["Agent C Processing...<br/>Code Review"]
            D["C creates Agent D<br/>Implementation"]
        end
        subgraph T5["Time T5: Parallel Completion"]
            D1["Agent D Processing...<br/>Implementation Details"]
            E["Agent A Completes<br/>Market Insights"]
            F["Agent B Completes<br/>Technical Insights"]
            G["Agent C Completes<br/>Code Insights"]
            H["Agent D Completes<br/>Implementation Plan"]
        end
    end
    subgraph SYNTHESIS["Intelligence Synthesis"]
        E --> I["Results Chain Integration"]
        F --> I
        G --> I
        H --> I
        I --> J["Enhanced Final Analysis<br/>Beyond Individual Capabilities"]
    end
A --> A1
A1 --> B
B --> B1
B1 --> C
C --> C1
C1 --> D
D --> D1
A1 --> E
B1 --> F
C1 --> G
D1 --> H
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
style A1 fill:#e1f5fe,stroke:#0288d1,stroke-width:2px,color:#000
style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style B1 fill:#f8bbd9,stroke:#8e24aa,stroke-width:2px,color:#000
style C fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style C1 fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
style D fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style D1 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
style E fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px,color:#000
style F fill:#e0f2f1,stroke:#00695c,stroke-width:3px,color:#000
style G fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
style H fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style I fill:#ffebee,stroke:#c62828,stroke-width:4px,color:#000
style J fill:#ffcdd2,stroke:#d32f2f,stroke-width:4px,color:#000
style T1 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px
style T2 fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px
style T3 fill:#fff8e1,stroke:#ff8f00,stroke-width:2px
style T4 fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px
style T5 fill:#e0f2f1,stroke:#00695c,stroke-width:2px
style TIMELINE fill:#fafafa,stroke:#424242,stroke-width:3px
style SYNTHESIS fill:#ffebee,stroke:#c62828,stroke-width:3px
# Example: Intelligence relay chain in action
# Agent A starts and immediately creates Agent B while continuing its work
result_a = agent.tool.use_agent(
    prompt="Analyze AI market trends and spawn technical analysis specialist",
    system_prompt="Create specialized agents for deeper analysis while you continue market research",
    tools=["use_agent", "scraper", "store_in_kb"]
)
# Behind the scenes relay pattern:
# 1. Agent A: Starts market analysis
# 2. Agent A: Creates Agent B for technical analysis (Agent A still running)
# 3. Agent B: Starts technical work, creates Agent C for code analysis
# 4. Agent C: Starts code work, creates Agent D for implementation details
# 5. All agents work in parallel, each enhancing the research depth
# 6. Results compound as each agent contributes specialized intelligence
#
# This creates continuous intelligence amplification:
# Each agent both contributes AND spawns the next level of expertise
# The original goal evolves and deepens through the intelligence relay
Background Task Spawning Patterns
Background tasks can autonomously create additional tasks for distributed processing:
graph LR
    subgraph MAIN["Main Agent Process"]
        A["Main Agent<br/>Research Coordinator"]
    end
    A --> B["tasks: Create Background Task<br/>Market Research Analysis"]
    subgraph BACKGROUND["Background Agent Autonomous Processing"]
        B --> C["Background Agent Running<br/>Independent Processing"]
        C --> D{"Task Complexity Assessment<br/>Do I need help?"}
        subgraph SPAWN_LOGIC["Autonomous Spawning Logic"]
            D -->|"High Complexity"| E["tasks: Spawn Sub-Task 1<br/>Technical Analysis"]
            D -->|"High Complexity"| F["tasks: Spawn Sub-Task 2<br/>Market Intelligence"]
            D -->|"Simple Task"| G["Direct Processing<br/>Handle Myself"]
        end
    end
    subgraph SUBTASKS["Sub-Agent Network"]
        E --> H["Sub-Agent 1 Processing<br/>Technical Research"]
        F --> I["Sub-Agent 2 Processing<br/>Market Analysis"]
        H --> J{"Need More Specialization?<br/>Complexity Check"}
        I --> J
        subgraph MICRO_SPAWN["Micro-Task Generation"]
            J -->|"Yes, Too Complex"| K["tasks: Create Micro-Tasks<br/>Company-Specific Analysis"]
            J -->|"No, Manageable"| L["Results Aggregation<br/>Compile Findings"]
        end
    end
    subgraph NETWORK["Distributed Processing Network"]
        K --> M["Micro-Agent Network<br/>Specialized Researchers"]
        M --> L
        G --> L
    end
    subgraph RESULTS["Intelligence Synthesis"]
        L --> N["Compound Results<br/>Multi-Level Analysis"]
        N --> O["Background Task Complete<br/>Report to Main Agent"]
    end
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style C fill:#f8bbd9,stroke:#8e24aa,stroke-width:3px,color:#000
style D fill:#fff3e0,stroke:#f57c00,stroke-width:3px,color:#000
style E fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
style F fill:#e1f5fe,stroke:#0277bd,stroke-width:2px,color:#000
style G fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
style H fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
style I fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
style J fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
style K fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
style L fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style M fill:#fff8e1,stroke:#f57c00,stroke-width:2px,color:#000
style N fill:#e8f5e8,stroke:#1b5e20,stroke-width:4px,color:#000
style O fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
style MAIN fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
style BACKGROUND fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
style SUBTASKS fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px
style NETWORK fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
style RESULTS fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px
style SPAWN_LOGIC fill:#fafafa,stroke:#616161,stroke-width:2px
style MICRO_SPAWN fill:#fafafa,stroke:#616161,stroke-width:2px
# Example: Self-spawning background research network
agent.tool.tasks(
    action="create",
    task_id="market_research",
    prompt="Research AI agent frameworks and create specialized analysis teams as needed",
    system_prompt="You are a research coordinator. Use tasks and use_agent tools to spawn specialized teams when complexity requires it.",
    tools=["tasks", "use_agent", "scraper", "store_in_kb", "retrieve"]
)
# The spawned background agent autonomously:
# 1. Assesses research complexity
# 2. Creates sub-tasks for technical analysis, market analysis, competitive intelligence
# 3. Each sub-task can spawn micro-tasks for specific companies/frameworks
# 4. Results flow back up the hierarchy for synthesis
# 5. Final comprehensive analysis stored in knowledge base
#
# This pattern enables:
# - Autonomous research team scaling based on complexity
# - Parallel processing without manual orchestration
# - Exponential research capability through recursive delegation
Persistent Learning System
Cross-session knowledge accumulation and context awareness:
graph LR
    subgraph SESSION["Research Session Cycle"]
        A["Research Session<br/>New Query"]
    end
    A --> B["retrieve: Past Context<br/>What do I know?"]
    subgraph RETRIEVAL["Knowledge Retrieval"]
        B --> B1["SQLite Memory<br/>Recent Sessions"]
        B --> B2["Bedrock KB<br/>Long-term Knowledge"]
        B --> B3["S3 Vectors<br/>Semantic Search"]
    end
    subgraph PROCESSING["Agent Processing"]
        B1 --> C["Agent Processing<br/>Enhanced by Past Context"]
        B2 --> C
        B3 --> C
        C --> D["New Insights Generated<br/>Novel Discoveries"]
    end
    subgraph STORAGE["Knowledge Storage & Growth"]
        D --> E1["store_in_kb: Knowledge Storage<br/>Permanent Learning"]
        D --> E2["SQLite: Session Memory<br/>Conversation Context"]
        D --> E3["S3 Vectors: Semantic Memory<br/>Similarity Patterns"]
    end
    subgraph KNOWLEDGE["Knowledge Infrastructure"]
        E1 --> F1["Knowledge Base<br/>Enterprise Memory"]
        E2 --> F2["Local SQLite<br/>Session Context"]
        E3 --> F3["S3 Vectors<br/>Semantic Network"]
        F1 --> G["Cross-Session Memory<br/>Persistent Intelligence"]
        F2 --> G
        F3 --> G
    end
    subgraph EVOLUTION["Self-Evolution"]
        D --> I["system_prompt: Behavior Adaptation<br/>I've learned something new"]
        I --> J["Improved Capabilities<br/>Enhanced Research Patterns"]
        J --> K["Better Research Quality<br/>Exponential Growth"]
    end
    subgraph CONTINUITY["Continuous Learning Loop"]
        G --> H["Future Sessions<br/>Start Smarter"]
        K --> H
        H --> A
    end
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:4px,color:#000
style B fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px,color:#000
style B1 fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
style B2 fill:#e0f7fa,stroke:#00838f,stroke-width:2px,color:#000
style B3 fill:#e8f5e8,stroke:#2e7d32,stroke-width:2px,color:#000
style C fill:#fff8e1,stroke:#ff8f00,stroke-width:3px,color:#000
style D fill:#fff3e0,stroke:#ef6c00,stroke-width:4px,color:#000
style E1 fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000
style E2 fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#000
style E3 fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
style F1 fill:#f8bbd9,stroke:#8e24aa,stroke-width:2px,color:#000
style F2 fill:#bbdefb,stroke:#1976d2,stroke-width:2px,color:#000
style F3 fill:#c8e6c9,stroke:#388e3c,stroke-width:2px,color:#000
style G fill:#e0f2f1,stroke:#00695c,stroke-width:4px,color:#000
style H fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px,color:#000
style I fill:#fff8e1,stroke:#ff8f00,stroke-width:3px,color:#000
style J fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px,color:#000
style K fill:#c8e6c9,stroke:#1b5e20,stroke-width:4px,color:#000
style SESSION fill:#e3f2fd,stroke:#1976d2,stroke-width:3px
style RETRIEVAL fill:#f3e5f5,stroke:#7b1fa2,stroke-width:3px
style PROCESSING fill:#fff8e1,stroke:#ff8f00,stroke-width:3px
style STORAGE fill:#e8eaf6,stroke:#3f51b5,stroke-width:3px
style KNOWLEDGE fill:#e0f2f1,stroke:#00695c,stroke-width:3px
style EVOLUTION fill:#fff3e0,stroke:#ef6c00,stroke-width:3px
style CONTINUITY fill:#e8f5e8,stroke:#2e7d32,stroke-width:3px
# The continuity story: Every session builds on previous discoveries
# Like a scientist's lab notebook that gets smarter over time
# Agent wakes up: "What did I learn before about this topic?"
context = agent.tool.retrieve(
    text="AI agent framework competitive analysis",
    knowledgeBaseId="your_kb_id",
    numberOfResults=5
)
# The agent queries its own past insights, building on previous work
# This happens automatically:
# - Every conversation gets stored in SQLite (session memory)
# - Important insights get stored in Bedrock Knowledge Base (long-term memory)
# - Future sessions start with accumulated knowledge, not blank slate
#
# This creates exponential learning: each research session
# becomes more sophisticated than the last
Core Tools
The research agent includes specialized tools for advanced research patterns:
Hot-Reloading & Development
- load_tool - Dynamic tool loading at runtime
- editor - Create/modify tool files
- system_prompt - Dynamic behavior modification
Multi-Agent Coordination
- tasks - Background task management with persistence
- use_agent - Model switching and delegation
- swarm - Self-organizing agent teams
- think - Multi-cycle reasoning
Learning & Memory
- store_in_kb - Asynchronous knowledge base storage
- retrieve - Semantic search across stored knowledge
- sqlite_memory - Session memory with full-text search
- s3_memory - Vector-based semantic memory
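To make the session-memory idea concrete, here is a minimal sketch of SQLite-backed conversation memory. The schema and function names are illustrative, not the sqlite_memory tool's actual interface, and where the real tool uses full-text search this sketch uses a simple LIKE filter to stay dependency-free:

```python
# Minimal sketch of SQLite session memory: append conversation turns,
# search them later. (Illustrative schema - not the sqlite_memory API.)
import sqlite3


def open_memory(path: str = ":memory:") -> sqlite3.Connection:
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE IF NOT EXISTS memory (role TEXT, content TEXT)")
    return conn


def remember(conn: sqlite3.Connection, role: str, content: str) -> None:
    conn.execute("INSERT INTO memory VALUES (?, ?)", (role, content))


def recall(conn: sqlite3.Connection, query: str, limit: int = 5) -> list:
    # Substring match stands in for full-text search in this sketch
    rows = conn.execute(
        "SELECT content FROM memory WHERE content LIKE ? LIMIT ?",
        (f"%{query}%", limit),
    )
    return [row[0] for row in rows]
```

Because the database lives in a file, a future session can reopen it and `recall` earlier findings, which is the cross-session continuity the learning loop above relies on.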
Research & Analysis
- scraper - Web scraping and parsing
- http_request - API integrations with authentication
- graphql - GraphQL queries
- python_repl - Data analysis and computation
Multiple Model Providers
Support for various model providers with intelligent coordination:
# The specialization story: Different brains for different tasks
# Like having a team of experts, each with unique strengths
# AWS Bedrock (Production recommended) - The strategist
export STRANDS_MODEL_ID="us.anthropic.claude-sonnet-4-20250514-v1:0"
export MODEL_PROVIDER="bedrock"
# OpenAI for code analysis - The technical architect
agent.tool.use_agent(
    prompt="Analyze technical architecture",
    model_provider="openai",  # GPT-4 excels at code understanding
    model_settings={"model_id": "gpt-4", "temperature": 0.2}
)
# Low temperature = precise, analytical thinking

# Anthropic for strategic analysis - The creative strategist
agent.tool.use_agent(
    prompt="Market positioning analysis",
    model_provider="anthropic",  # Claude excels at nuanced reasoning
    model_settings={"model_id": "claude-3-5-sonnet-20241022"}
)

# Local Ollama for high-volume processing - The workhorse
agent.tool.use_agent(
    prompt="Process large dataset",
    model_provider="ollama",  # Local model for cost-effective bulk work
    model_settings={"model_id": "qwen3:4b", "host": "http://localhost:11434"}
)

# The agent automatically picks the right brain for each job
Built-in model providers:
- Amazon Bedrock (Recommended for production)
- Anthropic
- OpenAI
- Ollama (Local models)
- LiteLLM (Multi-provider proxy)
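The provider choice above is driven by environment variables. A small sketch of how such configuration could be resolved; the default values and the OLLAMA_HOST fallback here are illustrative assumptions, not the agent's documented behavior:

```python
# Sketch: resolve model provider + settings from the environment
# variables used in the configuration examples above. (Defaults are
# illustrative assumptions.)
import os


def model_config() -> dict:
    provider = os.environ.get("MODEL_PROVIDER", "bedrock")
    model_id = os.environ.get(
        "STRANDS_MODEL_ID", "us.anthropic.claude-sonnet-4-20250514-v1:0"
    )
    settings = {"model_id": model_id}
    if provider == "ollama":
        # Local models need a host endpoint (assumed fallback)
        settings["host"] = os.environ.get("OLLAMA_HOST", "http://localhost:11434")
    return {"provider": provider, "settings": settings}
```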
Architecture
The research agent demonstrates advanced Strands Agents patterns with a modular, extensible architecture:
graph TB
    subgraph HOTRELOAD["Hot-Reload Engine (Zero Restart Development)"]
        A["./tools/ Directory<br/>Developer Workspace"]
        B["File Watcher<br/>Real-time Monitoring"]
        C["Dynamic Tool Loading<br/>Instant Availability"]
        D["Agent Tool Registry<br/>Live Tool Catalog"]
        A --> B
        B --> C
        C --> D
    end
    subgraph ORCHESTRATION["Multi-Agent Orchestration (Coordination Intelligence)"]
        E["tasks.py<br/>Background Processing"]
        G["use_agent.py<br/>Model Switching"]
        I["swarm.py<br/>Parallel Teams"]
        K["think.py<br/>Multi-Cycle Reasoning"]
        E --> F["Background Processing<br/>Independent Execution"]
        G --> H["Model Switching<br/>Specialized Intelligence"]
        I --> J["Parallel Teams<br/>Collaborative Processing"]
        K --> L["Multi-Cycle Reasoning<br/>Deep Analysis"]
    end
    subgraph LEARNING["Persistent Learning (Compound Intelligence)"]
        M["store_in_kb.py<br/>Knowledge Ingestion"]
        O["retrieve.py<br/>Knowledge Retrieval"]
        Q["sqlite_memory.py<br/>Session Context"]
        S["system_prompt.py<br/>Behavior Adaptation"]
        M --> N["Bedrock Knowledge Base<br/>Enterprise Memory"]
        O --> P["Semantic Search<br/>Context Discovery"]
        Q --> R["Session Context<br/>Local Memory"]
        S --> T["Behavior Adaptation<br/>Dynamic Evolution"]
    end
    subgraph INFRASTRUCTURE["Cloud Infrastructure (AWS Foundation)"]
        U["AWS Bedrock<br/>Model Hosting"]
        W["EventBridge<br/>Distributed Events"]
        Y["S3 Vectors<br/>Semantic Storage"]
        U --> V["Claude Models<br/>Advanced Reasoning"]
        W --> X["Distributed Coordination<br/>Cross-Instance Sync"]
        Y --> Z["Vector Storage<br/>Similarity Search"]
    end
    subgraph CONNECTIONS["System Integration Flow"]
D --> E
D --> G
D --> I
D --> K
D --> M
D --> O
D --> Q
D --> S
F --> U
H --> U
J --> U
L --> U
N --> U
P --> U
R --> Y
T --> D
end
style A fill:#e3f2fd,stroke:#1976d2,stroke-width:3px,color:#000
style B fill:#e1f5fe,stroke:#0288d1,stroke-width:2px,color:#000
style C fill:#81c784,stroke:#388e3c,stroke-width:4px,color:#000
style D fill:#c8e6c9,stroke:#2e7d32,stroke-width:3px,color:#000
style E fill:#f3e5f5,stroke:#7b1fa2,stroke-width:2px,color:#000
style F fill:#f8bbd9,stroke:#8e24aa,stroke-width:3px,color:#000
style G fill:#e8f5e8,stroke:#388e3c,stroke-width:2px,color:#000
style H fill:#c8e6c9,stroke:#4caf50,stroke-width:3px,color:#000
style I fill:#fce4ec,stroke:#c2185b,stroke-width:2px,color:#000
style J fill:#f8bbd9,stroke:#e91e63,stroke-width:3px,color:#000
style K fill:#fff8e1,stroke:#ff8f00,stroke-width:2px,color:#000
style L fill:#fff3e0,stroke:#ef6c00,stroke-width:3px,color:#000
style M fill:#e8eaf6,stroke:#3f51b5,stroke-width:2px,color:#000
style N fill:#c5cae9,stroke:#3f51b5,stroke-width:4px,color:#000
style O fill:#e0f2f1,stroke:#00695c,stroke-width:2px,color:#000
style P fill:#b2dfdb,stroke:#00695c,stroke-width:3px,color:#000
style Q fill:#e3f2fd,stroke:#1976d2,stroke-width:2px,color:#000
style R fill:#bbdefb,stroke:#1976d2,stroke-width:3px,color:#000
style S fill:#fff3e0,stroke:#f57c00,stroke-width:2px,color:#000
style T fill:#ffe0b2,stroke:#f57c00,stroke-width:4px,color:#000
style U fill:#ffecb3,stroke:#ffa000,stroke-width:4px,color:#000
style V fill:#fff8e1,stroke:#ff8f00,stroke-width:4px,color:#000
style W fill:#e0f7fa,stroke:#00838f,stroke-width:3px,color:#000
style X fill:#b2ebf2,stroke:#00838f,stroke-width:3px,color:#000
style Y fill:#f1f8e9,stroke:#558b2f,stroke-width:3px,color:#000
style Z fill:#c8e6c9,stroke:#558b2f,stroke-width:3px,color:#000
style HOTRELOAD fill:#e8f5e8,stroke:#2e7d32,stroke-width:4px
style ORCHESTRATION fill:#f3e5f5,stroke:#7b1fa2,stroke-width:4px
style LEARNING fill:#e8eaf6,stroke:#3f51b5,stroke-width:4px
style INFRASTRUCTURE fill:#fff8e1,stroke:#ff8f00,stroke-width:4px
style CONNECTIONS fill:#fafafa,stroke:#424242,stroke-width:2px
strands-research-agent/
├── src/strands_research_agent/
│   ├── agent.py                 # Main agent with MCP integration
│   ├── tools/                   # Specialized tools
│   │   ├── tasks.py             # Background task orchestration
│   │   ├── system_prompt.py     # Dynamic behavior adaptation
│   │   ├── store_in_kb.py       # Knowledge base integration
│   │   ├── scraper.py           # Web research capabilities
│   │   └── ...                  # Additional research tools
│   └── handlers/
│       └── callback_handler.py  # Event handling and notifications
├── tools/                       # Hot-reloadable tools (auto-created)
├── tasks/                       # Task state and results (auto-created)
└── pyproject.toml               # Package configuration
Documentation
For detailed guidance & examples, explore our documentation:
- Strands Agents Documentation - Core framework and concepts
- Strands Agents 1.0 Release - Multi-agent orchestration foundations
- Original SDK Introduction - The vision and architecture
- Production Deployment Guide - Enterprise deployment patterns
Contributing
We welcome contributions! Here's how to get started:
- Fork the repository - Click the fork button on GitHub
- Setup development environment:
# The contributor's journey: From clone to breakthrough
git clone https://github.com/your-username/samples.git
cd samples/02-samples/14-research-agent
pip install -e .[dev]
# Now you're ready to push the boundaries of AI agent capabilities
# Your code changes will hot-reload instantly - no friction between idea and execution
- Create new tools - Save .py files in ./tools/ - they auto-load instantly
- Test your changes - Run research-agent to test new capabilities
- Submit pull request - Include examples and documentation
Development Areas:
- Meta-cognitive tools for advanced coordination
- Research methodologies and analysis patterns
- Learning systems and knowledge persistence
- Distributed intelligence and cross-instance coordination
Production Usage
The research agent demonstrates patterns used in production AI systems at AWS:
- Amazon Q Developer - Uses Strands Agents for intelligent code assistance
- AWS Glue - Automated data analysis and pipeline optimization
- VPC Reachability Analyzer - Network intelligence and troubleshooting
Enterprise Features:
- Cross-session knowledge persistence via AWS Bedrock Knowledge Base
- Distributed coordination through AWS EventBridge
- Background task processing with filesystem persistence
- Multi-model orchestration for specialized intelligence
License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Built with Strands Agents SDK | Part of Strands Agents Samples
File details
Details for the file strands_research_agent-0.1.1.tar.gz.
File metadata
- Download URL: strands_research_agent-0.1.1.tar.gz
- Upload date:
- Size: 107.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 977d2d981a3146a1cb84eda2e0ff503e2ce3bec69158136c8512c850840185e0 |
| MD5 | 9079c753ee3be2a2588db47fe863ae7f |
| BLAKE2b-256 | 3955bc393d44d5419a96033d2dcdd0d540820d927254737ba5ab894875282805 |
File details
Details for the file strands_research_agent-0.1.1-py3-none-any.whl.
File metadata
- Download URL: strands_research_agent-0.1.1-py3-none-any.whl
- Upload date:
- Size: 107.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.13.5
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | 08f82a2f4feb7ecb7b8ffe9db48e24b7224cc36a090e3abb10ab56f3fc76b4da |
| MD5 | d52ecad30f8f3a00e551048f7e787ba2 |
| BLAKE2b-256 | b6cd6ade7ec4108890cbccaf92d08ed6cbbca98812d81fca8fd2e7e917ff6ffb |