Strands Spot
Boston Dynamics Spot Control for Strands Agents
A Python tool for controlling Boston Dynamics Spot robots through the Strands Agents framework.
Call Spot SDK services and methods directly without writing boilerplate connection code:
use_spot(service="robot_command", method="stand")
use_spot(service="image", method="get_image_from_sources")
use_spot(service="power", method="power_on")
Works with natural language when used with Strands agents:
agent = Agent(tools=[use_spot])
agent("Make Spot stand up and take a picture")
How It Works
You specify which Spot SDK service and method to call. The tool handles connections, authentication, and lease management:
use_spot(
    service="robot_command",  # Which SDK service
    method="stand",           # Which method to call
    params={}                 # Method parameters
)
This maps directly onto the Spot SDK: service="robot_command" → RobotCommandClient, method="stand" → the RobotCommandBuilder.synchro_stand_command() helper
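For context, here is roughly the boilerplate a single stand command replaces when written against the raw bosdyn-client API (a sketch using standard Spot SDK calls; the tool's internals may differ):

import bosdyn.client
from bosdyn.client.lease import LeaseClient, LeaseKeepAlive
from bosdyn.client.robot_command import RobotCommandClient, blocking_stand

# Connect, authenticate, and synchronize clocks
sdk = bosdyn.client.create_standard_sdk("StrandsSpotExample")
robot = sdk.create_robot("192.168.80.3")
robot.authenticate("admin", "password")
robot.time_sync.wait_for_sync()

# Hold a lease for the duration of the command, then stand
lease_client = robot.ensure_client(LeaseClient.default_service_name)
with LeaseKeepAlive(lease_client, must_acquire=True, return_at_exit=True):
    command_client = robot.ensure_client(RobotCommandClient.default_service_name)
    blocking_stand(command_client, timeout_sec=10)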
Installation
pip install strands-spot
Dependencies:
pip install bosdyn-client bosdyn-api bosdyn-core # Required
pip install strands-agents # Optional, for natural language
Environment setup:
export SPOT_HOSTNAME="192.168.80.3"
export SPOT_USERNAME="admin"
export SPOT_PASSWORD="password"
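With these set, hostname and credentials can be omitted from individual calls; examples below that skip them assume use_spot falls back to the SPOT_* variables:

# Connection details resolved from SPOT_HOSTNAME / SPOT_USERNAME / SPOT_PASSWORD
use_spot(service="robot_state", method="get_robot_state")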
Usage
from strands_spot import use_spot
# Stand the robot
use_spot(
    hostname="192.168.80.3",
    service="robot_command",
    method="stand"
)
# Capture image
use_spot(
    hostname="192.168.80.3",
    service="image",
    method="get_image_from_sources",
    params={"image_sources": ["frontleft_fisheye_image"]}
)
# With natural language
from strands import Agent
agent = Agent(tools=[use_spot])
agent("Make Spot stand and take a picture")
Available Services
| Service | Client | Common Methods | Description |
|---|---|---|---|
| robot_command | RobotCommandClient | stand, sit, velocity_command, self_right | Motion control and poses |
| robot_state | RobotStateClient | get_robot_state, get_robot_metrics, get_robot_hardware_configuration | Query robot status |
| power | PowerClient | power_on, power_off, power_cycle_robot | Power management |
| image | ImageClient | list_image_sources, get_image_from_sources | Camera capture |
| graph_nav | GraphNavClient | navigate_to, upload_graph, set_localization | Autonomous navigation |
| manipulation | ManipulationApiClient | manipulation_api_command, grasp_override | Arm control |
| docking | DockingClient | docking_command, get_docking_config | Charging station docking |
| lease | LeaseClient | acquire, release, list_leases | Resource management |
| estop | EstopClient | register, deregister, set_status | Emergency stop |
| time_sync | TimeSyncClient | get_robot_time_range, update | Clock synchronization |
| directory | DirectoryClient | list, get_entry | Service discovery |
| choreography | ChoreographyClient | execute_choreography, list_all_moves | Dance routines |
| data_acquisition | DataAcquisitionClient | acquire_data, list_capture_actions | Sensor data collection |
| autowalk | AutowalkClient | load_autowalk, compile_autowalk | Mission playback |
| spot_check | SpotCheckClient | start_spot_check, spot_check_feedback | Robot diagnostics |
💡 100+ Methods Available
Each service exposes 5-20 methods. Examples:
robot_command (Motion Control):
- stand, sit, self_right, safe_power_off
- velocity_command, trajectory_command
- arm_stow, arm_ready, gripper_command
- stance_command, follow_arm_command

robot_state (Status Queries):
- get_robot_state, get_robot_metrics
- get_robot_hardware_configuration
- get_hardware_status_streaming

image (Vision):
- list_image_sources, get_image_from_sources
- build_image_request, decode_image

graph_nav (Navigation):
- navigate_to, navigate_route, navigate_to_anchor
- upload_graph, download_graph
- set_localization, get_localization_state
- clear_graph, get_status
See Spot SDK Python Client Reference for complete method documentation.
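To check what a particular robot actually exposes, the directory service from the table above enumerates its registered services at runtime:

# List every service registered on this robot
result = use_spot(service="directory", method="list", params={})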
Features
- Automatic lease management - leases are acquired and released automatically
- Vision model integration - captured images are returned in a format vision-capable LLMs can analyze
- All SDK parameters exposed - method parameters pass straight through to the underlying Spot SDK
# Images captured from Spot can be analyzed by vision models
agent = Agent(tools=[use_spot])
agent("Take a picture and describe what you see")
Examples
Complete Workflow: Power → Stand → Capture → Sit → Power Off
from strands_spot import use_spot
hostname = "192.168.80.3"
username = "admin"
password = "password"
# 1. Power on motors
use_spot(
    hostname=hostname, username=username, password=password,
    service="power", method="power_on", params={"timeout_sec": 20}
)
# 2. Stand up
use_spot(
    hostname=hostname, username=username, password=password,
    service="robot_command", method="stand", params={}
)
# 3. Capture image from front-left camera
result = use_spot(
    hostname=hostname, username=username, password=password,
    service="image", method="get_image_from_sources",
    params={"image_sources": ["frontleft_fisheye_image"]}
)
# 4. Sit down
use_spot(
    hostname=hostname, username=username, password=password,
    service="robot_command", method="sit", params={}
)
# 5. Power off
use_spot(
    hostname=hostname, username=username, password=password,
    service="power", method="power_off",
    params={"cut_immediately": False, "timeout_sec": 20}
)
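In a real script you may want the shutdown steps to run even if an earlier call raises; a defensive variant (an optional pattern, not required by the tool) wraps the sequence in try/finally:

try:
    use_spot(hostname=hostname, username=username, password=password,
             service="power", method="power_on", params={"timeout_sec": 20})
    use_spot(hostname=hostname, username=username, password=password,
             service="robot_command", method="stand", params={})
    # ... capture images, navigate, etc. ...
finally:
    # Always attempt to sit and cut power, even after an error
    use_spot(hostname=hostname, username=username, password=password,
             service="robot_command", method="sit", params={})
    use_spot(hostname=hostname, username=username, password=password,
             service="power", method="power_off",
             params={"cut_immediately": False, "timeout_sec": 20})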
Velocity Control (Walking)
import time
# Walk forward at 0.5 m/s
use_spot(
    service="robot_command",
    method="velocity_command",
    params={"v_x": 0.5, "v_y": 0.0, "v_rot": 0.0}
)
time.sleep(3)  # Walk for 3 seconds
# Turn in place at 0.3 rad/s
use_spot(
    service="robot_command",
    method="velocity_command",
    params={"v_x": 0.0, "v_y": 0.0, "v_rot": 0.3}
)
time.sleep(2)  # Turn for 2 seconds
# Stop
use_spot(
    service="robot_command",
    method="velocity_command",
    params={"v_x": 0.0, "v_y": 0.0, "v_rot": 0.0}
)
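Timed velocity commands all follow the same send-wait-stop pattern, so a small helper (hypothetical, not part of the package) keeps longer scripts readable:

def drive(v_x=0.0, v_y=0.0, v_rot=0.0, duration=0.0):
    """Send a velocity command, hold it for duration seconds, then stop."""
    use_spot(service="robot_command", method="velocity_command",
             params={"v_x": v_x, "v_y": v_y, "v_rot": v_rot})
    time.sleep(duration)
    use_spot(service="robot_command", method="velocity_command",
             params={"v_x": 0.0, "v_y": 0.0, "v_rot": 0.0})

drive(v_x=0.5, duration=3)    # walk forward
drive(v_rot=0.3, duration=2)  # turn in place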
Arm Manipulation
# Unstow the arm
use_spot(service="robot_command", method="arm_ready", params={})
# Move arm to position (Cartesian command)
use_spot(
    service="manipulation",
    method="manipulation_api_command",
    params={
        "arm_cartesian_command": {
            "pose": {
                "position": {"x": 0.8, "y": 0.0, "z": 0.25},
                "rotation": {"w": 1, "x": 0, "y": 0, "z": 0}
            }
        }
    }
)
# Close gripper
use_spot(
    service="robot_command",
    method="gripper_command",
    params={
        "claw_gripper_command": {
            "trajectory": {"position": 0.0}  # 0.0 = fully closed
        }
    }
)
# Stow the arm
use_spot(service="robot_command", method="arm_stow", params={})
Multi-Camera Capture
# List all available cameras
result = use_spot(service="image", method="list_image_sources", params={})
print(result["content"][1]["json"]["response_data"]["image_sources"])
# Capture from multiple cameras simultaneously
result = use_spot(
    service="image",
    method="get_image_from_sources",
    params={
        "image_sources": [
            "frontleft_fisheye_image",
            "frontright_fisheye_image",
            "hand_color_image"
        ]
    }
)
# Images are automatically formatted for LLM consumption!
# The response content contains image blocks that the LLM can "see"
# Content structure:
# [
# {"text": "✅ Executed image.get_image_from_sources - captured 3 image(s)"},
# {"image": {"format": "jpeg", "source": {"bytes": <image1_bytes>}}},
# {"image": {"format": "jpeg", "source": {"bytes": <image2_bytes>}}},
# {"image": {"format": "jpeg", "source": {"bytes": <image3_bytes>}}},
# {"json": {"response_data": {...}}},
# {"json": {"metadata": {...}}}
# ]
# To save images manually (optional):
for i, content_block in enumerate(result["content"]):
    if "image" in content_block:
        image_bytes = content_block["image"]["source"]["bytes"]
        with open(f"spot_image_{i}.jpg", "wb") as f:
            f.write(image_bytes)
        print(f"Saved spot_image_{i}.jpg")
Natural Language Control
from strands import Agent
from strands_spot import use_spot
agent = Agent(tools=[use_spot])
# Agent interprets and executes
agent("""
Connect to Spot robot at 192.168.80.3 with admin credentials.
First, check the robot's battery level.
If battery is above 20%, make the robot stand up and wave.
Then capture images from all cameras and sit back down.
""")
# The agent will break this into atomic use_spot calls:
# 1. use_spot(service="robot_state", method="get_robot_state", ...)
# 2. use_spot(service="robot_command", method="stand", ...)
# 3. use_spot(service="robot_command", method="arm_ready", ...)
# 4. use_spot(service="image", method="get_image_from_sources", ...)
# 5. use_spot(service="robot_command", method="sit", ...)
Vision-Enabled Agent (LLM Can See!)
from strands import Agent
from strands_spot import use_spot
agent = Agent(tools=[use_spot])
# The agent can capture AND analyze images
response = agent("""
Connect to Spot at 192.168.80.3.
Take a picture from the front-left camera and tell me:
1. What objects do you see?
2. Is the path ahead clear?
3. Are there any obstacles?
""")
# Behind the scenes:
# 1. Agent calls use_spot(service="image", method="get_image_from_sources")
# 2. Tool returns image in LLM-readable format
# 3. Agent's vision model analyzes the image
# 4. Agent provides natural language response with image analysis
print(response)
# Output: "I can see a hallway with clear flooring. There's a door on the left
# and some office furniture on the right. The path ahead is clear with
# no obstacles detected within 5 meters."
Real-World Scenario: Autonomous Inspection
agent = Agent(tools=[use_spot])
# Complex multi-step task with vision
agent("""
Using Spot robot at 192.168.80.3:
1. Stand up and check battery level
2. Walk forward 2 meters
3. Capture images from all 5 cameras
4. Analyze the images and report:
- Any equipment damage visible
- Temperature gauge readings (if visible)
- Safety hazards
5. Walk back 2 meters
6. Sit down and power off
Report your findings in a structured format.
""")
# The agent autonomously:
# - Plans the sequence of SDK calls
# - Executes motion commands
# - Captures multiple camera views
# - Analyzes visual data with vision models
# - Generates comprehensive inspection report
Safety
Before operating:
- Clear 3m around robot
- Keep E-stop accessible
- Verify Spot firmware compatibility (SDK 5.0+)
During operation:
# Check robot state before commands
state = use_spot(service="robot_state", method="get_robot_state")
# Use timeouts to prevent hanging
use_spot(service="power", method="power_on", params={"timeout_sec": 20})
# Emergency stop
use_spot(service="robot_command", method="velocity_command",
         params={"v_x": 0.0, "v_y": 0.0, "v_rot": 0.0})
License
Apache-2.0