
The AI-ready robotics dev kit, with built-in remote control and action models support.


phosphobot

A community-driven platform for robotics enthusiasts to share and explore creative projects built with the phospho starter pack.


Overview

This repository contains demo code and community projects developed using the phospho starter pack. Whether you're a beginner or an experienced developer, you can explore existing projects or contribute your own creations.

Getting started

  1. Get your dev kit: Purchase your phospho starter pack at robots.phospho.ai. Unbox it and set it up following the instructions in the box.

  2. Install the phosphobot server and run it:

# Install it this way
curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash
# Start it this way
phosphobot run
# Upgrade it with brew or with apt
# sudo apt update && sudo apt install phosphobot
# brew update && brew upgrade phosphobot
  3. Use the phosphobot Python client to interact with the phosphobot server API.
pip install --upgrade phosphobot

We release new versions very often.
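Once the server and client are installed, a quick sanity check is to read the robot's joint state through the client. This is a sketch, assuming the server is running locally on port 80; `read_joints` and `angles_rad` match the client script shown later in this README, and `rad_to_deg` is an illustrative helper, not part of phosphobot:

```python
import math

def rad_to_deg(angles_rad):
    """Convert a list of joint angles from radians to degrees."""
    return [a * 180.0 / math.pi for a in angles_rad]

def read_joints_deg(base_url="http://localhost:80"):
    """Read the robot's current joint angles in degrees.

    Requires a running phosphobot server (start it with `phosphobot run`).
    """
    from phosphobot.api.client import PhosphoApi

    client = PhosphoApi(base_url=base_url)
    state = client.control.read_joints()
    return rad_to_deg(state.angles_rad)
```

If the call fails, check that `phosphobot run` is active and that nothing else is bound to port 80.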

How to train ACT with LeRobot?

  1. Record a Dataset with phosphobot: Record a dataset using the app. Do the same gesture 30-50 times (depending on the task complexity) to create a dataset. Learn more

  2. Install LeRobot. LeRobot by Hugging Face is a research-oriented library for AI training that is still a work in progress. We made a few workarounds to ensure it works reliably. On macOS, here is a step-by-step guide.

2.1 Install uv, a Python environment manager.

# On macOS and Linux.
curl -LsSf https://astral.sh/uv/install.sh | sh

2.2 Create a new directory and install requirements.

mkdir my_model
cd my_model
uv init
uv add phosphobot git+https://github.com/phospho-app/lerobot
git clone https://github.com/phospho-app/lerobot

2.3 On macOS with Apple Silicon, set this variable so torchcodec can locate its libraries.

export DYLD_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_LIBRARY_PATH"

2.4 Run the LeRobot training script. For example, on Mac M1:

uv run lerobot/lerobot/scripts/train.py \
 --dataset.repo_id=PLB/simple-lego-pickup-mono-2 \
 --policy.type=act \
 --output_dir=outputs/train/phosphobot_test \
 --job_name=phosphobot_test \
 --policy.device=mps

Change the --dataset.repo_id flag to the id of your dataset on Hugging Face.
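A Hugging Face repo id has the shape user/name, as in the PLB/simple-lego-pickup-mono-2 example above. If you want to catch a malformed id before launching a long training run, a tiny check like this helps (an illustrative helper, not part of phosphobot or LeRobot):

```python
def is_valid_repo_id(repo_id: str) -> bool:
    """Check that a Hugging Face repo id looks like '<user>/<name>'."""
    parts = repo_id.split("/")
    # Exactly one slash, and neither side may be empty.
    return len(parts) == 2 and all(parts)
```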

Change the --policy.device flag based on your hardware: cuda if you have an NVIDIA GPU, mps on a MacBook with Apple Silicon, and cpu otherwise.
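That selection rule can be sketched as a small helper (illustrative only; in a real script you would detect the hardware with torch.cuda.is_available() and torch.backends.mps.is_available()):

```python
def pick_device(has_cuda: bool, has_mps: bool) -> str:
    """Return the --policy.device value for the available hardware."""
    if has_cuda:
        return "cuda"  # NVIDIA GPU
    if has_mps:
        return "mps"   # Apple Silicon
    return "cpu"       # fallback: slow, but works everywhere
```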

  3. Use the ACT model to control your robot:

3.1 Launch the ACT server to run inference. This should run on a beefy GPU machine. Check out the /inference folder for more details.

curl -o server.py https://raw.githubusercontent.com/phospho-app/phosphobot/refs/heads/main/inference/ACT/server.py
uv run server.py --model_id LegrandFrederic/Orange-brick-in-black-box # Replace with <YOUR_HF_MODEL_ID>

3.2 Make sure the phosphobot server is running to control your robot:

# Install it this way
curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash
# Start it this way
phosphobot run

3.3 Create a script called my_model/client.py and paste in the content below.

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "phosphobot",
# ]
# ///
from phosphobot.camera import AllCameras
from phosphobot.api.client import PhosphoApi
from phosphobot.am import ACT

import time
import numpy as np

# Connect to the phosphobot server
client = PhosphoApi(base_url="http://localhost:80")

# Get a camera frame
allcameras = AllCameras()

# Need to wait for the cameras to initialize
time.sleep(1)

# Instantiate the model
model = ACT()

while True:
    images = [
        allcameras.get_rgb_frame(camera_id=0, resize=(240, 320)),
        allcameras.get_rgb_frame(camera_id=1, resize=(240, 320)),
        allcameras.get_rgb_frame(camera_id=2, resize=(240, 320)),
    ]

    # Get the robot state
    state = client.control.read_joints()

    inputs = {"state": np.array(state.angles_rad), "images": np.array(images)}

    # Go through the model
    actions = model(inputs)

    for action in actions:
        # Send the new joint position to the robot
        client.control.write_joints(angles=action.tolist())
        # Wait to respect frequency control (30 Hz)
        time.sleep(1 / 30)

3.4 Run this script to control your robot using the model:

uv run my_model/client.py
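The loop above sleeps a fixed 1/30 s after each command, so the time spent on inference and I/O makes the effective frequency drift below 30 Hz. A drift-free variant schedules against absolute deadlines instead; this is a sketch, and the Ticker name is illustrative, not part of phosphobot:

```python
import time

class Ticker:
    """Keep a loop close to a fixed frequency by sleeping until the
    next absolute deadline, rather than a fixed interval each pass."""

    def __init__(self, hz: float):
        self.period = 1.0 / hz
        self.next_deadline = time.monotonic() + self.period

    def wait(self):
        # Sleep only for whatever time remains until the deadline,
        # so work done inside the loop does not slow the schedule.
        remaining = self.next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        self.next_deadline += self.period

# Usage: create t = Ticker(30) before the loop, then replace
# time.sleep(1 / 30) with t.wait() at the end of each iteration.
```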

For the full detailed instructions and other models (Pi0, OpenVLA, ...), refer to the docs.

Join the Community

Connect with other developers and share your experience in our Discord community

Community Projects

Explore projects created by our community members in the code_examples directory. Each project includes its own documentation and setup instructions.

Support

License

MIT License


Made with 💚 by the Phospho community
