Ravens - Transporter Networks
Ravens is a collection of simulated tasks in PyBullet for learning vision-based robotic manipulation, with emphasis on pick and place. It features a Gym-like API with 10 tabletop rearrangement tasks, each with (i) a scripted oracle that provides expert demonstrations (for imitation learning), and (ii) reward functions that provide partial credit (for reinforcement learning).
(a) block-insertion: pick up the L-shaped red block and place it into the L-shaped fixture.
(b) place-red-in-green: pick up the red blocks and place them into the green bowls amidst other objects.
(c) towers-of-hanoi: sequentially move disks from one tower to another—only smaller disks can be on top of larger ones.
(d) align-box-corner: pick up the randomly sized box and align one of its corners to the L-shaped marker on the tabletop.
(e) stack-block-pyramid: sequentially stack 6 blocks into a pyramid of 3-2-1 with rainbow colored ordering.
(f) palletizing-boxes: pick up homogeneous fixed-sized boxes and stack them in transposed layers on the pallet.
(g) assembling-kits: pick up different objects and arrange them on a board marked with corresponding silhouettes.
(h) packing-boxes: pick up randomly sized boxes and place them tightly into a container.
(i) manipulating-rope: rearrange a deformable rope such that it connects the two endpoints of a 3-sided square.
(j) sweeping-piles: push piles of small objects into a target goal zone marked on the tabletop.
Some tasks require generalizing to unseen objects (d,g,h), or multi-step sequencing with closed-loop feedback (c,e,f,h,i,j).
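The Gym-like API, scripted oracles, and partial-credit rewards described above can be driven with a loop along these lines. This is a minimal sketch: import paths, constructor arguments, and attribute names (Environment, tasks.names, task.oracle, task.max_steps) are assumptions based on the repository layout and may differ between versions.

import os
# Hedged sketch of the Gym-like interaction loop: a scripted oracle supplies
# expert actions, and the environment returns partial-credit rewards.
from ravens import tasks
from ravens.environments.environment import Environment

env = Environment('./ravens/environments/assets/', disp=False)  # headless
task = tasks.names['block-insertion']()
task.mode = 'train'
env.set_task(task)

agent = task.oracle(env)   # scripted expert used to generate demonstrations
obs = env.reset()
info, total_reward = None, 0

for _ in range(task.max_steps):
    act = agent.act(obs, info)
    obs, reward, done, info = env.step(act)
    total_reward += reward   # rewards over a successful episode sum to 1
    if done:
        break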
Team: this repository is developed and maintained by Andy Zeng, Pete Florence, Daniel Seita, Jonathan Tompson, and Ayzaan Wahid. This is the reference repository for the paper:
Transporter Networks: Rearranging the Visual World for Robotic Manipulation
Project Website • PDF • Conference on Robot Learning (CoRL) 2020
Andy Zeng, Pete Florence, Jonathan Tompson, Stefan Welker, Jonathan Chien, Maria Attarian, Travis Armstrong,
Ivan Krasin, Dan Duong, Vikas Sindhwani, Johnny Lee
Abstract. Robotic manipulation can be formulated as inducing a sequence of spatial displacements: where the space being moved can encompass an object, part of an object, or end effector. In this work, we propose the Transporter Network, a simple model architecture that rearranges deep features to infer spatial displacements from visual input—which can parameterize robot actions. It makes no assumptions of objectness (e.g. canonical poses, models, or keypoints), it exploits spatial symmetries, and is orders of magnitude more sample efficient than our benchmarked alternatives in learning vision-based manipulation tasks: from stacking a pyramid of blocks, to assembling kits with unseen objects; from manipulating deformable ropes, to pushing piles of small objects with closed-loop feedback. Our method can represent complex multi-modal policy distributions and generalizes to multi-step sequential tasks, as well as 6DoF pick-and-place. Experiments on 10 simulated tasks show that it learns faster and generalizes better than a variety of end-to-end baselines, including policies that use ground-truth object poses. We validate our methods with hardware in the real world.
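To make the core idea of "rearranging deep features to infer spatial displacements" concrete, the sketch below treats features of a crop around the pick point as a correlation filter over features of the full scene, producing a dense heatmap of placement scores. It is an illustration only (random arrays stand in for learned feature maps), not the paper's full architecture.

# Minimal sketch of the cross-correlation idea behind Transporter Networks.
# Random arrays stand in for deep feature maps produced by fully
# convolutional networks in the real model.
import numpy as np
from scipy.signal import correlate2d

scene_feats = np.random.rand(64, 64)   # (H, W) feature map of the workspace
crop_feats = np.random.rand(16, 16)    # feature map of a crop around the pick

# Cross-correlate the crop features against the scene features to score
# every candidate place location.
place_scores = correlate2d(scene_feats, crop_feats, mode='same')

# The best placement is the argmax of the heatmap (in pixel space; the real
# system maps pixels back to 3D poses using the camera parameters).
v, u = np.unravel_index(np.argmax(place_scores), place_scores.shape)
print("candidate place pixel:", (v, u))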
Installation
Step 1. Recommended: install Miniconda with Python 3.7.
curl -O https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -u
echo $'\nexport PATH=~/miniconda3/bin:"${PATH}"\n' >> ~/.profile # Add Conda to PATH.
source ~/.profile
conda init
Step 2. Create and activate Conda environment, then install GCC and Python packages.
cd ~/ravens
conda create --name ravens python=3.7 -y
conda activate ravens
sudo apt-get update
sudo apt-get -y install gcc libgl1-mesa-dev
pip install -r requirements.txt
python setup.py install --user
Step 3. Recommended: install GPU acceleration with NVIDIA CUDA 10.1 and cuDNN 7.6.5 for Tensorflow.
./oss_scripts/install_cuda.sh # For Ubuntu 16.04 and 18.04.
conda install cudatoolkit==10.1.243 -y
conda install cudnn==7.6.5 -y
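After the CUDA and cuDNN install, a quick sanity check (assuming TensorFlow was installed from requirements.txt) confirms that the GPU is visible:

# Prints the list of GPUs TensorFlow can see; an empty list means the
# CUDA/cuDNN setup above is not being picked up.
import tensorflow as tf
print(tf.config.list_physical_devices('GPU'))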
Getting Started
Step 1. Generate training and testing data. Note: remove --disp for headless mode.
python ravens/demos.py --assets_root=./ravens/environments/assets/ --disp=True --task=block-insertion --mode=train --n=10
python ravens/demos.py --assets_root=./ravens/environments/assets/ --disp=True --task=block-insertion --mode=test --n=100
Step 2. Train a model, e.g., the Transporter Networks model.
python ravens/train.py --assets_root=./ravens/environments/assets/ --task=block-insertion --agent=transporter --n_demos=10
Step 3. Evaluate a Transporter Networks agent with the trained model.
python ravens/test.py --assets_root=./ravens/environments/assets/ --disp=True --task=block-insertion --agent=transporter --n_demos=10 --n_steps=1000
Step 4. Plot results.
python ravens/plot.py --assets_root=./ravens/environments/assets/ --disp=True --task=block-insertion --agent=transporter --n_demos=10
Optional. Track training and validation losses with Tensorboard.
python -m tensorboard.main --logdir=logs # Open the browser to where it tells you to.
Datasets and Pre-Trained Models
Download our generated train and test datasets and pre-trained models.
wget https://storage.googleapis.com/ravens-assets/checkpoints.zip
wget https://storage.googleapis.com/ravens-assets/block-insertion.zip
wget https://storage.googleapis.com/ravens-assets/place-red-in-green.zip
wget https://storage.googleapis.com/ravens-assets/towers-of-hanoi.zip
wget https://storage.googleapis.com/ravens-assets/align-box-corner.zip
wget https://storage.googleapis.com/ravens-assets/stack-block-pyramid.zip
wget https://storage.googleapis.com/ravens-assets/palletizing-boxes.zip
wget https://storage.googleapis.com/ravens-assets/assembling-kits.zip
wget https://storage.googleapis.com/ravens-assets/packing-boxes.zip
wget https://storage.googleapis.com/ravens-assets/manipulating-rope.zip
wget https://storage.googleapis.com/ravens-assets/sweeping-piles.zip
The MDP formulation for each task uses transitions with the following structure (an illustrative sketch follows this list):
Observations: raw RGB-D images and camera parameters (pose and intrinsics).
Actions: a primitive function (to be called by the robot) and parameters.
Rewards: per-step partial credit; the rewards over a successful episode sum to 1.
Info: 6D poses, sizes, and colors of objects.
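For illustration, a single transition could be laid out roughly as below. The field names and shapes are hypothetical stand-ins chosen for readability; the actual dataset stores the same information (observations, actions, rewards, info) per episode rather than as one tuple.

import numpy as np

# Illustrative (hypothetical) layout of one transition.
obs = {
    'color': [np.zeros((480, 640, 3), dtype=np.uint8)],  # one RGB image per camera
    'depth': [np.zeros((480, 640), dtype=np.float32)],   # matching depth images
}
act = {
    'pose0': ((0.5, 0.0, 0.05), (0.0, 0.0, 0.0, 1.0)),   # pick pose (xyz, quaternion)
    'pose1': ((0.6, 0.1, 0.05), (0.0, 0.0, 0.0, 1.0)),   # place pose (xyz, quaternion)
}
reward = 0.25   # partial credit; rewards over a successful episode sum to 1
info = {}       # 6D poses, sizes, and colors of objects in the scene
transition = (obs, act, reward, info)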