
robomimic: A Modular Framework for Robot Learning from Demonstration


robomimic

[Homepage][Documentation][Study Paper][Study Website][ARISE Initiative]


Latest Updates

  • [07/03/2023] v0.3.0: BC-Transformer and IQL :brain:, support for DeepMind MuJoCo bindings :robot:, pre-trained image reps :eye:, wandb logging :chart_with_upwards_trend:, and more
  • [05/23/2022] v0.2.1: Updated website and documentation to feature more tutorials :notebook_with_decorative_cover:
  • [12/16/2021] v0.2.0: Modular observation modalities and encoders :wrench:, support for MOMART datasets :open_file_folder: [release notes] [documentation]
  • [08/09/2021] v0.1.0: Initial code and paper release

Colab quickstart

Get started with a quick Colab notebook demo of robomimic without installing anything locally.

Open In Colab
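
If you prefer a local setup, the package can be installed from PyPI and driven through its Python API. The sketch below follows the training quickstart from the documentation; the dataset and output paths are placeholders, and the entry points (config_factory, train, get_torch_device) should be verified against the version you install:

# install first: pip install robomimic
import robomimic.utils.torch_utils as TorchUtils
from robomimic.config import config_factory
from robomimic.scripts.train import train

# build a default behavioral cloning (BC) config
config = config_factory(algo_name="bc")

# point training at a demonstration dataset and an output directory
# (placeholder paths)
config.train.data = "/path/to/demo.hdf5"
config.train.output_dir = "/path/to/output"

# use GPU if available, then run the full training loop
device = TorchUtils.get_torch_device(try_to_use_cuda=True)
train(config, device=device)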


robomimic is a framework for robot learning from demonstration. It offers a broad set of demonstration datasets collected across robot manipulation domains, along with offline learning algorithms for learning from these datasets. robomimic aims to make robot learning broadly accessible and reproducible, allowing researchers and practitioners to benchmark tasks and algorithms fairly and to develop the next generation of robot learning algorithms.
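
Datasets are distributed as HDF5 files with a documented structure: a top-level "data" group containing one "demo_X" group per trajectory, each holding observations, actions, and other per-step data. As a rough sketch of inspecting a downloaded file with h5py (the group and key names follow the documented format, but verify them against your file):

import h5py

# open a robomimic dataset (placeholder path) and peek at its layout
with h5py.File("/path/to/demo.hdf5", "r") as f:
    demos = list(f["data"].keys())  # e.g. ["demo_0", "demo_1", ...]
    print("number of demonstrations:", len(demos))

    demo = f["data"][demos[0]]
    print("actions shape:", demo["actions"].shape)        # (T, action_dim)
    print("observation keys:", list(demo["obs"].keys()))  # per-step observations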

Core Features

Reproducing benchmarks

robomimic also makes it easy to reproduce results from existing benchmarks and datasets. See the datasets page for more information on downloading datasets and reproducing experiments.
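
Experiments are specified as JSON config files. As a sketch of how such a config can be loaded through the Python API, mirroring what the training script does with its --config argument (the file path is a placeholder, and the values_unlocked pattern is taken from the training script; verify it against your installed version):

import json
from robomimic.config import config_factory

# load an experiment config JSON (placeholder path)
with open("/path/to/bc.json", "r") as f:
    ext_cfg = json.load(f)

# build a default config for the same algorithm, then merge in the
# external settings; update() raises if the JSON has unknown keys
config = config_factory(ext_cfg["algo_name"])
with config.values_unlocked():
    config.update(ext_cfg)

From there, training proceeds as in the quickstart sketch above.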

Troubleshooting

Please see the troubleshooting section for common fixes, or submit an issue on our GitHub page.

Contributing to robomimic

This project is part of the broader Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative, which aims to lower the barriers to entry for cutting-edge research at the intersection of AI and robotics. The project was started in late 2018 by researchers in the Stanford Vision and Learning Lab (SVL) and is now actively maintained and used for robotics research projects across multiple labs. We welcome community contributions to this project; for details, please check our contributing guidelines.

Citation

Please cite this paper if you use this framework in your work:

@inproceedings{robomimic2021,
  title={What Matters in Learning from Offline Human Demonstrations for Robot Manipulation},
  author={Ajay Mandlekar and Danfei Xu and Josiah Wong and Soroush Nasiriany and Chen Wang and Rohun Kulkarni and Li Fei-Fei and Silvio Savarese and Yuke Zhu and Roberto Mart\'{i}n-Mart\'{i}n},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2021}
}



Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

robomimic-0.3.0.tar.gz (215.3 kB)

File details

Details for the file robomimic-0.3.0.tar.gz.

File metadata

  • Download URL: robomimic-0.3.0.tar.gz
  • Upload date:
  • Size: 215.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.8.12

File hashes

Hashes for robomimic-0.3.0.tar.gz
  • SHA256: b60c1a196469f898a807258ea7943adcb52c6521777abae7fccf92b509f6490a
  • MD5: 2951e507539d74eb3df24c2d14ba2756
  • BLAKE2b-256: 8ef221769478588f72482a8be96cc5235bcb3b852c97a0156250a0c38a61eaa3

See more details on using hashes here.
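
To verify a downloaded archive against the digests above, a standard-library check along these lines works (the expected value is the SHA256 from the table):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a file through SHA256 and return the hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            digest.update(block)
    return digest.hexdigest()

expected = "b60c1a196469f898a807258ea7943adcb52c6521777abae7fccf92b509f6490a"
print("OK" if sha256_of("robomimic-0.3.0.tar.gz") == expected else "MISMATCH")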
