
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning

Project description

robosuite

[Homepage][White Paper][Documentation][ARISE Initiative]


Latest Updates

  • [10/28/2024] v1.5: Added support for diverse robot embodiments (including humanoids), custom robot composition, composite controllers (including whole-body controllers), additional teleoperation devices, and photo-realistic rendering. [release notes] [documentation]

  • [11/15/2022] v1.4: Backend migration to DeepMind's official MuJoCo Python binding, robot textures, and bug fixes 🤖 [release notes] [documentation]

  • [10/19/2021] v1.3: Ray tracing and physically based rendering tools ✨ and access to additional vision modalities 🎥 [video spotlight] [release notes] [documentation]

  • [02/17/2021] v1.2: Added observable sensor models 👀 and dynamics randomization 🎲 [release notes]

  • [12/17/2020] v1.1: Refactored infrastructure and standardized model classes for much easier environment prototyping 🔧 [release notes]


robosuite is a simulation framework for robot learning powered by the MuJoCo physics engine. It also offers a suite of benchmark environments for reproducible research. The current release (v1.5) adds support for diverse robot embodiments (including humanoids), custom robot composition, composite controllers (including whole-body controllers), additional teleoperation devices, and photo-realistic rendering. This project is part of the broader Advancing Robot Intelligence through Simulated Environments (ARISE) Initiative, which aims to lower the barriers to entry for cutting-edge research at the intersection of AI and robotics.

Data-driven algorithms, such as reinforcement learning and imitation learning, provide a powerful and generic tool in robotics. These learning paradigms, fueled by new advances in deep learning, have achieved some exciting successes in a variety of robot control problems. However, the challenges of reproducibility and the limited accessibility of robot hardware (especially during a pandemic) have impaired research progress. The overarching goal of robosuite is to provide researchers with:

  • a standardized set of benchmarking tasks for rigorous evaluation and algorithm development;
  • a modular design that offers great flexibility in designing new robot simulation environments;
  • a high-quality implementation of robot controllers and off-the-shelf learning algorithms to lower the barriers to entry.

This framework was originally developed in late 2017 by researchers at the Stanford Vision and Learning Lab (SVL) as an internal tool for robot learning research. It is now actively maintained and used for robotics research projects at SVL, the UT Robot Perception and Learning Lab (RPL), and the NVIDIA Generalist Embodied Agent Research (GEAR) group. We welcome community contributions to this project. For details, please check out our contributing guidelines.

robosuite offers a modular design of APIs for building new environments, robot embodiments, and robot controllers with procedural generation. We highlight the primary features below:

  • standardized tasks: a diverse set of standardized manipulation tasks of varying complexity, with RL benchmarking results for reproducible research;
  • procedural generation: modular APIs for programmatically creating new environments and new tasks as combinations of robot models, arenas, and parameterized 3D objects (see our robosuite_models repo for extra robot models tailored to robosuite);
  • robot controllers: a selection of controller types to command the robots, such as joint-space velocity control, inverse kinematics control, operational space control, and whole-body control;
  • teleoperation devices: a selection of teleoperation devices, including keyboard, SpaceMouse, and MuJoCo viewer drag-and-drop;
  • multi-modal sensors: heterogeneous types of sensory signals, including low-level physical states, RGB cameras, depth maps, and proprioception;
  • human demonstrations: utilities for collecting human demonstrations, replaying demonstration datasets, and leveraging demonstration data for learning (see our sister project robomimic);
  • photorealistic rendering: integration with advanced graphics tools that provide real-time photorealistic renderings of simulated scenes, including support for NVIDIA Isaac Sim rendering.

Citation

Please cite robosuite if you use this framework in your publications:

@inproceedings{robosuite2020,
  title={robosuite: A Modular Simulation Framework and Benchmark for Robot Learning},
  author={Yuke Zhu and Josiah Wong and Ajay Mandlekar and Roberto Mart\'{i}n-Mart\'{i}n and Abhishek Joshi and Soroush Nasiriany and Yifeng Zhu and Kevin Lin},
  booktitle={arXiv preprint arXiv:2009.12293},
  year={2020}
}

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

robosuite-1.5.0.tar.gz (148.9 MB)

Uploaded Source

Built Distribution

robosuite-1.5.0-py3-none-any.whl (150.1 MB)

Uploaded Python 3

File details

Details for the file robosuite-1.5.0.tar.gz.

File metadata

  • Download URL: robosuite-1.5.0.tar.gz
  • Size: 148.9 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for robosuite-1.5.0.tar.gz
Algorithm Hash digest
SHA256 206d80a2005b10b2d7eedf5e80685db3f64706db9f15326f43ff9ec5e9a45e64
MD5 54c6b0db725c41100f973481e81d52a5
BLAKE2b-256 6e1032456212f5c8b389874fd5a6c4946908ea23d93a3d458a62b737e8799b89


File details

Details for the file robosuite-1.5.0-py3-none-any.whl.

File metadata

  • Download URL: robosuite-1.5.0-py3-none-any.whl
  • Size: 150.1 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for robosuite-1.5.0-py3-none-any.whl
Algorithm Hash digest
SHA256 8386e09e84146ec0f17276a12ea54548193e45759af03d356db19036d77ebba2
MD5 d5a77cf8cdf7b464a46a440ffe561bc2
BLAKE2b-256 efa9c048660c38d967989e8033d5aefdf7b4ff2ff42a5bf93da5497af1fc44d3

