
Stanford University Repository for Reinforcement Algorithms

Project description

**`SURREAL <>`__**

| `About <#open-source-distributed-reinforcement-learning-framework>`__
| `Installation <#installation>`__
| `Benchmarking <#benchmarking>`__
| `Citation <#citation>`__

Open-Source Distributed Reinforcement Learning Framework

*Stanford Vision and Learning Lab*

`SURREAL <>`__ is a fully integrated
framework that runs state-of-the-art distributed reinforcement learning
(RL) algorithms.

- **Scalability**. RL algorithms are data hungry by nature. Even the
  simplest Atari games, like Breakout, typically require up to a
  billion frames to learn a good solution. To accelerate training
  significantly, SURREAL parallelizes environment simulation and
  learning. The system scales easily to thousands of CPUs and
  hundreds of GPUs.

- **Flexibility**. SURREAL unifies distributed on-policy and off-policy
  learning into a single algorithmic formulation. The key is to
  separate experience generation from learning. Parallel actors
  generate massive amounts of experience data, while a *single,
  centralized* learner performs model updates. Each actor interacts
  with the environment independently, which allows them to diversify
  exploration for hard long-horizon robotic tasks. They send their
  experiences to a centralized buffer, which can be instantiated as a
  FIFO queue in on-policy mode or as a replay memory in off-policy
  mode.


- **Reproducibility**. RL algorithms are notoriously hard to reproduce
[Henderson et al., 2017], due to multiple sources of variations like
algorithm implementation details, library dependencies, and hardware
types. We address this by providing an *end-to-end integrated
pipeline* that replicates our full cluster hardware and software
runtime setup.
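The unified on-policy/off-policy formulation above can be illustrated with a minimal buffer abstraction. This is a hedged sketch for exposition only: the class and method names (``ExperienceBuffer``, ``push``, ``sample``) are hypothetical and are not SURREAL's actual API.

.. code:: python

   import random
   from collections import deque

   class ExperienceBuffer:
       """Centralized experience buffer: a FIFO queue in on-policy mode,
       or a uniform replay memory in off-policy mode.

       Illustrative sketch only -- not SURREAL's actual implementation.
       """

       def __init__(self, mode="replay", capacity=10000):
           assert mode in ("fifo", "replay")
           self.mode = mode
           self.storage = deque(maxlen=capacity)

       def push(self, experience):
           # Parallel actors call this independently, e.g. with (s, a, r, s') tuples.
           self.storage.append(experience)

       def sample(self, batch_size):
           n = min(batch_size, len(self.storage))
           if self.mode == "fifo":
               # On-policy: consume the oldest experiences exactly once.
               return [self.storage.popleft() for _ in range(n)]
           # Off-policy: sample uniformly (without replacement) from replay memory;
           # sampled items stay in the buffer for reuse.
           return random.sample(self.storage, n)

Switching between the two training regimes then amounts to changing a single ``mode`` flag when the buffer is instantiated, which is the spirit of the unified formulation described above.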



Surreal algorithms can be deployed at various scales. They can run on a
single laptop to solve easier locomotion tasks, or on hundreds of
machines to solve complex manipulation tasks.

- `Surreal on your Laptop <docs/>`__
- `Surreal on Google Cloud Kubernetes Engine <docs/>`__
- `Customizing Surreal <docs/>`__
- `Documentation Index <docs/>`__
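At the single-laptop scale, the actor/learner separation can be sketched with plain Python threads standing in for distributed processes. This is a toy illustration under stated assumptions, not SURREAL's distributed runtime; the names ``actor``, ``learner``, and ``run_demo`` are hypothetical.

.. code:: python

   import queue
   import threading

   def actor(actor_id, experience_queue, steps):
       # Each actor interacts with its own environment copy independently
       # and streams experience to the centralized buffer.
       for t in range(steps):
           experience_queue.put((actor_id, t))  # stand-in for an (s, a, r, s') tuple
       experience_queue.put(None)  # sentinel: this actor is done

   def learner(experience_queue, num_actors):
       # A single, centralized learner drains the buffer and performs model updates.
       finished, consumed = 0, 0
       while finished < num_actors:
           item = experience_queue.get()
           if item is None:
               finished += 1
           else:
               consumed += 1  # stand-in for a gradient step on a batch
       return consumed

   def run_demo(num_actors=4, steps=100):
       q = queue.Queue()
       threads = [threading.Thread(target=actor, args=(i, q, steps))
                  for i in range(num_actors)]
       for t in threads:
           t.start()
       consumed = learner(q, num_actors)
       for t in threads:
           t.join()
       return consumed

Scaling up is then conceptually a matter of replacing the in-process queue and threads with networked processes spread across many machines, as the cluster deployment guides above describe.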


- Scalability of Surreal-PPO with up to 1024 actors on Surreal Robotics

.. figure:: .README_images/scalability-robotics.png

- Training curves of 16 actors on OpenAI Gym tasks for 3 hours,
  compared to other baselines.


Please cite our CoRL paper if you use this repository in your research:

::

   @inproceedings{corl2018surreal,
     title={SURREAL: Open-Source Reinforcement Learning Framework and Robot Manipulation Benchmark},
     author={Fan, Linxi and Zhu, Yuke and Zhu, Jiren and Liu, Zihua and Zeng, Orien and Gupta, Anchit and Creus-Costa, Joan and Savarese, Silvio and Fei-Fei, Li},
     booktitle={Conference on Robot Learning},
     year={2018}
   }


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Files for Surreal, version 0.2.1:

============================== ======== ========= ==============
Filename                       Size     File type Python version
============================== ======== ========= ==============
Surreal-0.2.1-py3-none-any.whl 144.1 kB Wheel     py3
Surreal-0.2.1.tar.gz           134.4 kB Source    None
============================== ======== ========= ==============
