"envpool"
EnvPool is a highly parallel reinforcement learning environment execution engine that significantly outperforms existing environment executors. With a design dedicated to the RL use case, it adopts a general asynchronous execution model implemented with a C++ thread pool for environment execution.
Here are EnvPool's several highlights:
- Compatible with OpenAI gym APIs and DeepMind dm_env APIs;
- Manage a pool of envs and interact with them through batched APIs by default;
- Synchronous execution API and asynchronous execution API;
- Easy C++ developer API to add new envs;
- 1 Million Atari frames per second simulation with 256 CPU cores, ~13x throughput of Python subprocess-based vector env;
- ~3x throughput of Python subprocess-based vector env on low resource setup like 12 CPU cores;
- Compared with existing GPU-based solutions (Brax / Isaac-gym), EnvPool is a general solution for parallelizing and speeding up all kinds of RL environments;
- Compatible with some existing RL libraries, e.g., Tianshou.
Installation
PyPI
EnvPool is currently hosted on PyPI. It requires Python >= 3.7.
You can simply install EnvPool with the following command:
$ pip install envpool
After installation, open a Python console and type
import envpool
print(envpool.__version__)
If no error occurs, you have successfully installed EnvPool.
From Source
Please refer to the guideline.
Documentation
The tutorials and API documentation are hosted on envpool.readthedocs.io.
The example scripts are under the examples/ folder.
Supported Environments
We are in the process of open-sourcing all available envs from our internal version; stay tuned. A short creation sketch follows the list below.
- Atari via ALE
- Single/multi-player ViZDoom
- Classic RL envs, including CartPole, MountainCar, ...
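As a quick, hedged sketch of how these env families are created (the Atari id "Pong-v5" is used elsewhere on this page; "CartPole-v1" is assumed here to follow the same naming scheme, so check the docs for the exact ids):
import envpool

# Atari via ALE, exposed through the gym-style API
atari_env = envpool.make("Pong-v5", env_type="gym", num_envs=8)

# classic control env; the id "CartPole-v1" is an assumption here
classic_env = envpool.make("CartPole-v1", env_type="gym", num_envs=8)

print(atari_env.observation_space)
print(classic_env.action_space)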
Benchmark Results
We perform our benchmarks with the ALE Atari environment (with environment wrappers) on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. Baselines include 1) a naive Python for-loop; 2) the most popular RL environment parallelization method, Python subprocess-based vector envs such as gym.vector_env; and 3) Sample Factory, to our knowledge the fastest RL environment executor prior to EnvPool.
We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines for different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second with 256 CPU cores, which is 13.3x the throughput of the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 2.8x that of gym.vector_env.
Our benchmark script is in examples/benchmark.py; a minimal FPS-measurement sketch in the same spirit appears after the table below. The detailed configurations of the 4 types of systems are:
- Personal laptop: 12-core Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
- TPU-VM: 96-core Intel(R) Xeon(R) CPU @ 2.00GHz
- Apollo: 96-core AMD EPYC 7352 24-Core Processor
- DGX-A100: 256-core AMD EPYC 7742 64-Core Processor
| Highest FPS | Laptop (12) | TPU-VM (96) | Apollo (96) | DGX-A100 (256) |
|---|---|---|---|---|
| For-loop | 4,876 | 3,817 | 4,053 | 4,336 |
| Subprocess | 18,249 | 42,885 | 19,560 | 79,509 |
| Sample Factory | 27,035 | 192,074 | 262,963 | 639,389 |
| EnvPool (sync) | 40,791 | 175,938 | 159,191 | 470,170 |
| EnvPool (async) | 50,513 | 352,243 | 410,941 | 845,537 |
| EnvPool (numa+async) | / | 367,799 | 458,414 | 1,060,371 |
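For intuition, here is a minimal FPS-measurement loop in the spirit of examples/benchmark.py; the real script's options, wrappers, and measurement details differ, so treat this only as a sketch that counts frames stepped per second with a fixed action:
import time
import numpy as np
import envpool

num_envs = 12  # e.g. the laptop setup above
env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs)
env.reset()
act = np.zeros(num_envs, dtype=int)

start, frames = time.time(), 0
for _ in range(1000):
    env.step(act)
    frames += num_envs  # each batched step advances every env by one frame
print("FPS:", frames / (time.time() - start))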
API Usage
The following content shows both synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.
Synchronous API
import envpool
import numpy as np
# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset() # should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)
obs, rew, done, info = env.step(act)
In synchronous mode, envpool closely resembles openai-gym/dm-env. It has reset and step functions with the same meaning. There is one exception though: in envpool, batch interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
The first dimension of action passed to the step function should be equal to num_envs.
act = np.zeros(100, dtype=int)
You don't need to manually reset an environment when its done is true; instead, all envs in envpool have auto-reset enabled by default.
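A small hedged sketch of what auto-reset means in practice: you just keep stepping the batch, and any env whose done flag was true comes back already reset on a subsequent step, so no per-env reset call is needed (see the docs for the exact reset semantics):
import numpy as np
import envpool

env = envpool.make("Pong-v5", env_type="gym", num_envs=4)
obs = env.reset()
act = np.zeros(4, dtype=int)
for _ in range(10000):
    obs, rew, done, info = env.step(act)
    # no manual reset here: envs that finished an episode (done == True)
    # are reset automatically, so obs always holds valid observations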
Asynchronous API
import envpool
import numpy as np

# make asynchronous
env = envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    # random actions drawn from the env's action space, one per env in this batch
    action = np.random.randint(env.action_space.n, size=len(env_id))
    env.send(action, env_id)
In asynchronous mode, the step function is split into two parts: the send/recv functions. send takes two arguments: a batch of actions and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs. (This is why it is called async mode.)
env.send(action, env_id)
To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whichever envs finish execution first get recv'ed first.
state = env.recv()
Besides num_envs, there is one more argument, batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size defines the number of envs involved each time we interact with the envpool. For example, with 64 envs executing in the envpool, send and recv each interact with a batch of 16 envs.
envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
There are other configurable arguments for envpool.make; please check out the envpool interface introduction.
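As a hedged example, seed and num_threads are assumed below to be among the supported keyword arguments; the interface documentation is the authoritative list:
import envpool

# assumed keyword arguments; consult the interface docs for the definitive set
env = envpool.make("Pong-v5", env_type="gym", num_envs=16, seed=42, num_threads=8)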
Contributing
EnvPool is still under development. More environments are going to be added, and we always welcome contributions to help make EnvPool better. If you would like to contribute, please check out our contribution guideline.
License
EnvPool is under the Apache-2.0 license.
Other third party source-code and data are under their corresponding licenses.
We do not include their source-code and data in this repo.
Citing EnvPool
If you find EnvPool useful, please cite it in your publications.
[Coming soon!]
Disclaimer
This is not an official Sea Limited or Garena Online Private Limited product.