Jumanji: Industry-Driven Hardware-Accelerated RL Environments
Installation | Quickstart | Examples | Environments | Citation | See Also | Reference Docs
Welcome to the Jungle! 🌴
Jumanji is a suite of Reinforcement Learning (RL) environments written in JAX, providing clean, hardware-accelerated environments for industry-driven research.
Jumanji is helping pioneer a new wave of hardware-accelerated research and development in the field of RL. Jumanji's high-speed environments enable faster iteration and larger-scale experimentation while simultaneously reducing complexity. Originating in the Research Team at InstaDeep, Jumanji is now developed jointly with the open-source community. To join us in these efforts, reach out, raise issues and read our contribution guidelines (or just star 🌟 to stay up to date with the latest developments)!
Goals 🚀
- Provide a simple, well-tested API for JAX-based environments.
- Make research in RL more accessible.
- Facilitate RL research on industry-relevant problems, helping close the gap between research and industrial applications.
Overview 🦜
- 🥑 Environment API: core abstractions for JAX-based environments and their variations, e.g. multi-agent or turn-by-turn.
- 🕹️ Environment Suite: a list of RL environments ranging from simple games to complex NP-hard problems.
- 🍬 Wrappers: easily connect to your favourite RL frameworks and libraries such as Acme, Stable Baselines3, RLlib, OpenAI Gym, and DeepMind-Env.
- 🎓 Educational Examples and User Guides: guides to facilitate Jumanji's adoption and highlight the added value of JAX-based environments.
Installation 🎬
You can install the latest release of Jumanji from PyPI:
pip install jumanji
or you can install the latest development version directly from GitHub:
pip install git+https://github.com/instadeepai/jumanji.git
Jumanji has been tested on Python 3.7, 3.8 and 3.9. Note that because the installation of JAX differs depending on your hardware accelerator, we advise users to explicitly install the correct JAX version (see the official installation guide).
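Once JAX is installed, a quick sanity check is to list the devices JAX can see; a minimal sketch:
import jax
# Lists the devices available to JAX (CPU, GPU or TPU, depending on your JAX build).
print(jax.devices())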
Rendering: Matplotlib is used for rendering the BinPack and Snake environments. To visualize the environments, you will need a GUI backend. For example, on Linux, you can install Tk via apt-get install python3-tk, or using conda: conda install tk. Check out Matplotlib backends for a list of backends you could use.
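If rendering fails with a non-interactive backend, you can select a GUI backend explicitly before calling render. A minimal sketch, assuming Tk is installed:
import matplotlib
# Select the Tk GUI backend; any interactive Matplotlib backend works.
matplotlib.use("TkAgg")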
Quickstart ⚡
Practitioners will find Jumanji's interface familiar, as it combines the widely adopted OpenAI Gym and DeepMind Environment interfaces. From OpenAI Gym, we adopted the idea of a registry and the render method, while our TimeStep structure is inspired by dm_env.TimeStep.
Basic Usage 🧑‍💻
import jax
import jumanji
# Instantiate a Jumanji environment using the registry
env = jumanji.make('Snake-6x6-v0')
# Reset your (jit-able) environment
key = jax.random.PRNGKey(0)
state, timestep = jax.jit(env.reset)(key)
# (Optional) Render the env state
env.render(state)
# Interact with the (jit-able) environment
action = env.action_spec().generate_value() # Action selection (dummy value here)
state, timestep = jax.jit(env.step)(state, action) # Take a step and observe the next state and time step
where:
- state represents the internal state of an environment: it contains all the information required to take a step when executing an action. This should not be confused with the observation contained in the timestep, which is the information perceived by the agent.
- timestep is a dataclass containing step_type, reward, discount, observation, and extras. This structure is similar to dm_env.TimeStep, except for the extras field, which was added to allow users to retrieve information that is neither part of the agent's observation nor part of the environment's internal state.
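Putting these pieces together, here is a minimal sketch of a full episode loop; it assumes TimeStep exposes dm_env-style helpers such as last() to flag terminal steps:
import jax
import jumanji

env = jumanji.make("Snake-6x6-v0")
step_fn = jax.jit(env.step)  # compile once, reuse every step

state, timestep = jax.jit(env.reset)(jax.random.PRNGKey(0))
episode_return = 0.0
while not timestep.last():  # step_type marks the episode boundary, as in dm_env
    action = env.action_spec().generate_value()  # dummy action selection
    state, timestep = step_fn(state, action)
    episode_return += timestep.reward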
Advanced Usage 🧑‍🔬
Being written in JAX, Jumanji's environments benefit from many of its features, including automatic vectorization/parallelization (jax.vmap, jax.pmap) and JIT-compilation (jax.jit), which can be composed arbitrarily. We provide an example of this below, where we use jax.vmap and jax.lax.scan to generate a batch of rollouts in the Snake environment.
import jax
import jumanji
from jumanji.wrappers import AutoResetWrapper
env = jumanji.make("Snake-6x6-v0") # Creates the snake environment.
env = AutoResetWrapper(env) # Automatically reset the environment when an episode terminates.
batch_size, rollout_length = 7, 5
num_actions = env.action_spec().num_values
random_key = jax.random.PRNGKey(0)
key1, key2 = jax.random.split(random_key)
def step_fn(state, key):
    # Sample a random action and step the environment.
    action = jax.random.randint(key=key, minval=0, maxval=num_actions, shape=())
    new_state, timestep = env.step(state, action)
    return new_state, timestep

def run_n_step(state, key, n):
    # Unroll the environment for n steps with jax.lax.scan, stacking the timesteps.
    random_keys = jax.random.split(key, n)
    state, rollout = jax.lax.scan(step_fn, state, random_keys)
    return rollout
# Instantiate a batch of environment states
keys = jax.random.split(key1, batch_size)
state, timestep = jax.vmap(env.reset)(keys)
# Collect a batch of rollouts
keys = jax.random.split(key2, batch_size)
rollout = jax.vmap(run_n_step, in_axes=(0, 0, None))(state, keys, rollout_length)
# Shape and type of given rollout:
# TimeStep(step_type=(7, 5), reward=(7, 5), discount=(7, 5), observation=(7, 5, 6, 6, 5), extras=None)
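Because jit and vmap compose, the whole batched rollout can itself be compiled; a minimal sketch, marking the rollout length as static since jax.lax.scan needs a fixed number of steps at trace time:
# Compile the vmapped rollout once; argument 2 (the rollout length) is static.
run_batch = jax.jit(jax.vmap(run_n_step, in_axes=(0, 0, None)), static_argnums=2)
rollout = run_batch(state, keys, rollout_length)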
Examples 🕹️
For more in-depth examples of using Jumanji environments, check out our Colab notebooks:
Example | Topic | Colab |
---|---|---|
Online Q-Learning | RL Training (Anakin) |
Environments 🌍
Jumanji implements different types of environments ranging from simple games to NP-hard problems, from single-agent to multi-agent and turn-by-turn games.
Environment | Category | Type | Source | Description |
---|---|---|---|---|
🐍 Snake | Game | Single-agent | code | doc |
4️⃣ Connect4 | Game | Turn-by-turn | code | doc |
📬 TSP (Travelling Salesman Problem) | Combinatorial | Single-agent | code | doc |
🎒 Knapsack | Combinatorial | Single-agent | code | doc |
🪢 Routing | Combinatorial | Multi-agent | code | doc |
📦 BinPack (3D BinPacking Problem) | Combinatorial | Single-agent | code | doc |
🚚 CVRP (Capacitated Vehicle Routing Problem) | Combinatorial | Single-agent | code | doc |
Registry and Versioning 📖
Similarly to OpenAI Gym, Jumanji keeps a strict versioning of its environments for reproducibility reasons. We maintain a registry of standard environments with their configuration. For each environment, a version suffix is appended, e.g. Snake-6x6-v0. When changes are made to environments that might impact learning results, the version number is incremented by one to prevent potential confusion. For a full list of registered versions of each environment, check out the documentation.
Contributing 🤝
Contributions are welcome! See our issue tracker for good first issues. Please read our contributing guidelines for details on how to submit pull requests, our Contributor License Agreement, and community guidelines.
Citing Jumanji ✏️
If you use Jumanji in your work, please cite the library using:
@software{jumanji2022github,
author = {Clément Bonnet and Donal Byrne and Victor Le and Laurence Midgley
and Daniel Luo and Cemlyn Waters and Sasha Abramowitz and Edan Toledo
and Cyprien Courtot and Matthew Morris and Daniel Furelos-Blanco
and Nathan Grinsztajn and Thomas D. Barrett and Alexandre Laterre},
title = {Jumanji: Industry-Driven Hardware-Accelerated RL Environments},
url = {https://github.com/instadeepai/jumanji},
version = {0.1.2},
year = {2022},
}
See Also
Other works have embraced the approach of writing RL environments in JAX. In particular, we suggest users check out the following sister repositories:
- 🦾 Brax is a differentiable physics engine that simulates environments made up of rigid bodies, joints, and actuators.
- 🏋️ Gymnax implements classic environments including classic control, bsuite, MinAtar and a collection of meta RL tasks.
- 🌳 EvoJAX provides tools to enable neuroevolution algorithms to work with neural networks running across multiple TPUs/GPUs.
- 🤖 QDax is a library to accelerate Quality-Diversity and neuroevolution algorithms through hardware accelerators and parallelization.
Acknowledgements 🙏
The development of this library was supported with Cloud TPUs from Google's TPU Research Cloud (TRC) 🌤.