MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research
MiniHack is a sandbox framework for easily designing rich and diverse environments for Reinforcement Learning (RL). Based on the game of NetHack, arguably the hardest grid-based game in the world, MiniHack uses the NetHack Learning Environment (NLE) to communicate with the game and provide a convenient interface for custom RL testbeds.
MiniHack already comes with a large suite of challenging tasks, but it is primarily built for easily designing new ones. The motivation behind MiniHack is to enable RL experiments in a controlled setting while allowing the complexity of the tasks to be scaled up incrementally.
To this end, MiniHack leverages the description files of NetHack. The description files (or des-files) are human-readable specifications of levels: distributions of grid layouts together with monsters, objects on the floor, dungeon features, etc. The des-files can be compiled into binary using the NetHack level compiler, and MiniHack maps them to Gym environments. We refer users to our brief overview, detailed tutorial, or interactive notebook for further information on des-files.
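For illustration, here is a minimal sketch of building a level programmatically and wrapping it in a Gym environment. It assumes the LevelGenerator helper and the MiniHack-Navigation-Custom-v0 environment described in the tutorial; the method names follow the documented API but should be verified against your installed version.
import gym
import minihack
from minihack import LevelGenerator

# Build a small 5x5 room and populate it (API per the MiniHack tutorial).
lvl_gen = LevelGenerator(w=5, h=5)
lvl_gen.add_monster("kobold")     # a monster at a random position
lvl_gen.add_object("apple", "%")  # an apple on the floor

# The generated des-file string is mapped to a Gym environment.
env = gym.make("MiniHack-Navigation-Custom-v0", des_file=lvl_gen.get_des())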
Our documentation will walk you through everything you need to know about MiniHack, step-by-step, including information on how to get started, configure environments or design new ones, train baseline agents, and much more.
Installation
MiniHack is available on PyPI and can be installed as follows:
pip install minihack
We advise using a conda environment for this:
conda create -n minihack python=3.8
conda activate minihack
pip install minihack
NOTE: NLE requires cmake>=3.15 to be installed when building the package. See here for instructions on installing it on macOS and Ubuntu 18.04. Windows users should use Docker.
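If you are working inside the conda environment created above, one convenient option (an assumption on our part, not the officially prescribed route) is to install a recent CMake directly into the environment:
conda install -c conda-forge cmake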
NOTE: Baseline agents have separate installation instructions. See here for more details.
Extending MiniHack
If you wish to extend MiniHack, please install the package as follows:
git clone https://github.com/facebookresearch/minihack
cd minihack
pip install -e ".[dev]"
pre-commit install
Docker
We provide several Dockerfiles for building images with MiniHack pre-installed. Please follow the instructions described here.
Trying out MiniHack
MiniHack uses the popular Gym interface for interactions between the agent and the environment. A pre-registered MiniHack environment can be used as follows:
import gym
import minihack
env = gym.make("MiniHack-River-v0")
env.reset() # each reset generates a new environment instance
env.step(1) # move agent '@' north
env.render()
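The step method follows the standard (pre-0.26) Gym API, returning an observation, a reward, a done flag, and an info dictionary, so a complete episode loop looks like the sketch below; the random policy is purely illustrative.
import gym
import minihack

env = gym.make("MiniHack-River-v0")
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # stand-in for a learned policy
    obs, reward, done, info = env.step(action)
env.render()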
To see the list of all MiniHack environments, run:
python -m minihack.scripts.env_list
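Since every MiniHack environment is registered with Gym, the same list can also be obtained programmatically. This sketch assumes the pre-0.26 gym registry API used throughout these examples:
import gym
import minihack  # importing registers the MiniHack environments

# Collect and print the IDs of all registered MiniHack environments.
minihack_ids = sorted(spec.id for spec in gym.envs.registry.all() if "MiniHack" in spec.id)
print("\n".join(minihack_ids))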
The following scripts allow you to play MiniHack environments with a keyboard:
# Play a MiniHack environment in the terminal as a human
python -m minihack.scripts.play --env MiniHack-River-v0
# Use a random agent
python -m minihack.scripts.play --env MiniHack-River-v0 --mode random
# Play a MiniHack environment with a graphical user interface (GUI)
python -m minihack.scripts.play_gui --env MiniHack-River-v0
NOTE: If the package has been properly installed, the scripts above can also be run using the mh-envs, mh-play, and mh-guiplay commands.
Baseline Agents
In order to get started with MiniHack environments, we provide a variety of baseline agent integrations.
TorchBeast
A TorchBeast agent is bundled in minihack.agent.polybeast together with a simple model to provide a starting point for experiments. To install and train this agent, first install TorchBeast by following the instructions here, then use the following commands:
pip install ".[polybeast]"
python -m minihack.agent.polybeast.polyhydra env=MiniHack-Room-5x5-v0 total_steps=100000
More information on running our TorchBeast agents, and instructions on how to reproduce the results of the paper, can be found here. The learning curves for all of our polybeast experiments can be accessed in our Weights&Biases repository.
RLlib
An RLlib agent is provided in minihack.agent.rllib, with a model similar to that of the TorchBeast agent. It can be used to try out a variety of different RL algorithms. To install and train an RLlib agent, use the following commands:
pip install ".[rllib]"
python -m minihack.agent.rllib.train algo=dqn env=MiniHack-Room-5x5-v0 total_steps=1000000
More information on running RLlib agents can be found here.
Unsupervised Environment Design
MiniHack also enables research in Unsupervised Environment Design, whereby an adaptive task distribution is learned during training by dynamically adjusting free parameters of the task MDP.
Check out the ucl-dark/paired repository for replicating the examples from the paper using the PAIRED algorithm.
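As a toy illustration of the idea (a naive hand-written curriculum, not the PAIRED algorithm), the sketch below treats room size as the free parameter of the task MDP and switches between two pre-registered Room environments; the 15x15 variant is assumed from the task suite.
import gym
import minihack

# Two registered tasks that differ in a single free parameter: room size.
CURRICULUM = ["MiniHack-Room-5x5-v0", "MiniHack-Room-15x15-v0"]

def sample_env(success_rate):
    # Naive adaptive rule: graduate to the larger room once the small one
    # is solved reliably. UED methods such as PAIRED instead learn the
    # task distribution with an adversary.
    env_id = CURRICULUM[1] if success_rate > 0.9 else CURRICULUM[0]
    return gym.make(env_id)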
Citation
If you use MiniHack in your work, please cite:
@inproceedings{samvelyan2021minihack,
  title={MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research},
  author={Mikayel Samvelyan and Robert Kirk and Vitaly Kurin and Jack Parker-Holder and Minqi Jiang and Eric Hambro and Fabio Petroni and Heinrich K{\"u}ttler and Edward Grefenstette and Tim Rockt{\"a}schel},
  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},
  year={2021},
  url={https://openreview.net/forum?id=skFwlyefkWJ}
}
If you use our example ported environments, please cite the original papers: MiniGrid (see license, bib), Boxoban (see license, bib).
Contributions and Maintenance
We welcome contributions to MiniHack. If you are interested in contributing, please see this document. Our maintenance plan can be found here.
Papers using MiniHack
- Powers et al. CORA: Benchmarks, Baselines, and a Platform for Continual Reinforcement Learning Agents (CMU, Georgia Tech, AI2, August 2021)
- Samvelyan et al. MiniHack The Planet (FAIR, UCL, Oxford, NeurIPS 2021)
Open a pull request to add papers.