Dynamic reinforcement learning benchmarks

Reinforcement learning benchmark problems set in dynamic environments.
This repository contains three open-source reinforcement learning environments in which the agent must adapt its behavior to, or make use of, dynamic elements of the environment in order to solve the task. The environments follow OpenAI's gym interface.
Installation
With Python 3.7 or higher, run
pip install dyn_rl_benchmarks
Usage
After importing the package dyn_rl_benchmarks, the environments

- Platforms-v1
- Drawbridge-v1
- Tennis2D-v1

are registered and can be instantiated via gym.make.
The following example runs Platforms-v1 with randomly sampled actions:
import gym
import dyn_rl_benchmarks  # registers the environments on import

env = gym.make("Platforms-v1")

obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # sample a random action
    obs, rew, done, info = env.step(action)
    env.render()
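The loop above follows the classic gym API (pre-0.26), in which reset() returns an observation and step() returns a (obs, reward, done, info) tuple. As a minimal, self-contained sketch of that protocol — using a hypothetical stub environment rather than the installed package — the control flow looks like this:

```python
class StubEnv:
    """Hypothetical stand-in illustrating the classic gym interface:
    reset() -> obs, step(action) -> (obs, reward, done, info)."""

    def __init__(self, horizon=5):
        self.horizon = horizon  # episode length
        self.t = 0

    def reset(self):
        # Start a new episode and return the initial observation.
        self.t = 0
        return 0.0

    def step(self, action):
        # Advance one timestep; signal episode end via `done`.
        self.t += 1
        done = self.t >= self.horizon
        return float(self.t), 1.0, done, {}


env = StubEnv()
obs = env.reset()
done = False
total = 0.0
while not done:
    obs, rew, done, info = env.step(None)
    total += rew
print(total)  # 5.0 for a 5-step episode with reward 1.0 per step
```

The same loop structure works unchanged with the real environments, with env.action_space.sample() supplying the action.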
How to cite
@article{gurtler2021hierarchical,
  title={Hierarchical Reinforcement Learning with Timed Subgoals},
  author={G{\"u}rtler, Nico and B{\"u}chler, Dieter and Martius, Georg},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}