distracting_control: variants of the DeepMind Control Suite with visual distractions
Project description
This is a packaged version of the distracting_control suite from Stone et al. We provide OpenAI Gym bindings to make the original code base easier to use.
Getting Started
```
pip install distracting_control
```
Then in your Python script:

```python
import gym

env = gym.make('gdc:Hopper-hop-easy-v1', from_pixels=True)
obs = env.reset()  # a rendered pixel observation; see figures/hopper_readme.png
```
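The environment id strings above encode the domain, task, and difficulty. As a minimal sketch of this naming convention (the `make_env_id` helper below is purely illustrative, not part of the package):

```python
def make_env_id(domain: str, task: str, difficulty: str, prefix: str = "gdc") -> str:
    """Build an id of the form '<prefix>:<Domain>-<task>-<difficulty>-v1'."""
    return f"{prefix}:{domain.capitalize()}-{task}-{difficulty}-v1"


# Examples matching the ids used in this README:
print(make_env_id("hopper", "hop", "easy"))          # gdc:Hopper-hop-easy-v1
print(make_env_id("walker", "walk", "easy",
                  prefix="distracting_control"))     # distracting_control:Walker-walk-easy-v1
```

Both the short `gdc:` prefix and the full `distracting_control:` module prefix appear in this README; either works as the gym entry point.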
Detailed API
Take a look at the test files in the https://github.com/geyang/distracting_control/blob/master/specs folder, and at the source code. DeepMind Control has a lot of low-level bindings buried in the source code.
```python
import gym


def test_max_episode_steps():
    env = gym.make('distracting_control:Walker-walk-easy-v1')
    assert env._max_episode_steps == 250


def test_flat_obs():
    env = gym.make('distracting_control:Walker-walk-easy-v1', frame_skip=4)
    env.env.env.env.observation_spec()
    assert env.reset().shape == (24,)


def test_frame_skip():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, frame_skip=8)
    assert env._max_episode_steps == 125


def test_channel_first():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, channels_first=True)
    assert env.reset().shape == (3, 84, 84)


def test_channel_last():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, frame_skip=8, channels_first=False)
    assert env._max_episode_steps == 125
    assert env.reset().shape == (84, 84, 3)
```
Important Changes from Stone et al.
[planned] remove tensorflow dependency
[planned] increase ground floor transparency in Hopper
Original README
distracting_control extends dm_control with static or dynamic visual distractions in the form of changing colors, backgrounds, and camera poses. Details and experimental results can be found in our paper.
Requirements and Installation
1. Clone this repository.
2. Run `sh run.sh`.
3. Follow the instructions and install dm_control. Make sure you set up your MuJoCo keys correctly.
4. Download the DAVIS 2017 dataset. Make sure to select the 2017 TrainVal - Images and Annotations (480p). The training images will be used as distracting backgrounds.
Instructions
You can run the distracting_control_demo to generate sample images of the different tasks at different difficulties:
```
python distracting_control_demo --davis_path=$HOME/DAVIS/JPEGImages/480p/ --output_dir=/tmp/distracting_control_demo
```
As seen in the demo, to create an instance of the environment you simply import the suite and call suite.load, specifying the dm_control domain and task, choosing a difficulty, and providing the dataset_path.
Note that the environment follows the dm_control environment API.
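The dm_control interaction loop differs from gym's: environments return TimeStep tuples rather than `(obs, reward, done, info)`. Since the real suite.load requires MuJoCo and the DAVIS images, here is a sketch of that loop against a stand-in environment (the `StubEnv` class is purely illustrative; the real environment comes from suite.load with the domain, task, difficulty, and dataset_path arguments described above):

```python
import random
from collections import namedtuple

# dm_control environments yield TimeStep tuples with these four fields.
TimeStep = namedtuple("TimeStep", ["step_type", "reward", "discount", "observation"])


class StubEnv:
    """Stand-in exposing the dm_control env interface: reset() and step(action)."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return TimeStep("FIRST", None, 1.0, [0.0, 0.0])

    def step(self, action):
        self.t += 1
        step_type = "LAST" if self.t >= self.horizon else "MID"
        return TimeStep(step_type, 0.1, 1.0, [0.0, 0.0])


# The same loop shape works for a real environment returned by suite.load(...):
env = StubEnv()
timestep = env.reset()
total_reward = 0.0
while timestep.step_type != "LAST":
    action = random.uniform(-1.0, 1.0)  # replace with a policy
    timestep = env.step(action)
    total_reward += timestep.reward
```

In the real API the step type is an enum (`dm_env.StepType`) rather than a string, but the episode structure — one FIRST step, MID steps, then a terminal LAST step — is the same.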
Paper
If you use this code, please cite the accompanying paper as:
```
@article{stone2021distracting,
  title={The Distracting Control Suite -- A Challenging Benchmark for Reinforcement Learning from Pixels},
  author={Austin Stone and Oscar Ramirez and Kurt Konolige and Rico Jonschkowski},
  year={2021},
  journal={arXiv preprint arXiv:2101.02722},
}
```
Disclaimer
This is not an official Google product.