
The distracting_control suite contains variants of the DeepMind Control Suite with visual distractions

Project description

This is a packaged version of the distracting_control suite from Stone et al. We provide OpenAI Gym bindings to make the original codebase easier to use.

Getting Started

pip install distracting_control

Then in your python script:

import gym

env = gym.make('gdc:Hopper-hop-easy-v1', from_pixels=True)
obs = env.reset()  # obs is a rendered image; see figures/hopper_readme.png in the repo
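A minimal interaction loop looks like the following. This is a sketch assuming the pre-Gymnasium gym step API (step returns a 4-tuple) that the package's gym bindings appear to follow:

```python
import gym

# Pixel-based environment via the 'gdc:' registry prefix, as above.
env = gym.make('gdc:Hopper-hop-easy-v1', from_pixels=True)

obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random policy, for illustration only
    obs, reward, done, info = env.step(action)
    total_reward += reward
print(f"episode return: {total_reward:.3f}")
```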

Detailed API

Take a look at the test files in the specs folder (https://github.com/geyang/distracting_control/blob/master/specs), and at the source code. DeepMind Control has a lot of low-level bindings buried in the source code.

def test_max_episode_steps():
    env = gym.make('distracting_control:Walker-walk-easy-v1')
    assert env._max_episode_steps == 250


def test_flat_obs():
    env = gym.make('distracting_control:Walker-walk-easy-v1', frame_skip=4)
    env.env.env.env.observation_spec()
    assert env.reset().shape == (24,)


def test_frame_skip():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, frame_skip=8)
    assert env._max_episode_steps == 125


def test_channel_first():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, channels_first=True)
    assert env.reset().shape == (3, 84, 84)


def test_channel_last():
    env = gym.make('distracting_control:Walker-walk-easy-v1', from_pixels=True, frame_skip=8, channels_first=False)
    assert env._max_episode_steps == 125
    assert env.reset().shape == (84, 84, 3)
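The frame-skip assertions above are consistent with each episode spanning a fixed budget of 1000 simulator steps, so that the wrapped environment exposes 1000 // frame_skip agent steps (the default frame skip appears to be 4, giving 250). A pure-Python sketch of that relationship, assuming the 1000-step convention:

```python
# Episode-length bookkeeping implied by the tests above: dm_control
# episodes run for a fixed number of low-level simulator steps
# (assumed 1000 for these tasks), and frame_skip repeats each action,
# so the agent sees 1000 // frame_skip steps per episode.
SIMULATOR_STEPS = 1000  # assumed per-episode budget for these tasks

def max_episode_steps(frame_skip: int) -> int:
    """Agent-visible episode length for a given frame skip."""
    return SIMULATOR_STEPS // frame_skip

print(max_episode_steps(4))  # 250, matching the frame_skip=4 assertion
print(max_episode_steps(8))  # 125, matching the frame_skip=8 assertion
```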

Important Changes from Stone et al.

  1. [planned] remove tensorflow dependency

  2. [planned] increase ground floor transparency in Hopper

Original README

distracting_control extends dm_control with static or dynamic visual distractions in the form of changing colors, backgrounds, and camera poses. Details and experimental results can be found in our paper.

Requirements and Installation

  • Clone this repository

  • sh run.sh

  • Follow the instructions and install dm_control. Make sure you set up your MuJoCo keys correctly.

  • Download the DAVIS 2017 dataset. Make sure to select the 2017 TrainVal - Images and Annotations (480p). The training images will be used as distracting backgrounds.

Instructions

  • You can run the distracting_control_demo to generate sample images of the different tasks at different difficulties:

    python distracting_control_demo --davis_path=$HOME/DAVIS/JPEGImages/480p/ \
        --output_dir=/tmp/distracting_control_demo
  • As seen in the demo, to generate an instance of the environment you simply import the suite and call suite.load, specifying the dm_control domain and task, then choosing a difficulty and providing the dataset_path.

  • Note the environment follows the dm_control environment APIs.
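Concretely, the dm_control-style entry point described above looks roughly like the sketch below. The keyword names (difficulty, background_dataset_path) are assumptions inferred from the demo's flags and may differ slightly from the actual suite.load signature:

```python
from distracting_control import suite

# Load walker-walk with 'easy' distractions; the DAVIS training images
# (downloaded during installation) serve as distracting backgrounds.
env = suite.load(
    'walker', 'walk',
    difficulty='easy',
    background_dataset_path='$HOME/DAVIS/JPEGImages/480p/',
)

# dm_control API: reset()/step() return dm_env TimeStep namedtuples.
timestep = env.reset()
while not timestep.last():
    action = env.action_spec().generate_value()  # a fixed valid action
    timestep = env.step(action)
```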

Paper

If you use this code, please cite the accompanying paper as:

@article{stone2021distracting,
      title={The Distracting Control Suite -- A Challenging Benchmark for Reinforcement Learning from Pixels},
      author={Austin Stone and Oscar Ramirez and Kurt Konolige and Rico Jonschkowski},
      year={2021},
      journal={arXiv preprint arXiv:2101.02722},
}

Disclaimer

This is not an official Google product.
