
A benchmark for hyperparameter optimization in reinforcement learning.

Project description



🦾 Automated Reinforcement Learning Benchmark

ARLBench is a benchmark for hyperparameter optimization (HPO) in reinforcement learning: evaluate your HPO methods quickly and on a representative set of environments! For more information, see our documentation. The dataset is available on HuggingFace.

Features

  • Lightning-fast JAX-based implementations of DQN, PPO, and SAC
  • Compatible with many different environment domains via Gymnax, XLand, and EnvPool
  • Representative benchmark set of HPO settings

Figure: ARLBench Subsets

Installation

There are currently two ways to install ARLBench. Whichever you choose, we recommend creating a virtual environment first:

conda create -n arlbench python=3.10
conda activate arlbench

The instructions below install the default version of ARLBench with the CPU version of JAX. If you want to run ARLBench on a GPU, we recommend checking the JAX installation guide to see how to install the correct version for your GPU setup before proceeding.
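For instance, on a CUDA 12 setup the extra step would typically be something like the following (a hypothetical example; defer to the JAX installation guide for the command matching your drivers and JAX version):

pip install -U "jax[cuda12]"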

PyPI

You can install ARLBench using `pip`:
pip install arlbench

If you want to use envpool environments (not currently supported on Mac!), instead choose:

pip install arlbench[envpool]
From source (GitHub)

First, you need to clone the ARLBench repository:
git clone git@github.com:automl/arlbench.git
cd arlbench

Then you can install the benchmark. For the base version, use:

make install

For the envpool functionality (not available on Mac!), instead use:

make install-envpool

[!CAUTION] Windows is currently not supported and not tested. We recommend using the Windows Subsystem for Linux (WSL) if you're on a Windows machine.

Quickstart

Here are the two ways you can use ARLBench: via the command line or as an environment. To see them in action, take a look at our examples.

Use the CLI

We provide a command line script for black-box configuration in ARLBench which also saves the results to a 'results' directory. To execute one run of DQN on CartPole, simply run:

python run_arlbench.py

You can use Hydra's command line syntax to override parts of the configuration, e.g. to switch the algorithm to PPO:

python run_arlbench.py algorithm=ppo

Or run multiple seeds one after another:

python run_arlbench.py -m autorl.seed=0,1,2,3,4

All tunable hyperparameters live in the 'hp_config' and all architecture settings in the 'nas_config', so to run a grid of different configurations for 5 seeds each, you can do this:

python run_arlbench.py -m autorl.seed=0,1,2,3,4 nas_config.hidden_size=8,16,32 hp_config.learning_rate=0.001,0.01

We recommend you create your own custom config files if using the CLI (for more information, check out Hydra's guide to config files). Our examples show how these can look, and a rough sketch follows below.
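As a rough sketch, a custom config could mirror the overrides above in YAML form (the key names are taken from the commands shown; the file name and overall schema are assumptions, not ARLBench's actual config layout):

# my_experiment.yaml -- hypothetical file name
algorithm: ppo          # which algorithm to run
autorl:
  seed: 0               # seed for the AutoRL loop
nas_config:
  hidden_size: 32       # architecture setting
hp_config:
  learning_rate: 0.001  # hyperparameter setting

You could then select such a file with Hydra's standard flags, e.g. `--config-name` and `--config-path`.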

Use the AutoRL environment

If you want fine-grained control over the ARLBench loop, e.g. for dynamic configuration or for learning based on the agent state, you should use the environment-like interface of ARLBench in your script.

To do so, import ARLBench and use the AutoRLEnv to run an RL agent:

from arlbench import AutoRLEnv

# Create the AutoRL environment (its behavior is set via the AutoRLEnv config)
env = AutoRLEnv()

# Reset initializes the underlying RL agent and returns the initial observation
obs, info = env.reset()

# An "action" is a hyperparameter configuration drawn from the config space
action = env.config_space.sample_configuration()

# One step trains the agent with that configuration and reports the objectives
obs, objectives, term, trunc, info = env.step(action)

Just like with a standard RL environment, you can call 'step' multiple times until termination (which you define via the AutoRLEnv's config). For all configuration options, check out our documentation.
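A minimal sketch of such a loop, assuming only the interface shown above (the uniform sampling is a stand-in for whatever configuration your HPO method would propose next):

from arlbench import AutoRLEnv

env = AutoRLEnv()
obs, info = env.reset()

# Reconfigure the agent every step until the env signals termination or
# truncation; replace the uniform sampling with your own HPO method.
term = trunc = False
while not (term or trunc):
    action = env.config_space.sample_configuration()
    obs, objectives, term, trunc, info = env.step(action)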

Cite Us

If you use ARLBench in your work, please cite us:

@misc{beckdierkes24,
  author = {J. Becktepe and J. Dierkes and C. Benjamins and D. Salinas and A. Mohan and R. Rajan and F. Hutter and H. Hoos and M. Lindauer and T. Eimer},
  title  = {ARLBench},
  year   = {2024},
  url    = {https://github.com/automl/arlbench},
}

