HyperFetch. A tool to optimize and fetch hyperparameters for your reinforcement learning application. Currently available on Linux and macOS.

Project description

HyperFetch

Prerequisites

HyperFetch has been tested on Linux and macOS with these prerequisites:

  • pip==22.2.2
  • setuptools==64.0.3
  • swig==4.0.2
  • box2d-py==2.3.8

HyperFetch is a tool consisting of:

  • A website for fetching hyperparameters that are tuned by others
  • This pip-module for tuning and saving hyperparameters yourself

The intention of HyperFetch is to:

  • Make recreation of existing projects easier within the reinforcement learning research community.
  • Allow beginners to train and implement their own reinforcement learning models more easily by abstracting away the advanced tuning step.

The tool is also expected to help decrease the CO2 emissions associated with tuning hyperparameters when training RL models.

By posting tuned [algorithm x environment] combinations to the website, it is expected that:

  • Developers/Students can access hyperparameters that have already been optimally tuned instead of having to tune them themselves.
  • Researchers can filter by project on the website and access hyperparameters they wish to recreate/replicate for their own research.
  • Transparency related to emissions will become more mainstream within the field.

Links

  • Repository: HyperFetch Github
  • Documentation: Configuration docs
  • Website: hyperfetch.online

Using the pip module

To use the pip module, do the following:

  1. Create a virtual environment in your favorite IDE.

    Install virtualenv if you haven't already:

        pip install virtualenv
    

    Create a virtual environment:

        virtualenv [some_name]
    

    Activate the virtual environment this way (Linux/macOS):

         source [some_name]/bin/activate
    
  2. Install the pip module:

     pip install hyperfetch
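
     To sanity-check the installation, you can try importing the tuning module used in the examples below:

         python -c "from hyperfetch import tuning"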
    

Example 1: Tuning + posting using HyperFetch

Here is a quick example of how to tune and run PPO in the Pendulum-v1 environment inside your new or existing project:

Just a reminder:

The pip package must be installed before this can be done. For details, see using the pip module.

1. Create a configuration file using YAML (minimal example)

If you are unsure of which configuration values to use, see the config docs.

# Required (example values)
alg: ppo
env: Pendulum-v1
project_name: some_project
git_link: github.com/user/some_project

# Some other useful parameters
sampler: tpe
tuner: median
n_trials: 20
log_folder: logs

2. Tune using a Python file or the command line

Python file:

from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Writes each trial's best hyperparameters to the log folder
tuning.tune(config_path)

Command line:

If you are in the same directory as the config file and it is called "config.yml":

  tune config.yml

If the config file is in another directory and it is called "config.yml":

  tune path/to/config.yml 

Enjoy your hyperparameters!
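
Once you have the tuned values, here is a minimal sketch of plugging them into a training run. It assumes your training stack is stable-baselines3 (an assumption here, not a HyperFetch requirement), and the hyperparameter values below are placeholders for the ones written to your log folder:

from stable_baselines3 import PPO

# Placeholder hyperparameters: substitute the tuned values that
# HyperFetch wrote to your log folder.
model = PPO(
    "MlpPolicy",
    "Pendulum-v1",
    learning_rate=3e-4,
    gamma=0.9,
    batch_size=64,
    verbose=1,
)
model.learn(total_timesteps=100_000)
model.save("ppo_pendulum")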

Example 2: Posting hyperparameters that are not tuned by HyperFetch

Just a reminder:

The pip package must be installed before this can be done. For details, see using the pip module.

1. Create a configuration YAML file

If you are unsure of which configuration values to use, see the config docs.

# Required (example values)
alg: dqn
env: Pendulum-v1
project_name: some_project
git_link: github.com/user/some_project
hyperparameters: # These depend on the choice of algorithm
  batch_size: 256
  buffer_size: 50000
  exploration_final_eps: 0.10717928118310233
  exploration_fraction: 0.3318973226098944
  gamma: 0.9
  learning_rate: 0.0002126832542803243
  learning_starts: 10000
  net_arch: medium
  subsample_steps: 4
  target_update_interval: 1000
  train_freq: 8
  
# Not required (but appreciated)
CO2_emissions: 0.78 # kg
energy_consumed: 3.27 # kWh
cpu_model: 12th Gen Intel(R) Core(TM) i5-12500H
gpu_model: NVIDIA GeForce RTX 3070
total_time: 0:04:16.842800 # H:M:S.microseconds
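
If you want to fill in the optional emission fields above, one option (an assumption here, not something HyperFetch requires) is the codecarbon package, which measures CO2-equivalent emissions while your training code runs:

from codecarbon import EmissionsTracker

# Minimal sketch using codecarbon (pip install codecarbon).
tracker = EmissionsTracker()
tracker.start()
# ... run your training/tuning workload here ...
emissions_kg = tracker.stop()  # CO2-equivalent emissions in kg
print(f"CO2_emissions: {emissions_kg:.2f} kg")
# Energy consumed (kWh) is recorded in codecarbon's emissions.csv output.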

2. Save/post using a Python file or the command line

Python file:

from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Saves/posts the hyperparameters in the config file
tuning.save(config_path)

Command line:

If you are in the same directory as the config file and it is called "config.yml":

  save config.yml
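
If the config file is in another directory, the path form presumably mirrors the tune command above (an assumption; both commands take the config file as their argument):

  save path/to/config.yml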

