HyperFetch

A tool to optimize and fetch hyperparameters for your reinforcement learning application.

HyperFetch is a tool consisting of:

  • Website for fetching hyperparameters that are tuned by others
  • Pip-module for tuning hyperparameters

The intention of HyperFetch is to:

  • Make recreation of existing projects easier within the reinforcement learning research community.
  • Allow beginners to train and implement their own reinforcement learning models more easily by abstracting away the advanced tuning step.

The tool is expected to help decrease the CO2 emissions associated with tuning hyperparameters when training RL models.

This is expected to be achieved by posting tuned algorithm x environment combinations to the website such that:

  • Developers/Students can access hyperparameters that have already been optimally tuned, instead of having to tune them themselves.
  • Researchers can filter by project on the website and access the hyperparameters they wish to recreate/replicate for their own research.

The persistence endpoints are exposed to the user through this package. To access/fetch hyperparameters optimized by other RL practitioners, have a look at the HyperFetch website.


Prerequisites

  • Box2D-py
  • swig

Links

Repository: HyperFetch Github
Documentation: HyperFetch Website

Using the pip module

To use the pip module, do the following:

  1. Create a virtual environment in your favorite IDE. The virtual environment must be of the type virtualenv (not venv).

Install virtualenv if you haven't already:

    pip install virtualenv

Create a virtual environment:

    virtualenv [some_name]

Activate the virtualenv this way if using Windows:

   # In cmd.exe
   [some_name]\Scripts\activate.bat
   # In PowerShell
   [some_name]\Scripts\Activate.ps1

Activate the virtualenv this way if using Linux/MacOS:

    source [some_name]/bin/activate
  2. Install the pip module.

     pip install hyperfetch


Example 1: tuning + posting using HyperFetch

Here is a quick example of how to tune and run PPO in the LunarLander-v2 environment inside your new or existing project:

Just a reminder:

The pip package must be installed before this can be done. You do not need to get the front- or backend up and running in order to install and use the pip package.
For details, see using the pip module.

1. Create configuration YAML file (minimal example)

# Required (example values)
alg: ppo
env: LunarLander-v2
project_name: some_project
git_link: github.com/user/some_project

# Some other useful parameters
sampler: tpe
tuner: median
n_trials: 20
log_folder: logs
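
The sampler and tuner values appear to correspond to Optuna concepts (an assumption based on the terminology): tpe to the TPE sampler and median to the median pruner. In plain Optuna, the equivalent objects would look like this sketch:

    # Sketch of the Optuna objects the config values appear to map to
    # (an assumption; HyperFetch would construct these internally).
    import optuna

    study = optuna.create_study(
        sampler=optuna.samplers.TPESampler(),
        pruner=optuna.pruners.MedianPruner(),
        direction="maximize",  # RL tuning typically maximizes episode reward
    )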

2. Tune using a Python file or the command line

Python file:

from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Writes each trial's best hyperparameters to log folder
tuning.tune(config_path)

Command line:

If you are in the same directory as the config file and it is called "config.yml":

  tune config.yml

Enjoy your hyperparameters!
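
If you train with stable-baselines3 (an assumption; HyperFetch's algorithm names such as ppo match its conventions), plugging the tuned values into a model might look like the following minimal sketch. The parameter values below are placeholders for whatever ends up in your log folder:

    # Hypothetical follow-up, not part of HyperFetch itself.
    # Assumes stable-baselines3 is installed and LunarLander-v2 is available.
    from stable_baselines3 import PPO

    model = PPO(
        "MlpPolicy",
        "LunarLander-v2",
        learning_rate=3e-4,  # placeholder: use the tuned value from your logs
        gamma=0.99,          # placeholder: use the tuned value from your logs
        verbose=1,
    )
    model.learn(total_timesteps=100_000)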

Example 2: Posting hyperparameters that are not tuned by HyperFetch

Just a reminder:

The pip package must be installed before this can be done. You do not need to get the front- or backend up and running in order to install and use the pip package.
For details, see using the pip module.

1. Create configuration YAML file

# Required (example values)
alg: dqn
env: LunarLander-v2
project_name: some_project
git_link: github.com/user/some_project
hyperparameters: # These depend on the choice of algorithm
  batch_size: 256
  buffer_size: 50000
  exploration_final_eps: 0.10717928118310233
  exploration_fraction: 0.3318973226098944
  gamma: 0.9
  learning_rate: 0.0002126832542803243
  learning_starts: 10000
  net_arch: medium
  subsample_steps: 4
  target_update_interval: 1000
  train_freq: 8
  
# Not required (but appreciated)
CO2_emissions: 0.78 #kgs
energy_consumed: 3.27 #kWh
cpu_model: 12th Gen Intel(R) Core(TM) i5-12500H
gpu_model: NVIDIA GeForce RTX 3070
total_time: 0:04:16.842800 # H:MM:SS.microseconds
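
One way to obtain the optional emission fields above is to measure them during training with a tracker such as codecarbon (an assumption; HyperFetch does not require any particular measurement tool):

    # Sketch: measuring the optional CO2/energy fields with codecarbon.
    # Install first with: pip install codecarbon
    from codecarbon import EmissionsTracker

    tracker = EmissionsTracker()
    tracker.start()
    # ... train your model here ...
    emissions_kg = tracker.stop()  # total emissions in kg CO2-eq
    print(f"CO2_emissions: {emissions_kg:.2f}")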

2. Save/post using a Python file or the command line

Python file:

from hyperfetch import tuning

# Path to your YAML config file 
config_path = "../some_folder/config_name.yml"

# Saves/posts the hyperparameters in the config file
tuning.save(config_path)

Command line:

If you are in the same directory as the config file and it is called "config.yml":

  save config.yml

Getting the website up and running

Installation backend

Make sure you have

  • Pip version 23.0.1 or higher
  • Python 3.10
  • virtualenv (not venv)

Clone this repository and set up the environment:

  1. Open Git Bash.

  2. Change the current working directory to the location where you want the cloned directory.

  3. Paste this snippet:

     git clone https://github.com/YOUR-USERNAME/YOUR-REPOSITORY
    
  4. Install virtualenv if you haven't already:

     pip install virtualenv
    
  5. Create a virtual environment:

     virtualenv [some_name]
    

    Activate the virtualenv this way if using Windows:

    # In cmd.exe
    [some_name]\Scripts\activate.bat
    # In PowerShell
    [some_name]\Scripts\Activate.ps1

    Activate the virtualenv this way if using Linux/MacOS:

     source [some_name]/bin/activate
    
  6. Press Enter to create your local clone. You should see output like:

     Cloning into 'hyperFetch'...
     remote: Enumerating objects: 466, done.
     remote: Counting objects: 100% (466/466), done.
     remote: Compressing objects: 100% (238/238), done.
     remote: Total 466 (delta 221), reused 438 (delta 200), pack-reused 0
     Receiving objects: 100% (466/466), 4.17 MiB | 10.29 MiB/s, done.
     Resolving deltas: 100% (221/221), done.
    
  7. You may now change directory by typing into the terminal:

     cd hyperfetch
    
  8. Then, install the dependencies into your virtual environment:

      pip install -r requirements.txt
    

Start up backend

After cloning and installing, you can finally start up the server!

      uvicorn main:app --reload   
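
By default, uvicorn serves on http://127.0.0.1:8000. If the backend is built with FastAPI (an assumption based on the uvicorn main:app setup), the interactive API documentation should then be available at:

    http://127.0.0.1:8000/docs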

Installation frontend

The frontend branch is inside the same project. However, because the frontend branch (frontend) and the backend branch (master) must run at the same time to serve the website, the project must be cloned twice into two different local repositories.

  1. Follow steps 3-6 in Installation backend. This includes:

    • Move into another working directory
    • Clone the project
    • Create a new virtualenv
    • Activate the virtualenv
  2. The frontend branch does not yet exist locally and must be fetched from the remote. In the terminal, type:

    git switch frontend
    
  3. Enter the correct folder:

    cd src
    
  4. Install dependencies. This will create a node_modules folder in your local repository.

    npm install
    

Start up frontend

  1. To serve the website (dev mode), run:

    npm run dev
    
  2. Click the link that appears in the terminal, or open your browser of choice and go to:

    http://localhost:5173/
    
  3. Good luck!
