
An experimental framework for running PyTorch experiments


## firelab (version 0.0.20)

### About

A framework for running DL experiments with pytorch. It provides the following useful stuff:

- parallel hyperparameter optimization
- starting/continuing your experiments with simple commands from a yml config file
- easier checkpoint saving, logging, and training visualization
- useful utils for HP tuning and for working with pytorch (look them up in

### Installation

`pip install firelab`

### Future plans

- [ ] Run in daemon.
- [ ] Implement `firelab ls` command
- [ ] Easier profiling (via contexts?)
- [ ] There are some interesting features in
- [ ] Add commit hash to summary
- [ ] Create a new branch/commit for each experiment?
- [ ] More meaningful error messages.
- [ ] Does the model release the GPU after training is finished (when we do not use HPO)?
- [ ] Proper handling of errors in HPO: should we fail on the first exception? Should we try/catch `result.get()` in the process pool?
- [x] Make trainers run without `config.firelab`; this makes it possible to run a trainer from Python
- [ ] Does `continue_from_iter` work?

### Useful commands

- `firelab ls` — lists all running experiments
- `firelab start` / `firelab stop` / `firelab pause` / `firelab continue` — starts/stops/pauses/continues experiments

### Useful classes

- `BaseTrainer` — controls the experiment: loads data, runs/stops training, performs logging, etc.
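As an illustration of the pattern such a trainer class encapsulates (a hypothetical sketch; the class and method names below are illustrative, not firelab's actual API), a trainer typically owns the config, the data, and the training loop:

```python
# Hypothetical sketch of the trainer pattern described above;
# names are illustrative, not firelab's actual API.

class SketchTrainer:
    def __init__(self, config):
        self.config = config          # hyperparameters from the yml config
        self.num_iters_done = 0
        self.losses = []

    def init_dataloaders(self):
        # A real trainer would build pytorch DataLoaders here.
        self.data = list(range(self.config["num_batches"]))

    def train_on_batch(self, batch):
        # Placeholder for a forward/backward pass; here we just record a value.
        self.losses.append(float(batch))

    def run(self):
        self.init_dataloaders()
        for batch in self.data:
            self.train_on_batch(batch)
            self.num_iters_done += 1


trainer = SketchTrainer({"num_batches": 3})
trainer.run()
print(trainer.num_iters_done)  # 3
```

Keeping the loop inside one object like this is what removes the per-experiment boilerplate: subclasses only override the batch step.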

Cool stuff firelab can do:

- Reduces the amount of boilerplate code you write for training/running experiments
- Keeps all experiment arguments and hyperparameters in expressive config files
- Visualizes your metrics with tensorboard through tensorboardX
- Saves checkpoints and logs with ease
- Fixes random seeds for you by default (in numpy, pytorch and random). Attention: if you use other libs with their own random generators, you should fix their seeds yourself (we recommend taking the seed from the hyperparams)
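The seed fixing mentioned in the last point can be sketched as follows (a minimal illustration, not firelab's internal code; the torch import is guarded because pytorch may not be installed):

```python
import random

import numpy as np


def fix_random_seeds(seed: int) -> None:
    """Seed the common random generators so runs are reproducible."""
    random.seed(seed)
    np.random.seed(seed)
    try:
        import torch  # seeded only if pytorch is available
        torch.manual_seed(seed)
    except ImportError:
        pass


fix_random_seeds(42)
a = (random.random(), float(np.random.rand()))
fix_random_seeds(42)
b = (random.random(), float(np.random.rand()))
print(a == b)  # True: same seed, same draws
```

Any library with its own generator (as the warning above notes) needs the same treatment with its own seeding call.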

### Usage

#### Configs

Besides your own config values, firelab adds its internal ones, which you can use or override as hyperparameters:

- name of the experiment
- random_seed

The experiment name determines where its config is located. Experiment names must be unique.
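A config file might look like the following (a hypothetical example: the file path and the `hp` grouping are assumptions, only `random_seed` and the hyperparameter idea come from the text above):

```yaml
# configs/my-experiment.yml — hypothetical layout
random_seed: 42        # firelab fixes random seeds from this value
hp:                    # your own hyperparameters
  lr: 0.001
  batch_size: 64
  num_epochs: 10
```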

### TODO

- Interactive config builder
- Clone experiment/config
- Add examples with several trainers in them
- Why do we pass both `exp_dir` and `exp_name` everywhere in We should care only about `exp_path`, I suppose?
- Looks like we do not need the duplicating logic of directory creation in anymore, since it is in `BaseTrainer`
- Rename `BaseTrainer` into `Trainer`?


Download files


Files for firelab, version 0.0.20:

| Filename | Size | File type | Python version |
| --- | --- | --- | --- |
| firelab-0.0.20-py3-none-any.whl | 20.3 kB | Wheel | py3 |
| firelab-0.0.20.tar.gz | 17.7 kB | Source | None |
