
A minimalist package for logging best values of metrics when training models with PyTorch

Project description

torch_logger 🔥


This minimalist package logs the best values of performance metrics during the training of PyTorch models. It automatically records the best value seen so far for each tracked metric, so results can be analyzed directly downstream (e.g. in wandb) without post-processing the raw logged values to find the overall best values and their corresponding steps.

Usage:

>>> from torch_logger import BestValueLogger
>>> bv_log = BestValueLogger(
        {'val_loss': False, 'val_roc': True} # <-- provide flag if larger is better
    )

Log values after each eval step:

>>> # ... training ...
>>> bv_log([val_loss, val_roc], step=0)
>>> # ... training ...
>>> bv_log([val_loss, val_roc], step=1)
>>> # ... training ...
>>> bv_log([val_loss, val_roc], step=2)

Inspect the logger:

>>> bv_log

::BestValueLogger::
Tracking the best values of the following metrics:
{
    "val_loss": false,
    "val_roc": true
}
(key: metric, value: bool if larger is better)
Best values and steps:
{
    "best_val_loss_value": 0.05,
    "best_val_loss_step": 2,
    "best_val_roc_value": 0.8,
    "best_val_roc_step": 1
}

Update your experiment logger (e.g. wandb) with best_values at the end of training:

>>> wandb.log(bv_log.best_values)
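Since best_values is a plain dict of scalars (as shown in the inspection above), it is not tied to wandb; any experiment logger that accepts a flat dict works the same way. A minimal sketch, with a plain dict standing in for the logger's run summary (the best_values literal below just copies the inspection output for illustration):

```python
# Stand-in for bv_log.best_values; in real code this comes from the logger,
# not built by hand (values copied from the inspection output above):
best_values = {
    "best_val_loss_value": 0.05,
    "best_val_loss_step": 2,
    "best_val_roc_value": 0.8,
    "best_val_roc_step": 1,
}

# wandb.log(best_values) would record these in one call; here a plain dict
# stands in for the experiment logger's summary:
run_summary = {}
run_summary.update(best_values)
```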

Logging values without steps

If you only wish to track the best values, but not the steps at which they occurred, disable step logging:

>>> bvl = BestValueLogger({'val_loss': False, 'val_roc': True}, log_step=False)

Populate logger with metrics:

>>> bvl([0.2, 0.8], step=1)
>>> bvl([0.2, 0.9], step=2)

Inspect:

>>> bvl
::BestValueLogger::
Tracking the best values of the following metrics:
{
    "val_loss": false,
    "val_roc": true
}
(key: metric, value: bool if larger is better)
Best values:
{
    "best_val_loss_value": 0.2,
    "best_val_roc_value": 0.9
}
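For intuition, the bookkeeping behind the examples above can be sketched roughly as follows. This is an illustration of the idea only, not the package's actual source; the class name and internals here are hypothetical:

```python
# Minimal sketch of best-value tracking with a larger-is-better flag per metric.
class MiniBestValueLogger:
    def __init__(self, metrics, log_step=True):
        # metrics: {metric_name: larger_is_better}
        self.metrics = metrics
        self.log_step = log_step
        self.best_values = {}

    def __call__(self, values, step=None):
        # values are passed in the same order as the metrics dict.
        for (name, larger_is_better), value in zip(self.metrics.items(), values):
            key = f"best_{name}_value"
            best = self.best_values.get(key)
            improved = (
                best is None
                or (larger_is_better and value > best)
                or (not larger_is_better and value < best)
            )
            if improved:
                self.best_values[key] = value
                if self.log_step:
                    self.best_values[f"best_{name}_step"] = step


bv = MiniBestValueLogger({"val_loss": False, "val_roc": True})
bv([0.10, 0.70], step=0)
bv([0.07, 0.80], step=1)
bv([0.05, 0.75], step=2)
# val_loss keeps improving (smaller is better) through step 2;
# val_roc peaks at 0.8 at step 1.
```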

Project details


Download files

Download the file for your platform.

Source Distribution

torch_logger-0.1.1.tar.gz (2.9 kB)

Uploaded Source

Built Distribution


torch_logger-0.1.1-py3-none-any.whl (3.9 kB)

Uploaded Python 3

File details

Details for the file torch_logger-0.1.1.tar.gz.

File metadata

  • Download URL: torch_logger-0.1.1.tar.gz
  • Size: 2.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.10.3 Darwin/20.6.0

File hashes

Hashes for torch_logger-0.1.1.tar.gz
Algorithm Hash digest
SHA256 27688fd98ec07a5d7dddc3ef4bb2040fdf3ef2524d1ddc39b9c038169f1bffc8
MD5 59e292fab150eff7458a561b2e93e778
BLAKE2b-256 fb4f6e4e6c34d953c490d5b695a3cf81aaafdf5b410f4d8bc2982a2f2d915427


File details

Details for the file torch_logger-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: torch_logger-0.1.1-py3-none-any.whl
  • Size: 3.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.10.3 Darwin/20.6.0

File hashes

Hashes for torch_logger-0.1.1-py3-none-any.whl
Algorithm Hash digest
SHA256 4b703f71fd7f8f08d79c3b3df3e29a2687131184e592fbd1a93c73500579971e
MD5 e12f778f3f4e91904b04b102d9dc0194
BLAKE2b-256 f72c1131a25c7eabd0151d44a548ab03560599a58ac8ce562689aabbe2cb7155

