Project description
LoggerML - Rich machine learning logger in the console
Log your Machine Learning training in the console in a beautiful way using rich✨ with useful information but with minimal code.
Documentation here
Installation
In a new virtual environment, simply install the package from PyPI:
```bash
pip install loggerml
```
This package is supported on Linux, macOS and Windows. It is also supported on Jupyter Notebooks.
Quick start
Minimal usage
Integrate the LogML logger into your training loops! For instance, with 4 epochs and 20 batches per epoch:
```python
import time

from logml import Logger

logger = Logger(n_epochs=4, n_batches=20)
for _ in range(4):
    for _ in logger.tqdm(range(20)):
        time.sleep(0.1)  # Simulate a training step
        # Log whatever you want (int, float, str, bool):
        logger.log({
            'loss': 0.54321256,
            'accuracy': 0.85244777,
            'loss name': 'MSE',
            'improve baseline': True,
        })
```
Yields:
Note that the expected remaining time is displayed both for the overall training and for the current epoch. The logger can also average the logged values over an epoch or over the full training.
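The per-epoch averaging can be pictured as a simple running mean over the batches seen so far. The `RunningMean` helper below is a hypothetical sketch of what a key listed for averaging might do internally, not part of the logml API:

```python
class RunningMean:
    """Hypothetical sketch of per-epoch value averaging."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def update(self, value):
        # Accumulate and return the mean over the epoch so far
        self.total += value
        self.count += 1
        return self.total / self.count

    def reset(self):
        # Would be called at the start of each new epoch
        self.total, self.count = 0.0, 0


mean_loss = RunningMean()
mean_loss.update(1.0)
print(mean_loss.update(3.0))  # mean of [1.0, 3.0] -> 2.0
```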
Save the logs
On Linux you can use `tee` to save the logs to a file while still displaying them in the console. However, you need `unbuffer` to keep the styling:

```bash
unbuffer python main.py --color=auto | tee output.log
```
See here for details.
Advanced usage
Now you can add a validation logger, customize the logger with your own styles and colors, average some values over each epoch, add a dynamic message at each batch, update the display only every few batches, and more!
At initialization you can set a default configuration for the logger; it is overridden by any configuration passed to the `log` method.
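The precedence between the two levels can be pictured as a dict merge where per-call keys win. This is an illustration of the rule, not the actual logml implementation:

```python
# Hypothetical illustration of the precedence rule: defaults set at
# Logger(...) time are overridden by kwargs passed to log().
init_styles = {'loss': 'yellow', 'accuracy': 'yellow'}  # from Logger(styles=...)
call_styles = {'loss': 'italic red'}                    # from logger.log(styles=...)

# Later keys win, so per-call styles override the defaults
effective = {**init_styles, **call_styles}
print(effective)  # {'loss': 'italic red', 'accuracy': 'yellow'}
```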
An example with more features:
```python
train_logger = Logger(
    n_epochs=2,
    n_batches=20,
    log_interval=2,
    name='Training',
    name_style='dark_orange',
    styles='yellow',  # Default style for all values
    sizes={'accuracy': 4},  # Only 4 characters for 'accuracy'
    average=['loss'],  # 'loss' will be averaged over the current epoch
    bold_keys=True,  # Bold the keys
)
val_logger = Logger(
    n_epochs=2,
    n_batches=10,
    name='Validation',
    name_style='cyan',
    styles='blue',
    bold_keys=True,
    show_time=False,  # Remove the time bar
)
for _ in range(2):
    train_logger.new_epoch()  # Manually declare a new epoch
    for _ in range(20):
        train_logger.new_batch()  # Manually declare a new batch
        time.sleep(0.1)
        # Overwrite the default style for 'loss' and add a message
        train_logger.log(
            {'loss': 0.54321256, 'accuracy': 85.244777},
            styles={'loss': 'italic red'},
            message="Training is going well?\nYes!",
        )
    val_logger.new_epoch()
    for _ in range(10):
        val_logger.new_batch()
        time.sleep(0.1)
        val_logger.log({'val loss': 0.65422135, 'val accuracy': 81.2658775})
val_logger.detach()  # End the live display to print something else after
```
Yields:
Don't know the number of batches in advance?
If you don't know the number of batches in advance, you can initialize the logger with `n_batches=None`. Only the available information will be displayed. For instance, with the configuration of the first example:

The progress bar is replaced by a cyclic animation. The remaining-time estimates are not known during the first epoch but are estimated from the second epoch onward.
Note that if you use `Logger.tqdm(dataset)` and the dataset has a length, the number of batches will automatically be set to the length of the dataset.
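This length detection can be sketched as a plain `len()` probe: sized containers report their length, while pure iterators and generators do not. The helper below is illustrative, not the actual logml internals:

```python
def infer_n_batches(dataset):
    # Map-style datasets, lists and ranges expose their size via len();
    # generators and other iterators raise TypeError instead.
    try:
        return len(dataset)
    except TypeError:
        return None  # Equivalent to passing n_batches=None


print(infer_n_batches(range(20)))            # 20
print(infer_n_batches(x for x in range(5)))  # None
```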
How to contribute
For development, install the package in editable mode along with the dev requirements:

```bash
pip install -e .
pip install -r requirements-dev.txt
```
Everyone can contribute to LogML, and we value everyone’s contributions. Please see our contributing guidelines for more information 🤗
Todo
To do:

Done:

- Allow multiple logs on the same batch
- Finalize tests for 1.0.0 major release
- Add docs sections: comparison with tqdm and how to use mean_vals (with exp tracker)
- Use regex for `styles`, `sizes` and `average` keys
- Be compatible with notebooks
- Get back the cursor when interrupting the training
- `logger.tqdm()` feature (used like `tqdm.tqdm`)
- Doc with Sphinx
- Be compatible with Windows and macOS
- Manage a validation loop (then multiple loggers)
- Add color customization for message, epoch/batch number and time
License
Copyright (C) 2023 Valentin Goldité
This program is free software: you can redistribute it and/or modify it under the terms of the MIT License. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
This project is free to use for COMMERCIAL USE, MODIFICATION, DISTRIBUTION and PRIVATE USE as long as the original license is included, as well as this copyright notice at the top of the modified files.
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
File details
Details for the file `loggerml-1.1.3.tar.gz`.
File metadata
- Download URL: loggerml-1.1.3.tar.gz
- Upload date:
- Size: 838.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `43cb8eaa6e7db073845f9379c8f110535af97545f6326ea7484aba307ba03b56` |
| MD5 | `b460b83983d8fb14d511de6283197325` |
| BLAKE2b-256 | `c0c683256f965cbb8fa26b87004d2dbe835dff7ee3797ef4e21f5ad2cc1c4285` |
File details
Details for the file `loggerml-1.1.3-py3-none-any.whl`.
File metadata
- Download URL: loggerml-1.1.3-py3-none-any.whl
- Upload date:
- Size: 12.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.12.1
File hashes
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | `c8ffaee467bef9f70bb530f2159fdb335b8f12bcdd0f4e3477367153d662f595` |
| MD5 | `7ecc13e039d4320220c60a9cdb5cdb29` |
| BLAKE2b-256 | `03d95cf756dc446da8ae1e8af4610d40db70c99c67931615d75608992ce134ac` |