A progress bar that aggregates the values of each iteration.

dlprog

Deep Learning Progress

A Python library for progress bars that can aggregate the value of each iteration.
It helps you track the loss of each epoch in deep learning or machine learning training.

Installation

pip install dlprog

General Usage

Setup

from dlprog import Progress
prog = Progress()

Example

import random
import time
n_epochs = 3
n_iter = 10

prog.start(n_epochs=n_epochs, n_iter=n_iter, label='value') # Initialize start time and epoch.
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value) # Update progress bar and aggregate value.

Output

1/3: ######################################## 100% [00:00:01.05] value: 0.45692 
2/3: ######################################## 100% [00:00:01.05] value: 0.48990 
3/3: ######################################## 100% [00:00:01.06] value: 0.56601 

Get each epoch's value

>>> prog.values
[0.4569237062691406,
 0.4898950231979676,
 0.5660061074197436]
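Since prog.values is a plain Python list with one aggregated value per epoch, ordinary list operations apply. A minimal sketch, using the sample values from the output above, that finds the epoch with the lowest aggregated value:

```python
# One aggregated value per epoch, copied from the sample output above.
values = [0.4569237062691406, 0.4898950231979676, 0.5660061074197436]

# Index of the smallest value, converted to a 1-indexed epoch number.
best_epoch = min(range(len(values)), key=values.__getitem__) + 1
print(best_epoch)  # 1
```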

In machine learning training

Setup.
The train_progress function is a shortcut for the Progress class. It returns a progress bar suited to machine learning training.

from dlprog import train_progress
prog = train_progress()

Example: training a deep learning model with PyTorch.

n_epochs = 3
n_iter = len(dataloader)

prog.start(n_epochs=n_epochs, n_iter=n_iter)
for _ in range(n_epochs):
    for x, label in dataloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())

Output

1/3: ######################################## 100% [00:00:03.08] loss: 0.34099 
2/3: ######################################## 100% [00:00:03.12] loss: 0.15259 
3/3: ######################################## 100% [00:00:03.14] loss: 0.10684 

If you want the aggregated value to be an exact weighted mean that accounts for batch size:

prog.update(loss.item(), weight=len(x))
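The effect of weight is an ordinary weighted mean: each update contributes loss * weight to the numerator and weight to the denominator, so smaller batches count for less. A minimal sketch of that arithmetic, independent of dlprog, with made-up batch losses:

```python
# (mean loss, batch size) per batch; the last batch is smaller.
batches = [(0.50, 32), (0.40, 32), (0.10, 8)]

# Unweighted mean treats every batch equally.
unweighted = sum(loss for loss, _ in batches) / len(batches)

# Weighted mean scales each loss by its batch size, as
# prog.update(loss.item(), weight=len(x)) does.
weighted = sum(loss * n for loss, n in batches) / sum(n for _, n in batches)

print(round(unweighted, 5))  # 0.33333
print(round(weighted, 5))    # 0.41111
```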

Advanced usage

Advanced arguments, functions, and more.
See the API Reference for further details.

leave_freq

Argument that controls how often a finished progress bar is left on screen.

n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, leave_freq=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)

Output

 4/12: ######################################## 100% [00:00:01.06] value: 0.34203 
 8/12: ######################################## 100% [00:00:01.05] value: 0.47886 
12/12: ######################################## 100% [00:00:01.05] value: 0.40241 

unit

Argument that treats multiple epochs as a single aggregation unit.

n_epochs = 12
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, unit=4)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value = random.random()
        prog.update(value)

Output

  1-4/12: ######################################## 100% [00:00:04.21] value: 0.49179 
  5-8/12: ######################################## 100% [00:00:04.20] value: 0.51518 
 9-12/12: ######################################## 100% [00:00:04.18] value: 0.54546 
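Conceptually, unit=4 pools the updates of four consecutive epochs into one bar, so 12 epochs of 10 iterations yield three bars, each averaging 40 updates. A plain-Python sketch of that chunked averaging (the values are made up; this is not dlprog's internals):

```python
n_epochs, n_iter, unit = 12, 10, 4
values = [0.5] * (n_epochs * n_iter)  # one value per update() call

# Each bar averages unit * n_iter consecutive updates.
per_bar = unit * n_iter
bars = [sum(values[i:i + per_bar]) / per_bar
        for i in range(0, len(values), per_bar)]

print(len(bars))  # 3
print(bars[0])    # 0.5
```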

Add note

You can add a note to the progress bar.

n_iter = 10
prog.start(n_iter=n_iter, note='This is a note')
for _ in range(n_iter):
    time.sleep(0.1)
    value = random.random()
    prog.update(value)

Output

1: ######################################## 100% [00:00:01.05] 0.58703, This is a note 

You can also pass a note to update() via its note argument.
In addition, with defer=True you can append a note at the end of an epoch using memo():

n_epochs = 3
prog.start(
    n_epochs=n_epochs,
    n_iter=len(trainloader),
    label='train_loss',
    defer=True,
    width=20,
)
for _ in range(n_epochs):
    for x, label in trainloader:
        optimizer.zero_grad()
        y = model(x)
        loss = criterion(y, label)
        loss.backward()
        optimizer.step()
        prog.update(loss.item())
    test_loss = eval_model(model)
    prog.memo(f'test_loss: {test_loss:.5f}')

Output

1/3: #################### 100% [00:00:02.83] train_loss: 0.34094, test_loss: 0.18194 
2/3: #################### 100% [00:00:02.70] train_loss: 0.15433, test_loss: 0.12987 
3/3: #################### 100% [00:00:02.79] train_loss: 0.10651, test_loss: 0.09783 

Multiple values

If you want to aggregate multiple values, set n_values and input values as a list.

n_epochs = 3
n_iter = 10
prog.start(n_epochs=n_epochs, n_iter=n_iter, n_values=2)
for _ in range(n_epochs):
    for _ in range(n_iter):
        time.sleep(0.1)
        value1 = random.random()
        value2 = random.random() * 10
        prog.update([value1, value2])

Output

1/3: ######################################## 100% [00:00:01.05] 0.47956, 4.96049 
2/3: ######################################## 100% [00:00:01.05] 0.30275, 4.86003 
3/3: ######################################## 100% [00:00:01.05] 0.43296, 3.31025 
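Each index is aggregated independently across updates. The following plain-Python sketch (illustrative only, not dlprog's actual implementation) mimics what update([value1, value2]) accumulates:

```python
# Hypothetical aggregator mimicking dlprog's per-index running mean.
def make_aggregator(n_values):
    sums = [0.0] * n_values
    count = 0

    def update(values):
        nonlocal count
        for i, v in enumerate(values):
            sums[i] += v
        count += 1

    def means():
        return [s / count for s in sums]

    return update, means

update, means = make_aggregator(2)
update([0.2, 2.0])
update([0.4, 4.0])
print([round(m, 6) for m in means()])  # [0.3, 3.0]
```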

You can pass multiple labels as a list instead of n_values.

prog.start(n_iter=n_iter, label=['value1', 'value2'])

Default attributes

A Progress object keeps its constructor arguments as default attributes.
These attributes are used when not specified in start().

Attributes specified in start() take precedence while that run is in progress (until the next start() or reset()).

If the one required attribute (n_iter) has already been specified, start() can be skipped.

Version History

1.0.0 (2023-07-13)

  • Add Progress class.
  • Add train_progress function.

1.1.0 (2023-07-13)

  • Add values attribute.
  • Add leave_freq argument.
  • Add unit argument.

1.2.0 (2023-09-24)

  • Add note argument, memo() method, and defer argument.
  • Support multiple values.
  • Add round argument.
  • Support changing separator strings.
  • Support skipping start().
  • Write API Reference.
  • Other minor adjustments.

1.2.1 (2023-09-25)

  • Support note=None in memo().
  • Change timing of note reset from epoch_reset to bar_reset.

1.2.2 (2023-09-25)

  • Fix bug where note did not default to None in memo().

1.2.3 (2023-11-28)

  • Fix bug where the label argument was not available when with_test=True in train_progress().

1.2.4 (2023-11-29, Latest)

  • Fix bug where the width argument was not available when with_test=True in train_progress().
