= torchtest

A tiny test suite for PyTorch-based machine learning models, inspired by https://github.com/Thenerdstation/mltest/blob/master/mltest/mltest.py[mltest].
Chase Roberts lists four basic tests in his https://medium.com/@keeper6928/mltest-automatically-test-neural-network-models-in-one-function-call-eb6f1fa5019d[Medium post] about mltest.
torchtest is mostly a PyTorch port of mltest (which was written for TensorFlow).

== Installation

[source, bash]
----
pip install --upgrade torchtest
----

== Tests


[source, python]
----
# imports shared by all the examples below
import torch
import torch.nn as nn
import torch.nn.functional as F
----


=== Variables Change

[source, python]
----
from torchtest import assert_vars_change

inputs = torch.randn(20, 20)
targets = torch.randint(0, 2, (20,)).long()
batch = [inputs, targets]
model = nn.Linear(20, 2)

# what are the variables?
print('Our list of parameters:', [np[0] for np in model.named_parameters()])

# do they change after a training step?
# let's run a train step and see
assert_vars_change(
    model=model,
    loss_fn=F.cross_entropy,
    optim=torch.optim.Adam(model.parameters()),
    batch=batch)
----
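
The check is not specific to a single `nn.Linear`: `assert_vars_change` runs one training step and asserts that every trainable parameter moved. A minimal sketch on a slightly bigger model (the two-layer `mlp` below is hypothetical, not part of torchtest):

[source, python]
----
# hedged sketch: the same check on a small two-layer model
mlp = nn.Sequential(
    nn.Linear(20, 16),
    nn.ReLU(),
    nn.Linear(16, 2))

# all four parameter tensors (two weights, two biases) should change
assert_vars_change(
    model=mlp,
    loss_fn=F.cross_entropy,
    optim=torch.optim.Adam(mlp.parameters()),
    batch=batch)
----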

[source, python]
----
""" FAILURE """
# let's try to break this, so the test fails
params_to_train = [np[1] for np in model.named_parameters() if np[0] != 'bias']
# run test now
assert_vars_change(
    model=model,
    loss_fn=F.cross_entropy,
    optim=torch.optim.Adam(params_to_train),
    batch=batch)

# YES! the test fails, because bias did not change
----
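
In an automated test file you would assert that this failure is raised instead of letting it crash the run. A hedged sketch wrapping the call in `try`/`except` (torchtest raises a dedicated exception class for this, but catching `Exception` keeps the sketch independent of the exact name):

[source, python]
----
# hedged sketch: turn the expected failure into a passing check
try:
    assert_vars_change(
        model=model,
        loss_fn=F.cross_entropy,
        optim=torch.optim.Adam(params_to_train),
        batch=batch)
except Exception as e:
    print('failed as expected:', e)
----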


=== Variables Don't Change

[source, python]
----
from torchtest import assert_vars_same

# What if bias is not supposed to change, by design?
# test to see if bias remains the same after training
assert_vars_same(
    model=model,
    loss_fn=F.cross_entropy,
    optim=torch.optim.Adam(params_to_train),
    batch=batch,
    params=[('bias', model.bias)]
)
# it does? good. let's move on
----
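
Excluding a parameter from the optimizer, as above, is one way to keep it fixed; setting `requires_grad` to `False` is the more idiomatic PyTorch route. A minimal sketch (plain PyTorch, independent of torchtest):

[source, python]
----
# hedged sketch: freeze bias via requires_grad instead of
# filtering it out of the optimizer's parameter list
model.bias.requires_grad = False

assert_vars_same(
    model=model,
    loss_fn=F.cross_entropy,
    optim=torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad]),
    batch=batch,
    params=[('bias', model.bias)]
)

# unfreeze so the examples below behave as written
model.bias.requires_grad = True
----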

=== Output Range

[source, python]
----
from torchtest import test_suite

# NOTE : bias is fixed (not trainable)
optim = torch.optim.Adam(params_to_train)
loss_fn = F.cross_entropy

test_suite(model, loss_fn, optim, batch,
    output_range=(-2, 2),
    test_output_range=True
)

# seems to work
----
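
`test_output_range` checks that the model's output for `batch` stays inside `output_range`. A model whose outputs are bounded by construction, e.g. squashed through `tanh`, passes for the matching range. A minimal sketch (the `BoundedNet` module below is hypothetical):

[source, python]
----
# hedged sketch: a model whose outputs are bounded by construction
class BoundedNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(20, 2)

    def forward(self, x):
        # tanh squashes every output into (-1, 1)
        return torch.tanh(self.fc(x))

bounded = BoundedNet()
test_suite(bounded, F.cross_entropy,
    torch.optim.Adam(bounded.parameters()), batch,
    output_range=(-1, 1),
    test_output_range=True
)
----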

[source, python]
----
""" FAILURE """
# let's tweak the model to fail the test
model.bias = nn.Parameter(2 + torch.randn(2))

test_suite(
    model,
    loss_fn, optim, batch,
    output_range=(-1, 1),
    test_output_range=True
)

# as expected, it fails; yay!
----

=== NaN Tensors

[source, python]
----
""" FAILURE """
model.bias = nn.Parameter(float('NaN') * torch.randn(2))

test_suite(
    model,
    loss_fn, optim, batch,
    test_nan_vals=True
)
----
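
The same condition is easy to check by hand, which helps when narrowing down which tensor went bad. A minimal sketch using plain `torch.isnan`:

[source, python]
----
# hedged sketch: locate NaN parameters manually
for name, param in model.named_parameters():
    if torch.isnan(param).any():
        print(name, 'contains NaN values')
----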

=== Inf Tensors

[source, python]
----
""" FAILURE """
model.bias = nn.Parameter(float('Inf') * torch.randn(2))

test_suite(
    model,
    loss_fn, optim, batch,
    test_inf_vals=True
)
----
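
The individual switches compose, so a single `test_suite` call can run several checks in one training step. A hedged sketch reusing the names defined above (the bias is restored to finite values first, since the previous examples deliberately corrupted it; the loose output range is an assumption, as raw logits are unbounded in general):

[source, python]
----
# hedged sketch: restore a finite bias, then run the checks together
model.bias = nn.Parameter(torch.randn(2))

test_suite(
    model,
    loss_fn, optim, batch,
    output_range=(-10, 10),
    test_output_range=True,
    test_nan_vals=True,
    test_inf_vals=True
)
----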

