Lockout: Sparse Regularization of Neural Networks

Lockout

Sparsity Inducing Regularization of Neural Networks

Install

pip install lockout [--upgrade]

Usage

PyTorch installation required.

1. Neural Network Architecture

To modify the architecture of the neural network, change any of the following:

  • The number of input features: n_features
  • The number of layers: len(layer_sizes)
  • The number of nodes in the i-th layer: layer_sizes[i]
from lockout.pytorch_utils import FCNN

n_features  = 100       
layer_sizes = [10, 1]   
model_init  = FCNN(n_features, layer_sizes)

2. Create PyTorch DataLoaders

It is assumed that the data has already been preprocessed and partitioned.

from lockout.pytorch_utils import make_DataLoaders

dl_train, dl_valid, dl_test = make_DataLoaders(xtrain, xvalid, xtest, ytrain, yvalid, ytest)
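The snippet above assumes the arrays xtrain, xvalid, xtest, ytrain, yvalid, ytest already exist. A minimal sketch of how such a partition might be produced (the synthetic data and 60/20/20 split ratios are illustrative assumptions, not requirements of the library):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic regression data: 1000 samples, 100 features
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 100)).astype(np.float32)
y = (X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=1000)).astype(np.float32)

# 60/20/20 train/validation/test split
xtrain, xtmp, ytrain, ytmp = train_test_split(X, y, test_size=0.4, random_state=0)
xvalid, xtest, yvalid, ytest = train_test_split(xtmp, ytmp, test_size=0.5, random_state=0)

# Standardize features using training-set statistics only
scaler = StandardScaler().fit(xtrain)
xtrain, xvalid, xtest = (scaler.transform(a) for a in (xtrain, xvalid, xtest))
```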

3. Unconstrained Training

Modify the following hyperparameters according to your particular problem:

  • lr: Learning rate
  • loss_type: Type of loss function
    • loss_type=1 (Mean Squared Error)
    • loss_type=2 (Mean Cross Entropy)
  • optim_id: Optimizer
    • optim_id = 1: Stochastic Gradient Descent
    • optim_id = 2: Adam
  • epochs: Maximum number of epochs during training
  • early_stopping: Number of epochs used in the convergence condition
  • tol_loss: Maximum change in the training loss function used in the convergence condition
  • reset_weights: Whether or not to reset the weights before training starts
from lockout import Lockout

lr = 1e-2
loss_type = 1
optim_id  = 1

# Instantiate Lockout
lockout_forward = Lockout(model_init, 
                          lr=lr, 
                          loss_type=loss_type, 
                          optim_id=optim_id)

# Train Neural Network Without Regularization
lockout_forward.train(dl_train, dl_valid, 
                      train_how="unconstrained",
                      epochs=10000,
                      early_stopping=20,
                      tol_loss=1e-6,
                      reset_weights=True)
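The early_stopping/tol_loss stopping rule can be read as: stop when the training loss has changed by no more than tol_loss over the last early_stopping epochs. A hypothetical sketch of that condition (the library's exact test may differ):

```python
def has_converged(loss_history, early_stopping=20, tol_loss=1e-6):
    """Return True when the training loss changed by less than tol_loss
    over the last `early_stopping` epochs."""
    if len(loss_history) < early_stopping + 1:
        return False  # not enough history yet
    window = loss_history[-(early_stopping + 1):]
    return max(window) - min(window) < tol_loss
```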

The model at the validation minimum and the unconstrained model can be retrieved and saved for further use.

from lockout.pytorch_utils import save_model

# Save Unconstrained Model
model_forward_unconstrained = lockout_forward.model_last
save_model(model_forward_unconstrained, 'model_forward_unconstrained.pth')

# Save Model At Validation Minimum
model_forward_best = lockout_forward.model_best_valid
save_model(model_forward_best, 'model_forward_best.pth')

Loss and accuracy curves can be retrieved for analysis or graphing. For regression problems, R2 is computed as the accuracy.

df0 = lockout_forward.path_data
df0.head()
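For reference, the R2 reported for regression is the coefficient of determination; a minimal sketch of that formula (dataset_r2 in the library may differ in implementation details):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    return 1.0 - ss_res / ss_tot
```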

import matplotlib.pyplot as plt
import numpy as np

fig, axes = plt.subplots(figsize=(9,6))
axes.plot(df0["iteration"], df0["train_loss"], label="Training", linewidth=4)
axes.plot(df0["iteration"], df0["valid_loss"], label="Validation", linewidth=4)
axes.legend(fontsize=16)
axes.set_xlabel("iteration", fontsize=16)
axes.set_ylabel("Loss Function", fontsize=16)
axes.tick_params(axis='both', which='major', labelsize=14)
axes.set_title("Unconstrained", fontsize=16)
axes.grid(True, zorder=2)
plt.show()

4. Lockout Training: Option 1

Within this option, the network is first trained until the regularization path is reached (Path 1). The constraint t0 is then iteratively decreased during training with a step size Δt0 inversely proportional to the number of epochs (Path 2). A small Δt0 is necessary to stay on the regularization path.

Modify the following hyperparameters according to your particular problem:

  • input_model: input model, either unconstrained or at validation minimum
  • regul_type: list of tuples (or dictionary) of the form [(layer_name, regul_id)] where:
    • layer_name: layer name in the input model (string)
    • regul_id = 1: L1 regularization
    • regul_id = 2: Log regularization (see get_constraint function)
  • regul_path: list of tuples (or dictionary) of the form [(layer_name, path_flg)] where:
    • path_flg = True: the constraint t0 will be iteratively decreased in this layer
    • path_flg = False: the constraint t0 will be kept constant in this layer
  • epochs: maximum number of epochs used to bring the network to the regularization path (Path 1)
  • epochs2: maximum number of epochs used while training decreasing t0 (Path 2)
from lockout import Lockout

regul_type = [('linear_layers.0.weight', 1)]
regul_path = [('linear_layers.0.weight', True)]

# Instantiate Lockout
lockout_option1 = Lockout(lockout_forward.model_best_valid,
                          lr=1e-2, 
                          loss_type=1,
                          regul_type=regul_type,
                          regul_path=regul_path)

# Train Neural Network With Lockout
lockout_option1.train(dl_train, dl_valid, 
                      train_how="decrease_t0", 
                      epochs=5000,
                      epochs2=20000,
                      early_stopping=20, 
                      tol_loss=1e-5)
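The decreasing-t0 schedule described above can be sketched as follows, under the assumption that Δt0 = t0_start / epochs2 (an illustrative reading of "inversely proportional to the number of epochs", not necessarily the library's exact rule):

```python
def t0_schedule(t0_start, epochs2):
    """Linearly decrease t0 from t0_start toward 0 over epochs2 steps,
    with constant step size dt0 = t0_start / epochs2."""
    dt0 = t0_start / epochs2
    return [t0_start - k * dt0 for k in range(epochs2 + 1)]
```

A larger epochs2 therefore yields a smaller Δt0 and a finer traversal of the regularization path.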

The model at the validation minimum can be retrieved and saved for further use.

from lockout.pytorch_utils import save_model

# Save Model At Validation Minimum
model_lockout_option1 = lockout_option1.model_best_valid
save_model(model_lockout_option1, 'model_lockout_option1.pth')

Path data can be retrieved for analysis or graphing.

df1 = lockout_option1.path_data
df1.head()

Test accuracy can be computed using the models previously trained.

import torch
from lockout.pytorch_utils import dataset_r2

device = torch.device('cpu')
r2_test_forward, _  = dataset_r2(dl_test, model_forward_best, device)
r2_test_lockout1, _ = dataset_r2(dl_test, model_lockout_option1, device)
print("Test R2 (unconstrained) = {:.3f}".format(r2_test_forward))
print("Test R2 (lockout)       = {:.3f}".format(r2_test_lockout1))

Feature importance can be computed and graphed.

import matplotlib.pyplot as plt
import numpy as np
from lockout.pytorch_utils import get_features_importance

importance = get_features_importance(model_lockout_option1, 'linear_layers.0.weight')

fig, axes = plt.subplots(figsize=(9,6))
x_pos = np.arange(len(importance))
axes.bar(x_pos, importance, zorder=2)
axes.set_xticks(x_pos)
axes.set_xticklabels(importance.index, rotation='vertical')
axes.set_xlim(-1,len(x_pos))
axes.tick_params(axis='both', which='major', labelsize=14)
axes.set_ylabel('Importance', fontsize=16)
axes.set_xlabel('feature', fontsize=16)
axes.set_title('Lockout', fontsize=16)
axes.grid(True, zorder=1)
plt.show()
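One common way to score input features from a trained first layer is to aggregate the absolute weights attached to each input; a hypothetical sketch of such a computation (get_features_importance in the library may define importance differently):

```python
import numpy as np

def feature_importance(weight_matrix):
    """Sum of absolute first-layer weights per input feature,
    normalized to sum to 1. weight_matrix has shape (n_out, n_in)."""
    w = np.abs(np.asarray(weight_matrix, dtype=float))
    scores = w.sum(axis=0)   # aggregate over output nodes
    return scores / scores.sum()
```

Under L1 regularization with Lockout, many of these scores are driven exactly to zero, which is what makes the bar plot above sparse.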

5. Lockout Training: Option 2

Within this option, a discrete set of t0 values is sampled. They can be entered as a 1D tensor.

Modify the following hyperparameters according to your particular problem:

  • t0_grid: List of tuples (or dictionary) of the form [(layer_name, t0_sampled)] where:
    • layer_name: layer name in the input model (string)
    • t0_sampled: 1D tensor with the constraint values t0 to be sampled in the layer
  • epochs: maximum number of epochs used for the first t0 value, t0_sampled[0]
  • epochs2: maximum number of epochs used for the rest of the t0 values, t0_sampled[1:]
import numpy as np
import torch
from lockout import Lockout

regul_type = [('linear_layers.0.weight', 1)]
regul_path = [('linear_layers.0.weight', True)]

t0_sampled = torch.from_numpy(np.geomspace(53.504620, 1e-3, num=100, endpoint=True))
t0_grid    = {'linear_layers.0.weight': t0_sampled}

# Instantiate Lockout
lockout_option2a = Lockout(lockout_forward.model_best_valid,
                          lr=1e-2, 
                          loss_type=1,
                          regul_type=regul_type,
                          regul_path=regul_path, 
                          t0_grid=t0_grid)

# Train Neural Network With Lockout
lockout_option2a.train(dl_train, dl_valid, 
                      train_how="sampling_t0", 
                      epochs=5000,
                      epochs2=200,
                      early_stopping=20, 
                      tol_loss=1e-4)

All the functionalities described above can be used here, including retrieving the model at the validation minimum and computing feature importance.

import matplotlib.pyplot as plt
import numpy as np
from lockout.pytorch_utils import save_model, get_features_importance

model_lockout_option2a = lockout_option2a.model_best_valid
save_model(model_lockout_option2a, 'model_lockout_option2a.pth')

importance = get_features_importance(model_lockout_option2a, 'linear_layers.0.weight')

fig, axes = plt.subplots(figsize=(9,6))
x_pos = np.arange(len(importance))
axes.bar(x_pos, importance, zorder=2)
axes.set_xticks(x_pos)
axes.set_xticklabels(importance.index, rotation='vertical')
axes.set_xlim(-1,len(x_pos))
axes.tick_params(axis='both', which='major', labelsize=14)
axes.set_ylabel('Importance', fontsize=16)
axes.set_xlabel('feature', fontsize=16)
axes.set_title('Lockout', fontsize=16)
axes.grid(True, zorder=1)
plt.show()

Alternatively, the discrete set of t0 values can be generated internally, in which case they are linearly sampled.

Modify the following hyperparameters according to your particular problem:

  • t0_points: list of tuples (or dictionary) of the form [(layer_name, t0_number)] where:
    • layer_name: layer name in the input model (string)
    • t0_number: number of constraint values t0 to be linearly sampled (integer)
from lockout import Lockout

regul_type = [('linear_layers.0.weight', 1)]
regul_path = [('linear_layers.0.weight', True)]
t0_points  = {'linear_layers.0.weight': 200}

# Instantiate Lockout
lockout_option2b = Lockout(lockout_forward.model_best_valid,
                          lr=1e-2, 
                          loss_type=1,
                          regul_type=regul_type,
                          regul_path=regul_path, 
                          t0_points=t0_points)

# Train Neural Network With Lockout
lockout_option2b.train(dl_train, dl_valid, 
                      train_how="sampling_t0", 
                      epochs=5000,
                      epochs2=200,
                      early_stopping=20, 
                      tol_loss=1e-4)
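The internally generated grid is linearly sampled. A sketch of what such a grid could look like, assuming a linear spacing from a starting constraint value down to zero (the exact endpoints used internally are an assumption):

```python
import numpy as np

def linear_t0_grid(t0_start, t0_number):
    """t0_number constraint values linearly spaced from t0_start down to 0."""
    return np.linspace(t0_start, 0.0, num=t0_number)
```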

Paper

https://arxiv.org/abs/2107.07160
