Deep Logic: interpretable neural networks in Python.
Deep Logic is a Python package providing a set of utilities to build deep learning models that are explainable by design.
This library provides APIs to:
prune a standard model to obtain a deep logic model
extract logical formulas explaining network predictions
validate the input data, the model architecture, and the pruning strategy
Quick start
You can install Deep Logic along with all its dependencies from PyPI:
$ pip install deep-logic
Example
First of all, we need to import some useful libraries:
import torch
import numpy as np
import deep_logic as dl
In most cases it is recommended to fix the random seed for reproducibility:
torch.manual_seed(0)
np.random.seed(0)
For this simple experiment, let's set up the XOR problem as a toy example:
x = torch.tensor([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1],
], dtype=torch.float)
y = torch.tensor([0, 1, 1, 0], dtype=torch.float).unsqueeze(1)
We can instantiate a simple feed-forward neural network with 2 layers:
layers = [
    torch.nn.Linear(2, 4),
    torch.nn.Sigmoid(),
    torch.nn.Linear(4, 1),
    torch.nn.Sigmoid(),
]
model = torch.nn.Sequential(*layers)
Before training the network, we should validate the input data and the network architecture. The requirements are the following:
all the input features should be in $[0,1]$;
all the activation functions should be sigmoids.
dl.validate_data(x)
dl.validate_network(model)
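If your features are not already in $[0,1]$, a simple min-max rescaling is enough to satisfy the first requirement. The snippet below is a minimal sketch using plain PyTorch on hypothetical raw data; it is not part of the Deep Logic API:

# Hypothetical raw data rescaled to [0, 1] with min-max normalization
x_raw = torch.tensor([[0.0, 10.0], [2.0, 30.0], [4.0, 20.0]])
x_min, _ = x_raw.min(dim=0)
x_max, _ = x_raw.max(dim=0)
x_scaled = (x_raw - x_min) / (x_max - x_min)  # each feature now lies in [0, 1]
dl.validate_data(x_scaled)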
We can now train the network for 1000 epochs, pruning the weights with the lowest absolute values after epoch 500:
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
model.train()
for epoch in range(1000):
    # forward pass
    optimizer.zero_grad()
    y_pred = model(x)
    # compute loss
    loss = torch.nn.functional.binary_cross_entropy(y_pred, y)
    # backward pass
    loss.backward()
    optimizer.step()

    # compute accuracy
    if epoch % 100 == 0:
        y_pred_d = (y_pred > 0.5)
        accuracy = (y_pred_d.eq(y).sum(dim=1) == y.size(1)).sum().item() / y.size(0)
        print(f'Epoch {epoch}: train accuracy: {accuracy:.4f}')

    # pruning
    if epoch > 500:
        model = dl.prune_equal_fanin(model, 2)
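For reference, prune_equal_fanin constrains every neuron to keep the same small number of incoming connections by zeroing the weights with the smallest absolute values. The sketch below only illustrates that idea on a single linear layer, under the assumption that pruning is magnitude-based per neuron; it is not the library's actual implementation:

# Illustrative sketch (not the Deep Logic implementation): keep the k
# largest-magnitude incoming weights of each neuron and zero the rest.
def prune_to_fanin(linear: torch.nn.Linear, k: int) -> None:
    with torch.no_grad():
        w = linear.weight                      # shape: (out_features, in_features)
        topk = w.abs().topk(k, dim=1).indices  # k largest |w| per output neuron
        mask = torch.zeros_like(w)
        mask.scatter_(1, topk, 1.0)            # 1 where a weight is kept
        w.mul_(mask)                           # zero out the remaining weights

Applying such a mask repeatedly during the last training epochs lets the surviving weights adapt while the fan-in of each neuron stays fixed at k (here, 2).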
Once the network is trained, the fol module can be used to generate first-order logic explanations of its predictions:
# generate explanations
weights, biases = dl.collect_parameters(model)
f = dl.fol.generate_fol_explanations(weights, biases)[0]
print(f'Explanation: {f}')
For this problem, the generated explanation is (f1 & ~f2) | (f2 & ~f1), which corresponds to f1 XOR f2.
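As a sanity check, the extracted formula can be evaluated on the truth table and compared against the thresholded network outputs. Here is a minimal sketch in plain Python, assuming the feature order f1, f2 matches the input columns:

# Evaluate the extracted XOR formula on every input and compare with the model
y_model = (model(x) > 0.5).squeeze(1)
y_formula = torch.tensor([(bool(f1) and not bool(f2)) or (bool(f2) and not bool(f1))
                          for f1, f2 in x.tolist()])
print((y_model == y_formula).all().item())  # True if the formula matches the network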
Theory
Theoretical foundations can be found in the following papers.
Learning of constraints:
@inproceedings{ciravegna2020constraint,
  title={A Constraint-Based Approach to Learning and Explanation},
  author={Ciravegna, Gabriele and Giannini, Francesco and Melacci, Stefano and Maggini, Marco and Gori, Marco},
  booktitle={AAAI},
  pages={3658--3665},
  year={2020}
}
Learning with constraints:
@inproceedings{marra2019lyrics,
  title={LYRICS: A General Interface Layer to Integrate Logic Inference and Deep Learning},
  author={Marra, Giuseppe and Giannini, Francesco and Diligenti, Michelangelo and Gori, Marco},
  booktitle={Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
  pages={283--298},
  year={2019},
  organization={Springer}
}
Constraints theory in machine learning:
@book{gori2017machine,
  title={Machine Learning: A constraint-based approach},
  author={Gori, Marco},
  year={2017},
  publisher={Morgan Kaufmann}
}
Licence
Copyright 2020 Pietro Barbiero.
Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0.
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and limitations under the License.