PyTorch Explain: Logic Explained Networks in Python.

PyTorch, Explain! is an extension library for PyTorch to develop explainable deep learning models called Logic Explained Networks (LENs).

It implements explainability methods from a variety of published papers, together with the APIs required to extract first-order logic explanations from deep neural networks.

Quick start

You can install torch_explain along with all its dependencies from PyPI:

pip install torch-explain
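
If the installation succeeded, the package should import cleanly alongside PyTorch. A minimal sanity check (a sketch; it only uses names that appear in the example below):

import torch
import torch_explain as te

print(torch.__version__)      # installed PyTorch version
print(te.nn.EntropyLinear)    # the layer used in the example below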

Example

For this simple experiment, let’s solve the XOR problem (augmented with 100 dummy features):

import torch
import torch_explain as te

# XOR truth table on the first two features, padded with 100 dummy features
x0 = torch.zeros((4, 100))
x_train = torch.tensor([
    [0, 0],
    [0, 1],
    [1, 0],
    [1, 1],
], dtype=torch.float)
x_train = torch.cat([x_train, x0], dim=1)  # shape: (4, 102)
y_train = torch.tensor([0, 1, 1, 0], dtype=torch.long)  # XOR labels

We can instantiate a simple feed-forward neural network with 3 layers, using an EntropyLinear layer as the first one:

layers = [
    te.nn.EntropyLinear(x_train.shape[1], 10, n_classes=2),  # entropy-based first layer, one head per class
    torch.nn.LeakyReLU(),
    torch.nn.Linear(10, 4),
    torch.nn.LeakyReLU(),
    torch.nn.Linear(4, 1),  # one logit per class
]
model = torch.nn.Sequential(*layers)
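
A quick forward pass shows the shape the training loop below expects (a sketch; the per-class output dimension is inferred from the squeeze(-1) used in the next snippet):

with torch.no_grad():
    out = model(x_train)
print(out.shape)  # expected: (4, 2, 1), i.e. (batch, n_classes, 1)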

We can now train the network by optimizing the cross-entropy loss together with the entropy_logic_loss regularization term, which encodes the human prior towards simple explanations:

optimizer = torch.optim.AdamW(model.parameters(), lr=0.01)
loss_form = torch.nn.CrossEntropyLoss()
model.train()
for epoch in range(1001):
    optimizer.zero_grad()
    y_pred = model(x_train).squeeze(-1)  # logits of shape (4, 2)
    # cross entropy plus a small entropy regularization term favouring simple explanations
    loss = loss_form(y_pred, y_train) + 0.00001 * te.nn.functional.entropy_logic_loss(model)
    loss.backward()
    optimizer.step()
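
Before extracting explanations, it is worth checking that the network actually fits the XOR data. A minimal evaluation sketch in plain PyTorch (not part of the library API):

model.eval()
with torch.no_grad():
    logits = model(x_train).squeeze(-1)  # shape (4, 2)
    train_acc = (logits.argmax(dim=-1) == y_train).float().mean()
print(f'training accuracy: {train_acc.item():.2f}')  # should approach 1.00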

Once trained, we can extract first-order logic formulas describing how the network composes the input features to obtain its predictions:

from torch_explain.logic.nn import entropy
from torch.nn.functional import one_hot

y1h = one_hot(y_train)  # one-hot labels, shape (4, 2)
explanation, _ = entropy.explain_class(model, x_train, y1h, x_train, y1h, target_class=1)

Explanations will be logic formulas in disjunctive normal form. In this case, the explanation will be y=1 IFF (f1 AND ~f2) OR (f2 AND ~f1), corresponding to y=1 IFF f1 XOR f2.
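
As a quick hand check, independent of the library, the extracted DNF can be evaluated directly on the four training points (f1 and f2 denote the first two input features):

f1, f2 = x_train[:, 0].bool(), x_train[:, 1].bool()
dnf = (f1 & ~f2) | (f2 & ~f1)  # (f1 AND ~f2) OR (f2 AND ~f1)
print(dnf.long())              # tensor([0, 1, 1, 0]) -- matches y_train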

The quality of the logic explanation can be quantitatively assessed in terms of classification accuracy and rule complexity as follows:

from torch_explain.logic.metrics import test_explanation, complexity

accuracy, preds = test_explanation(explanation, x_train, y1h, target_class=1)
explanation_complexity = complexity(explanation)

In this case the accuracy is 100% and the complexity is 4.
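
The returned values can simply be inspected (plain Python; the numbers reported above should be reproduced):

print('explanation accuracy:', accuracy)                  # reported above as 100%
print('explanation complexity:', explanation_complexity)  # reported above as 4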

Experiments

Training

To train the model(s) in the paper, run the scripts and notebooks inside the folder experiments.

Results

Results on the test set and the extracted logic formulas will be saved in the experiments/results folder.

Data

The original datasets can be downloaded from the links provided in the supplementary material of the paper.

Theory

Theoretical foundations can be found in the following papers.

Entropy-based LENs:

@article{barbiero2021entropy,
  title={Entropy-based Logic Explanations of Neural Networks},
  author={Barbiero, Pietro and Ciravegna, Gabriele and Giannini, Francesco and Li{\'o}, Pietro and Gori, Marco and Melacci, Stefano},
  journal={arXiv preprint arXiv:2106.06804},
  year={2021}
}

Psi network (“learning of constraints”):

@inproceedings{ciravegna2020constraint,
  title={A Constraint-Based Approach to Learning and Explanation.},
  author={Ciravegna, Gabriele and Giannini, Francesco and Melacci, Stefano and Maggini, Marco and Gori, Marco},
  booktitle={AAAI},
  pages={3658--3665},
  year={2020}
}

Learning with constraints:

@inproceedings{marra2019lyrics,
  title={LYRICS: A General Interface Layer to Integrate Logic Inference and Deep Learning},
  author={Marra, Giuseppe and Giannini, Francesco and Diligenti, Michelangelo and Gori, Marco},
  booktitle={Joint European Conference on Machine Learning and Knowledge Discovery in Databases},
  pages={283--298},
  year={2019},
  organization={Springer}
}

Constraints theory in machine learning:

@book{gori2017machine,
  title={Machine Learning: A constraint-based approach},
  author={Gori, Marco},
  year={2017},
  publisher={Morgan Kaufmann}
}

Authors

  • Pietro Barbiero, University of Cambridge, UK.

  • Francesco Giannini, University of Florence, IT.

  • Gabriele Ciravegna, University of Florence, IT.

  • Dobrik Georgiev, University of Cambridge, UK.

Licence

Copyright 2020 Pietro Barbiero, Francesco Giannini, Gabriele Ciravegna, and Dobrik Georgiev.

Licensed under the Apache License, Version 2.0 (the “License”); you may not use this file except in compliance with the License. You may obtain a copy of the License at: http://www.apache.org/licenses/LICENSE-2.0.

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and limitations under the License.
