# VisuaLIME

*Visual explanations for humans*

VisuaLIME, created as part of the XAI Demonstrator project, is an implementation of LIME [1] focused on producing visual local explanations for image classifiers.

In contrast to the reference implementation, VisuaLIME exclusively supports image classification and gives its users full control over the properties of the generated explanations. It was written to produce stable, reliable, and expressive explanations at scale.
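To make the underlying idea concrete, here is a minimal, simplified sketch of LIME's core computation for images (this is an illustration of the general algorithm, not VisuaLIME's actual code): segments of the image are randomly switched on and off, the classifier is queried on the perturbed images, and a weighted linear model is fitted whose coefficients indicate each segment's influence. The function name and kernel choice below are assumptions for illustration.

```python
import numpy as np

def lime_segment_weights(samples, predictions, kernel_width=0.25):
    """Sketch of LIME's weighted linear fit.

    samples: (n, num_segments) binary masks, 1 = segment kept, 0 = segment hidden
    predictions: (n,) classifier outputs for the class being explained
    """
    # proximity of each perturbed sample to the original (all-segments-on) image
    distances = 1.0 - samples.mean(axis=1)
    weights = np.exp(-(distances ** 2) / kernel_width ** 2)

    # weighted least squares via sqrt-weight scaling of a linear model with intercept
    sw = np.sqrt(weights)[:, None]
    X = np.hstack([samples, np.ones((samples.shape[0], 1))])
    coef, *_ = np.linalg.lstsq(X * sw, predictions * sw[:, 0], rcond=None)
    return coef[:-1]  # per-segment weights; the intercept is dropped
```

A positive weight means the segment pushed the prediction towards the explained class, a negative weight means it pushed against it.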

## Getting Started

VisuaLIME provides two functions that package its building blocks into a reference explanation pipeline:

```python
from visualime.explain import explain_classification, render_explanation

image = ...  # a numpy array of shape (width, height, 3) representing an RGB image

def predict_fn(images):
    # a function that takes a numpy array of shape (num_of_samples, width, height, 3)
    # representing num_of_samples RGB images and returns a numpy array of
    # shape (num_of_samples, num_of_classes) where each entry corresponds to the
    # classifier's output for the respective image
    predictions = ...
    return predictions

segment_mask, segment_weights = explain_classification(image=image, predict_fn=predict_fn)

explanation = render_explanation(
    image,
    segment_mask,
    segment_weights,
    positive="green",
    negative="red",
    coverage=0.2,
)
```


For a full example, see the example notebook on GitHub.
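To illustrate the interface that `predict_fn` is expected to satisfy, here is a hypothetical stand-in built with plain NumPy. The "model" (channel means followed by a softmax) is a toy placeholder, not a real classifier; only the input and output shapes matter.

```python
import numpy as np

NUM_CLASSES = 3  # assumed number of classes for this toy example

def predict_fn(images: np.ndarray) -> np.ndarray:
    # expects shape (num_of_samples, width, height, 3)
    # toy "logits": the mean intensity of each color channel
    logits = images.mean(axis=(1, 2))  # shape (num_of_samples, 3)
    # softmax so each row is a probability distribution over the classes
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

batch = np.random.default_rng(0).random((5, 32, 32, 3))
probs = predict_fn(batch)
# probs has shape (num_of_samples, num_of_classes) == (5, 3)
```

Any callable with this signature works, whether it wraps a PyTorch, TensorFlow, or scikit-learn model.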

## Roadmap

- Verify that the algorithm matches the original LIME and document differences
- Build out a comprehensive unit test suite
- Add performance benchmarks and optimize implementation of the algorithm
- Include utilities to assess and tune explanations for stability and truthfulness
- Provide a best practice example of implementing a fully configurable LIME explainer

## References

[1] Ribeiro et al.: "Why Should I Trust You?": Explaining the Predictions of Any Classifier (arXiv:1602.04938, 2016)
