A Human-Centered Evaluation Framework for Explainability Methods

Project description

🔮 What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods

Julien Colin*, Thomas Fel*, Rémi Cadène, Thomas Serre

Read the official paper »

Documentation · GitHub · Tutorials (coming soon)

Paper summary

A multitude of explainability methods and associated fidelity performance metrics have been proposed to help better understand how modern AI systems make decisions. However, much of the current work has remained theoretical, without much consideration for the human end-user. In particular, it is not yet known (1) how useful current explainability methods are in practice for real-world scenarios and (2) how accurately the associated performance metrics predict how much knowledge individual explanations contribute to a human end-user trying to understand the inner workings of the system.

To fill this gap, we conducted psychophysics experiments at scale ($n = 1,150$) to evaluate the usefulness of attribution methods in three real-world scenarios: identifying bias in an AI system, characterizing the visual strategy it uses for tasks that are too difficult for an untrained non-expert human observer, and understanding its failure cases.

Our results demonstrate that the degree to which individual attribution methods help human participants better understand an AI system varied widely across these scenarios. This suggests a critical need for the field to move past quantitative improvements of current attribution methods towards the development of complementary approaches that provide qualitatively different sources of information to human end-users.

Human-centered Framework

In this work, we propose to measure the usefulness of attribution methods, i.e., whether they help users understand how a model works beyond simply observing how it classifies images.

More precisely, we evaluate how much attribution maps help train users to better predict a model’s decisions on unseen images. The utility score is computed from users’ relative improvement in accuracy when trained with vs. without the attribution maps.
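
As a minimal sketch of this utility score, assuming one plausible formulation (the function name and accuracy values below are illustrative and are not part of the package’s API):

# Hypothetical sketch of the utility score described above: the relative
# improvement in participants' accuracy at predicting the model's decisions
# when trained with attribution maps versus without them.
def utility_score(acc_with_explanations: float, acc_baseline: float) -> float:
    """Relative accuracy improvement attributable to the explanations."""
    return (acc_with_explanations - acc_baseline) / acc_baseline

# Example with made-up numbers: participants reach 75% accuracy on unseen
# images after training with attribution maps, versus 60% without them.
print(utility_score(0.75, 0.60))  # 0.25, i.e. a 25% relative improvement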

3 Datasets, 3 Use cases

We evaluate the usefulness of representative attribution methods on 3 important use cases for eXplainable AI in vision:

  • (1) Detecting bias in an AI system
  • (2) Identifying novel strategies discovered by an expert AI system
  • (3) Understanding failure cases of an AI system

Results

We find mixed results: current attribution methods are helpful in simple use cases, but none of the methods were helpful in diagnosing the more challenging failure cases of the system. This result highlights a fundamental challenge for XAI that, we argue, cannot be overcome by attribution methods alone and will require qualitatively different sources of information for human end-users.

Going beyond attribution methods?

We explored several possibilities for why attribution methods are not as useful in more complex scenarios. Making attribution methods (1) more faithful or (2) less visually complex does not seem like a promising avenue for future research. We argue instead for methods that contribute novel “what” information to the user, beyond the “where” information that attribution methods already provide.

🗞️ Citation

If you use or build on our work as part of your workflow in a scientific publication, please consider citing the official paper:

@article{colin2022metapredictor,
  title={What I Cannot Predict, I Do Not Understand: A Human-Centered Evaluation Framework for Explainability Methods},
  author={Colin, Julien and Fel, Thomas and Cadene, Remi and Serre, Thomas},
  journal={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2022}
}

Tutorials

Coming soon: a tutorial to evaluate your own explainability method.

📝 License

The package is released under the MIT license.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Metapredictor-0.0.0.tar.gz (5.1 kB)

Uploaded Source

Built Distribution

Metapredictor-0.0.0-py3-none-any.whl (5.3 kB)

Uploaded Python 3

File details

Details for the file Metapredictor-0.0.0.tar.gz.

File metadata

  • Download URL: Metapredictor-0.0.0.tar.gz
  • Upload date:
  • Size: 5.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for Metapredictor-0.0.0.tar.gz
  • SHA256: 50d47970fd9ccdd461e616e613596e260d8ad22a59bb4247f6df0ac6940f1a19
  • MD5: bf8a265fafb04aef93ab06b9f125d52a
  • BLAKE2b-256: ae3a50b4250d322e96056dc2e11946ef3c385122aeab34717379760207dd72b1

See more details on using hashes here.
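
As a quick sanity check, the SHA256 digest listed above can be verified against a downloaded copy of the archive using only the Python standard library (the local file path is an assumption about where the archive was saved):

import hashlib

# SHA256 digest listed above for Metapredictor-0.0.0.tar.gz
EXPECTED_SHA256 = "50d47970fd9ccdd461e616e613596e260d8ad22a59bb4247f6df0ac6940f1a19"

# Assumes the sdist was downloaded to the current working directory.
with open("Metapredictor-0.0.0.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED_SHA256 else "hash mismatch")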

File details

Details for the file Metapredictor-0.0.0-py3-none-any.whl.

File metadata

File hashes

Hashes for Metapredictor-0.0.0-py3-none-any.whl
  • SHA256: 4f216031d6633270bd0d0be88751a9059721a1e4166d034a6fc53b6d05e2514c
  • MD5: 241e84edb8380001f87c71a8e4bc0c2c
  • BLAKE2b-256: d337c5b91fd19098908d663a0aeb90875916b286edef330020bd8109e9463831

See more details on using hashes here.
