Model interpretability library for PyTorch with a focus on time series.
Time Interpret (tint)
This library extends the Captum library with a specific focus on time series. For more details, please see the documentation and our paper.
Install
Time Interpret can be installed with pip:
pip install time_interpret
Please see the documentation for alternative installation modes.
Quick-start
First, let's load an Arma dataset:
from tint.datasets import Arma
arma = Arma()
arma.download() # This method generates the dataset
We then load some test data from the dataset and the corresponding true saliency:
inputs = arma.preprocess()["x"][0]
true_saliency = arma.true_saliency(dim=1)[0]
We can now load an attribution method and use it to compute the saliency:
from tint.attr import TemporalIntegratedGradients
explainer = TemporalIntegratedGradients(arma.get_white_box)
baselines = inputs * 0
attr = explainer.attribute(
    inputs,
    baselines=baselines,
    additional_forward_args=(true_saliency,),
    # Mark the additional forward arg as temporal so it is sliced
    # along time together with the inputs
    temporal_additional_forward_args=(True,),
).abs()
Finally, we evaluate our method using the true saliency and a white box metric:
from tint.metrics.white_box import aup
print(f"{aup(attr, true_saliency):.4}")
Methods
- AugmentedOcclusion
- BayesKernelShap
- BayesLime
- Discretized Integrated Gradients
- DynaMask
- ExtremalMask
- Fit
- LofKernelShap
- LofLime
- Non-linearities Tunnel
- Occlusion
- Retain
- SequentialIntegratedGradients
- TemporalAugmentedOcclusion
- TemporalOcclusion
- TemporalIntegratedGradients
- TimeForwardTunnel
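Each explainer follows the Captum attribution interface: it is built with a forward function, and attributions are computed with an attribute call, as in the quick-start above. The sketch below applies AugmentedOcclusion to the same Arma example; the data argument (background samples used to draw baselines) and the sliding_window_shapes value are assumptions based on Captum's Occlusion API, so check the documentation for the exact signature:
from tint.attr import AugmentedOcclusion

# Hedged sketch: `data` (baselines sampled from background data) and
# `sliding_window_shapes` are assumed here, following Captum's Occlusion API.
explainer = AugmentedOcclusion(arma.get_white_box, data=inputs)
attr = explainer.attribute(
    inputs,
    sliding_window_shapes=(1,),
    additional_forward_args=(true_saliency,),
).abs()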
This package also provides several datasets, models and metrics. Please refer to the documentation for more details.
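For instance, the other synthetic datasets are expected to expose the same interface as Arma above (download, preprocess and, for white box datasets, true_saliency). In the sketch below, the Hmm class name and its interface are assumptions based on that pattern:
from tint.datasets import Hmm  # class name assumed; see the documentation

hmm = Hmm()
hmm.download()  # generates the hidden Markov model dataset
hmm_inputs = hmm.preprocess()["x"][0]
hmm_saliency = hmm.true_saliency()[0]  # assumed to mirror Arma.true_saliency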
Paper: Learning Perturbations to Explain Time Series Predictions
The experiments for the paper Learning Perturbations to Explain Time Series Predictions can be found in the experiments folder of the repository.
Paper: Sequential Integrated Gradients: a simple but effective method for explaining language models
The experiments for the paper Sequential Integrated Gradients: a simple but effective method for explaining language models can be found in the NLP section of the experiments.
TSInterpret
More methods for interpreting the predictions of time series classifiers are collected in TSInterpret, another library with a specific focus on time series. Time Interpret was developed concurrently, without awareness of this library at the time.
Acknowledgment
- Jonathan Crabbe for the DynaMask implementation.
- Sana Tonekaboni for the FIT implementation.
- INK Lab for the Discretized Integrated Gradients implementation.
- Dylan Slack for the BayesLime and BayesShap implementations.