XAI Recommendation Toolkit

Project description

PnPXAI: Plug-and-Play Explainable AI

PnPXAI is a Python package that provides a modular and easy-to-use framework for explainable artificial intelligence (XAI). It allows users to apply various XAI methods to their own models and datasets, and visualize the results in an interactive and intuitive way.

Features

  • Detector: The detector module provides automatic detection of AI models implemented in PyTorch.
  • Evaluator: The evaluator module provides various ways to evaluate and compare the performance and explainability of AI models with the categorized evaluation properties of correctness (fidelity, area between perturbation curves), continuity (sensitivity), and compactness (complexity).
  • Explainers: The explainers module contains a collection of state-of-the-art XAI methods that can generate global or local explanations for any AI model (for example, the LRPEpsilonPlus explainer used below).
  • Recommender: The recommender module offers a recommender system that can suggest the most suitable XAI methods for a given model and dataset, based on the user’s preferences and goals.
  • Optimizer: The optimizer module finds the best hyperparameter settings for an explainer, given a user-specified metric.

Installation

To install pnpxai from pip, run the following command:

pip install pnpxai

To install pnpxai from GitHub, run the following commands:

git clone git@github.com:OpenXAIProject/pnpxai.git
cd pnpxai
pip install -e .

Getting Started

This guide explains how to automatically explain your own models and datasets using the provided Python script. The complete code is available in the project repository.

  1. Setup: The setup involves setting a random seed for reproducibility and defining the device for computation (CPU or GPU).

    import torch
    from pnpxai.utils import set_seed
    
    # Set the seed for reproducibility
    set_seed(seed=0)
    
    # Determine the device based on the availability
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    
  2. Create Experiments: An experiment is an instance for explaining a specific model and dataset. Before creating an experiment, define the model and dataset to be explained.

    Automatic explainer selection: The AutoExplanationForImageClassification method automatically selects the most applicable explainers and metrics based on the model architecture using pnpxai.XaiRecommender.

    import torch
    from torch.utils.data import DataLoader
    
    from pnpxai import AutoExplanationForImageClassification
    
    # Bring your model
    model = ...
    
    # Prepare your data
    dataset = ...
    loader = DataLoader(dataset, batch_size=...)
    def input_extractor(x):
        ...
    def label_extractor(x):
        ...
    def target_extractor(x):
        ...
    
    # Auto-explanation
    experiment = AutoExplanationForImageClassification(
        model,
        loader,
        input_extractor=input_extractor,
        label_extractor=label_extractor,
        target_extractor=target_extractor,
        target_labels=False,
    )
    optimized = experiment.optimize(
        data_ids=range(16),
        explainer_id=2,
        metric_id=1,
        direction='maximize', # or 'minimize', depending on the metric
        sampler='tpe', # Literal['tpe', 'random', 'grid']
        n_trials=50, # default: 50 for 'tpe' and 'random'; None (exhaustive) for 'grid'
        seed=42, # seed for sampler: by default, None
    )
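As a concrete illustration of the `model = ...`, `dataset = ...`, and extractor stubs above, the following sketch wires a stand-in PyTorch classifier and synthetic dataset into the three callbacks. The model, data shapes, and argmax target convention are hypothetical choices for illustration, not part of pnpxai itself; substitute your own model and dataset.

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A stand-in classifier and dataset (illustrative; replace with your own).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10)).to(device).eval()
images = torch.randn(32, 3, 8, 8)
labels = torch.randint(0, 10, (32,))
dataset = TensorDataset(images, labels)
loader = DataLoader(dataset, batch_size=8)

# Each batch is an (inputs, labels) tuple, so the extractors index into it.
def input_extractor(batch):
    return batch[0].to(device)

def label_extractor(batch):
    return batch[1].to(device)

def target_extractor(outputs):
    # Use the model's predicted class as the explanation target.
    return outputs.argmax(-1)
```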
    

    Manual explainer selection: Alternatively, you can manually specify the desired explanation method and evaluation metric using Experiment.

    from pnpxai import Experiment
    from pnpxai.core.modality import ImageModality
    from pnpxai.explainers import LRPEpsilonPlus
    from pnpxai.evaluator.metrics import MuFidelity
    
    explainer = LRPEpsilonPlus(model)
    metric = MuFidelity(model, explainer)
    modality = ImageModality()
    
    experiment = Experiment(
        model,
        loader,
        modality,
        explainers=[explainer],
        metrics=[metric],
        input_extractor=input_extractor,
        label_extractor=label_extractor,
        target_extractor=target_extractor,
    )
    

Tutorials

Use Cases

Medical Domain Explainability

  • Counterfactual Explanation (LEAR (Learn-Explain-Reinforce)) for Alzheimer’s Disease Diagnosis, a joint work with Research Task 2 (PI Bohyung Han, Seoul National University) [Reference]

  • Attribution-based Explanation for Dysarthria Diagnosis, a joint work with Research Task 3 (PI Myoung-Wan Koo, Sogang University)

LLM Trustworthiness

Documentation

The Documentation contains the API reference for all of the functionality of the framework. Primarily, high-level modules of the framework include:

  • Detector
  • Explainer
  • Recommender
  • Evaluator
  • Optimizer

Acknowledgements

This research was initiated by the KAIST XAI Center and conducted in collaboration with multiple institutions, including Seoul National University, Korea University, Sogang University, and ETRI. We are grateful for the grant from the Institute of Information & Communications Technology Planning & Evaluation (IITP) (No. RS-2022-II220984).

Citation

If you find this repository useful in your research, please consider citing our paper:

@article{kim2025pnpxai,
  title={PnPXAI: A Universal XAI Framework Providing Automatic Explanations Across Diverse Modalities and Models},
  author={Kim, Seongun and Kim, Sol A and Kim, Geonhyeong and Menadjiev, Enver and Lee, Chanwoo and Chung, Seongwook and Kim, Nari and Choi, Jaesik},
  journal={arXiv preprint arXiv:2505.10515},
  year={2025}
}

License

PnPXAI is released under the Apache License 2.0. See LICENSE for additional details.

Project details


Download files

Download the file for your platform.

Source Distribution

pnpxai-0.1.4.tar.gz (97.1 kB)

Uploaded Source

Built Distribution


pnpxai-0.1.4-py3-none-any.whl (125.1 kB)

Uploaded Python 3

File details

Details for the file pnpxai-0.1.4.tar.gz.

File metadata

  • Download URL: pnpxai-0.1.4.tar.gz
  • Size: 97.1 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.21

File hashes

Hashes for pnpxai-0.1.4.tar.gz:

  • SHA256: 40a0b3b2f6523727d32152481d8ad70e2d81778e4aa215e72e895f16f6c3bd90
  • MD5: 93c77b2100076476e9117afb8844308e
  • BLAKE2b-256: 9ac59f3a53abe42e5338601f2250cf342229e82aff6b528f90fcd1f9d45dff66


File details

Details for the file pnpxai-0.1.4-py3-none-any.whl.

File metadata

  • Download URL: pnpxai-0.1.4-py3-none-any.whl
  • Size: 125.1 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.2.0 CPython/3.9.21

File hashes

Hashes for pnpxai-0.1.4-py3-none-any.whl:

  • SHA256: 9a3b6592f1ad1fb296941e7e3032411589b366edddcc99603b92a28e0d175efb
  • MD5: 3f4c16644a509701c8a779eb8f7e09d4
  • BLAKE2b-256: 7197a15db2b3e2608d2a1a5eb4b03f76a0e7c71a17e7359d56b5270ae76fe406

