
XplainML is a comprehensive Python package designed for Explainable AI (XAI) and Responsible AI practices. It provides a suite of tools and algorithms to enhance the transparency, interpretability, and fairness of machine learning models.

Project description

XplainML: Empowering Transparent and Responsible AI


Welcome to XplainML – your ultimate toolkit for unlocking the power of transparent and responsible AI! Designed and developed by Deependra Verma, XplainML empowers data scientists, machine learning engineers, and AI practitioners to understand, interpret, and trust their models with ease.

Introduction

XplainML is an open-source Python package designed to provide transparent and responsible AI capabilities to users. With XplainML, you can easily interpret your AI models, detect and mitigate bias, ensure fairness, and promote ethical AI practices.

Features

Explainable AI (XAI) Made Easy

Unravel the mysteries of your AI models with intuitive explanations using state-of-the-art techniques like SHAP and LIME.

Responsible AI Integration

Detect and mitigate bias, ensure fairness, and promote ethical AI practices with built-in fairness metrics and bias mitigation algorithms.

Installation

To install XplainML, run:

pip install XplainML

The examples below also use the shap, lime, and aif360 packages, which can be installed separately with pip install shap lime aif360.

Usage

Explainable AI (XAI)

SHAP Explanations

# Import the necessary libraries
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Train a random forest classifier
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Initialize the SHAP explainer
explainer = shap.Explainer(model)

# Generate SHAP explanations for a sample instance
shap_values = explainer(X[:1])

# Visualize the SHAP explanations. Iris is a multiclass problem, so the
# explanation has one output per class; select a single class (here class 0)
# for the waterfall plot.
shap.plots.waterfall(shap_values[0, :, 0])
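To make the SHAP output less mysterious, here is a minimal NumPy sketch (not XplainML's or shap's implementation) of what a Shapley value is: each feature's contribution is its average marginal effect over all coalitions of the other features, with absent features replaced by a background mean. The linear toy model `f` and the data are made up for illustration, which lets the result be checked against the closed form w_i * (x_i - mean_i).

```python
from itertools import combinations
from math import factorial

import numpy as np

# Toy model: a linear function f(x) = w . x, so exact Shapley values
# can be checked against the closed form w_i * (x_i - background_mean_i).
w = np.array([2.0, -1.0, 0.5])
X_bg = np.array([[0.0, 0.0, 0.0],
                 [2.0, 2.0, 2.0]])   # background (reference) data
x = np.array([1.0, 3.0, -2.0])       # instance to explain

def f(x_row):
    return float(w @ x_row)

def shapley_values(f, x, X_bg):
    """Exact Shapley values by enumerating all feature coalitions.

    Features outside the coalition are filled with the background mean,
    a simple stand-in for the reference distribution SHAP averages over.
    """
    n = len(x)
    base = X_bg.mean(axis=0)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Classic Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                x_with = base.copy()
                x_with[list(S) + [i]] = x[list(S) + [i]]
                x_without = base.copy()
                x_without[list(S)] = x[list(S)]
                phi[i] += weight * (f(x_with) - f(x_without))
    return phi

phi = shapley_values(f, x, X_bg)
print(phi)  # per-feature contributions: w_i * (x_i - base_i)
print(phi.sum(), f(x) - f(X_bg.mean(axis=0)))  # efficiency: the two match
```

This brute-force enumeration is exponential in the number of features; libraries like shap exist precisely because they compute or approximate these values efficiently for real models.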

LIME Explanations

# Import the necessary libraries
import lime
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Train a random forest classifier
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X, y)

# Initialize the LIME explainer
explainer = lime.lime_tabular.LimeTabularExplainer(
    X, feature_names=iris.feature_names, class_names=iris.target_names
)

# Generate LIME explanations for a sample instance
explanation = explainer.explain_instance(X[0], model.predict_proba)

# Visualize the LIME explanations (renders inline in a Jupyter notebook)
explanation.show_in_notebook()
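The core idea behind LIME can be sketched in a few lines of NumPy: sample perturbations around the instance, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation. The black-box function and kernel width below are illustrative choices, not lime's actual internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box model to explain locally: f(x) = 3*x0 - 2*x1 + x0*x1.
# Its gradient at the instance below is (3 + x1, -2 + x0) = (5, -1).
def predict(X):
    return 3 * X[:, 0] - 2 * X[:, 1] + X[:, 0] * X[:, 1]

x0 = np.array([1.0, 2.0])  # instance to explain

# 1) Sample perturbations around the instance
Z = x0 + rng.normal(scale=0.5, size=(500, 2))

# 2) Weight samples by proximity to the instance (RBF kernel)
d2 = ((Z - x0) ** 2).sum(axis=1)
wts = np.exp(-d2 / 0.25)

# 3) Fit a weighted linear surrogate via weighted least squares
A = np.hstack([Z, np.ones((len(Z), 1))])  # features + intercept column
sw = np.sqrt(wts)[:, None]
coef, *_ = np.linalg.lstsq(sw * A, sw[:, 0] * predict(Z), rcond=None)

# The surrogate's coefficients approximate the local gradient (about 5, -1)
print(coef[:2])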

Responsible AI Integration

Bias Detection

# Import the necessary libraries
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the German credit dataset
dataset = GermanDataset()
privileged_group = [{'sex': 1}]
unprivileged_group = [{'sex': 0}]

# Compute metrics for bias detection (note the plural keyword names)
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=unprivileged_group,
    privileged_groups=privileged_group,
)
print("Mean Difference:", metric.mean_difference())
print("Disparate Impact:", metric.disparate_impact())
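Both metrics reduce to simple arithmetic on group-wise favorable-outcome rates. A small self-contained sketch with made-up labels (not the German credit data) shows exactly what aif360 computes:

```python
import numpy as np

# Toy binary labels (1 = favorable outcome) and a protected attribute
# (1 = privileged group, 0 = unprivileged group)
y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
sex = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])

p_priv = y[sex == 1].mean()    # favorable rate, privileged group (0.8)
p_unpriv = y[sex == 0].mean()  # favorable rate, unprivileged group (0.2)

mean_difference = p_unpriv - p_priv   # 0 means parity; negative favors privileged
disparate_impact = p_unpriv / p_priv  # 1 means parity; < 0.8 is a common red flag
print(mean_difference, disparate_impact)
```

Here the mean difference is -0.6 and the disparate impact is 0.25, well below the "four-fifths rule" threshold of 0.8 that is often used as a screening heuristic.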

Bias Mitigation

# Import the necessary libraries
from aif360.algorithms.preprocessing import Reweighing

# Apply bias mitigation using Reweighing. fit_transform expects the
# original BinaryLabelDataset, not a pandas DataFrame.
rw = Reweighing(unprivileged_groups=unprivileged_group, privileged_groups=privileged_group)
transformed_dataset = rw.fit_transform(dataset)

# Verify bias mitigation results on the reweighted dataset
metric_transformed = BinaryLabelDatasetMetric(
    transformed_dataset,
    unprivileged_groups=unprivileged_group,
    privileged_groups=privileged_group,
)
print("Mean Difference after mitigation:", metric_transformed.mean_difference())
print("Disparate Impact after mitigation:", metric_transformed.disparate_impact())
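Reweighing itself is a short computation: each (group, label) cell gets the weight P(A=a)P(Y=v) / P(A=a, Y=v), which makes the protected attribute and the label statistically independent in the weighted data. A minimal sketch on the same toy labels as above (assuming every cell is non-empty):

```python
import numpy as np

y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])     # 1 = favorable outcome
sex = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])   # 1 = privileged group

# Weight for cell (A=a, Y=v): P(A=a) * P(Y=v) / P(A=a, Y=v)
weights = np.empty(len(y))
for a in (0, 1):
    for v in (0, 1):
        cell = (sex == a) & (y == v)
        weights[cell] = (sex == a).mean() * (y == v).mean() / cell.mean()

# After reweighting, the weighted favorable rate is identical in both groups
def weighted_rate(a):
    m = sex == a
    return (weights[m] * y[m]).sum() / weights[m].sum()

print(weighted_rate(0), weighted_rate(1))  # equal rates: mean difference is 0
```

This is why Reweighing is a preprocessing technique: it changes sample weights rather than features or labels, and any downstream learner that accepts sample weights can train on the debiased distribution.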

Contributing

We welcome contributions from the community! Whether it's fixing bugs, adding new features, or improving documentation, your contributions help make XplainML better for everyone. Check out our Contributing Guidelines to get started.

License

XplainML is licensed under the MIT License. See the LICENSE file for details.

About the Author

Deependra Verma
Data Scientist
Email | LinkedIn | GitHub | Portfolio

