A multi-package setup for AffectLog AI modules.
Project description
AL360° Trustworthy AI Toolbox
The AL360° Trustworthy AI Toolbox is an end-to-end suite of tools designed to assess, develop, and deploy AI systems in a safe, trustworthy, and ethical manner. With this toolbox, stakeholders of AI systems can better understand their models, make trustworthy data-driven decisions, and take corrective actions to ensure fairness, transparency, and accountability in AI development.
This toolbox integrates several open-source tools and libraries into a unified experience, enabling users to identify model errors, evaluate fairness, understand predictions, and optimize decision-making. It is aimed at both technical users, such as data scientists and engineers, and non-technical stakeholders, such as business leaders and policy-makers.
Key Features
- Error Analysis: Identify cohorts where models underperform using Error Analysis.
- Fairness Assessment: Evaluate fairness across sensitive attributes with Fairlearn (see the sketch after this list).
- Model Interpretability: Understand model predictions through InterpretML.
- Counterfactual Analysis: Explore counterfactual scenarios with DiCE.
- Causal Analysis: Conduct causal inference with EconML.
- Data Balance Diagnostics: Visualize and mitigate imbalances in datasets.
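To make the Fairness Assessment item concrete, here is a minimal sketch using Fairlearn's MetricFrame directly; the tiny dataset and logistic-regression model are illustrative stand-ins, not part of the toolbox itself.

```python
# Minimal Fairlearn sketch: compare a model's accuracy and selection rate
# across groups defined by a sensitive attribute. The dataset and model
# below are illustrative stand-ins.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

X = pd.DataFrame({"feature_a": [0.2, 1.4, 3.1, 0.7, 2.2, 1.9, 0.5, 2.8],
                  "feature_b": [1, 0, 1, 1, 0, 1, 0, 0]})
y = pd.Series([0, 0, 1, 0, 1, 1, 0, 1])
sex = pd.Series(["F", "M", "F", "M", "F", "M", "F", "M"])  # sensitive attribute

model = LogisticRegression().fit(X, y)
y_pred = model.predict(X)

# Disaggregate metrics by group to surface disparities.
mf = MetricFrame(metrics={"accuracy": accuracy_score,
                          "selection_rate": selection_rate},
                 y_true=y, y_pred=y_pred, sensitive_features=sex)
print(mf.overall)    # metrics on the whole dataset
print(mf.by_group)   # metrics per group of the sensitive attribute
```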
Repository Overview
Repository | Description |
---|---|
Affectlog360 | Central repository containing the AL360° Trustworthy AI tools including dashboards for model interpretability, fairness, error analysis, and data balance analysis. |
Affectlog360 Mitigations | Library for applying mitigation techniques to improve model fairness and performance. |
Trustworthy-AI-Tracker | JupyterLab extension for managing, tracking, and comparing machine learning experiments. |
Affectlog360 GenBit | Library to measure and mitigate gender bias in Natural Language Processing (NLP) models. |
Installation
You can install the AL360° Trustworthy AI Toolbox using the following command:
pip install al360_taiwidgets
Note: If you're running this in Jupyter, restart the kernel after installation to ensure the tools are loaded correctly.
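If you are installing from a notebook cell, the %pip magic (standard IPython, not specific to this toolbox) installs into the running kernel's environment; as noted above, restart the kernel afterwards.

```python
# Inside a Jupyter cell: install into the environment of the running kernel.
%pip install al360_taiwidgets
# Afterwards, restart the kernel (Kernel -> Restart Kernel) so the
# freshly installed widgets are picked up.
```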
AL360° Trustworthy AI Dashboard
The AL360° Trustworthy AI Dashboard is the core component of the toolbox. It provides an intuitive interface that allows users to explore model performance, errors, fairness, and feature importance, as well as generate actionable insights for decision-making. It integrates multiple tools into a cohesive flow that helps practitioners quickly diagnose issues and improve models.
Capabilities:
- Error Detection: Find and analyze cohorts where the model exhibits higher error rates.
- Fairness: Understand how the model impacts different demographic groups and mitigate disparities.
- Interpretability: Explain model predictions both globally and locally (illustrated in the sketch after this list).
- Causal Analysis: Apply What-If scenarios to see how changes in features impact outcomes.
- Data Balance: Visualize data distributions and feature imbalances.
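As a standalone illustration of the Interpretability capability, the sketch below uses InterpretML (one of the libraries listed under Key Features) on a public scikit-learn dataset; it shows global and local explanations for a glassbox model and is independent of the dashboard itself.

```python
# Standalone InterpretML sketch: global and local explanations for a
# glassbox model, independent of the AL360° dashboard.
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
ebm = ExplainableBoostingClassifier()
ebm.fit(X, y)

show(ebm.explain_global())              # overall feature importances
show(ebm.explain_local(X[:5], y[:5]))   # explanations for five predictions
```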
Dashboard Flows:
You can customize the AL360° dashboard to suit different use cases:
- Model Overview -> Error Analysis -> Data Explorer: Identify model errors and diagnose root causes in the data.
- Model Overview -> Fairness Assessment -> Data Explorer: Diagnose fairness issues by understanding feature distributions.
- Model Overview -> Error Analysis -> Counterfactuals Analysis: Use counterfactuals to understand what changes would alter individual predictions.
- Data Explorer -> Causal Inference: Explore causal relationships in the data to make data-driven decisions (see the sketch below).
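For the causal-inference flow, EconML (named under Key Features) is the underlying library; the sketch below is a minimal, self-contained example on simulated data, and the LinearDML defaults shown may vary between EconML versions.

```python
# Minimal EconML sketch on simulated data: estimate how a treatment T
# affects an outcome Y, controlling for confounders W, with effect
# heterogeneity captured by features X.
import numpy as np
from econml.dml import LinearDML

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))                 # effect-modifying features
W = rng.normal(size=(n, 3))                 # confounders / controls
T = 0.5 * W[:, 0] + rng.normal(size=n)      # continuous treatment
Y = 2.0 * T + W[:, 0] + rng.normal(size=n)  # outcome with true effect ~2.0

est = LinearDML(random_state=0)
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))   # estimated treatment effects, close to 2.0
```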
Explore different flows and usage examples in our detailed guides:
Use Cases
The tools within the AL360° Trustworthy AI Toolbox support various model types and AI use cases, including:
- Tabular Data Models: Used in structured datasets like financial records, medical datasets, etc.
- Text Models: NLP models for classification, question-answering, and more.
- Vision Models: Analyze computer vision models for object detection, classification, and more.
Check out the following notebooks for example use cases:
Supported Models and Frameworks
The AL360° Trustworthy AI Toolbox supports models trained on datasets in the following formats:
- Python: numpy.ndarray, pandas.DataFrame, iml.datatypes.DenseData, scipy.sparse.csr_matrix.
- Deep Learning: PyTorch, TensorFlow, and Keras models.
- Scikit-learn Pipelines: any model with a predict or predict_proba method compatible with Scikit-learn's API (see the sketch below).
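To make the compatibility requirement concrete, the sketch below builds a scikit-learn Pipeline on a pandas.DataFrame; any estimator that exposes predict (and predict_proba for classifiers) in this way should work with the tools. The toy data is illustrative.

```python
# A model is compatible if it exposes predict (and, for classifiers,
# predict_proba) on the supported data formats such as pandas.DataFrame.
import pandas as pd
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = pd.DataFrame({"age": [23, 45, 31, 52, 37, 29],
                  "income": [28_000, 64_000, 41_000, 80_000, 52_000, 33_000]})
y = [0, 1, 0, 1, 1, 0]

pipeline = Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression())])
pipeline.fit(X, y)

assert hasattr(pipeline, "predict") and hasattr(pipeline, "predict_proba")
print(pipeline.predict_proba(X.head(2)))  # class probabilities for two rows
```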
Jupyter Integration
The AL360° Trustworthy AI Toolbox provides seamless integration with Jupyter notebooks for managing, tracking, and comparing models. You can explore your model's fairness, interpretability, and data balance directly within JupyterLab.
Contributing
We welcome contributions to the AL360° Trustworthy AI Toolbox. Please read our CONTRIBUTING.md guide for more details on how to get involved.
License
This project is licensed under the MIT License.
File details
Details for the file affectlog360-0.1.1.tar.gz.
File metadata
- Download URL: affectlog360-0.1.1.tar.gz
- Upload date:
- Size: 5.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.7
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 3c122d7ac7b970efad22042fdf98b5d6f1bab02045d8ac8c2ba604e777cde098 |
MD5 | b7992793de715fe6a49520cd6b0f5e33 |
BLAKE2b-256 | e401f7d621f3ce2d1357cf1a9ad30ae9cb7151af85f3fc29c3054017d4dac4b9 |
File details
Details for the file affectlog360-0.1.1-py3-none-any.whl.
File metadata
- Download URL: affectlog360-0.1.1-py3-none-any.whl
- Upload date:
- Size: 4.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.7
File hashes
Algorithm | Hash digest |
---|---|
SHA256 | 8d28f5a5c0fc02c6c923b23e555998ca81178b5ccf4ec87d0af0b661eab7298d |
MD5 | bb25618f9f84e9aaf14cf83117e13ab9 |
BLAKE2b-256 | aece7856f4b8b5638ad8c7874baad7efecceef2d28ba224c4812593f36cc410a |