Python SDK to explain models, generate counterfactual examples, analyze causal effects, and analyze errors in machine learning models.
Project description
Trustworthy AI Model Analysis SDK for Python
This package has been tested with Python 3.7, 3.8, 3.9 and 3.10
The Trustworthy AI Model Analysis SDK enables users to analyze their machine learning models through a single API: analyzing errors, explaining the most important features, computing counterfactuals, and running causal analysis.
Highlights of the package include:
- `explainer.add()` explains the model
- `counterfactuals.add()` computes counterfactuals
- `error_analysis.add()` runs error analysis
- `causal.add()` runs causal analysis
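A non-runnable sketch of how these calls might fit together. Everything except the four `add()` method names above is an assumption (the `TAIInsights` entry-point name, its constructor arguments, and the `compute()` call are modeled on similar toolkits and are not confirmed by this page; check the package documentation):

```python
# Hypothetical sketch; all names except the add() calls are assumed.
from trustworthyai import TAIInsights  # assumed entry point

tai = TAIInsights(model=model, train=train_df, test=test_df,
                  target_column="label", task_type="classification")

tai.explainer.add()          # model explanations
tai.counterfactuals.add()    # counterfactual examples
tai.error_analysis.add()     # error analysis
tai.causal.add()             # causal analysis

tai.compute()                # run all requested analyses
```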
Supported scenarios, models and datasets
trustworthyai supports computation of Trustworthy AI insights for scikit-learn models trained on a pandas.DataFrame. trustworthyai accepts both models and pipelines as input, as long as the model or pipeline implements a predict or predict_proba function that conforms to the scikit-learn convention. If your model is not compatible, you can wrap its prediction function in a wrapper class that transforms the output into the supported format (the predict or predict_proba convention of scikit-learn) and pass that wrapper class to the modules in trustworthyai.
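A minimal sketch of the wrapper pattern described above. `LegacyScorer` is a hypothetical stand-in for your own incompatible model; only the adapter exposing `predict` / `predict_proba` in the scikit-learn convention is the point here:

```python
class LegacyScorer:
    """Hypothetical model whose native API does not match scikit-learn."""
    def score_rows(self, rows):
        # Returns one probability-of-positive-class per row.
        return [min(1.0, max(0.0, sum(r) / (len(r) * 10.0))) for r in rows]

class SklearnStyleWrapper:
    """Adapt a model to the predict / predict_proba convention."""
    def __init__(self, model, threshold=0.5):
        self.model = model
        self.threshold = threshold

    def predict_proba(self, X):
        # scikit-learn convention: one row per sample, one column per class.
        pos = self.model.score_rows(X)
        return [[1.0 - p, p] for p in pos]

    def predict(self, X):
        # Hard class labels derived from the positive-class probability.
        return [int(p >= self.threshold) for p in self.model.score_rows(X)]

wrapped = SklearnStyleWrapper(LegacyScorer())
print(wrapped.predict([[1, 2, 3], [9, 9, 9]]))  # → [0, 1]
```

An instance of `SklearnStyleWrapper` can then be passed wherever trustworthyai expects a model, since it exposes the two conventional prediction methods.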
Currently, we support datasets with numerical and categorical features. The following table lists the scenarios supported for each of the four trustworthy AI insights:
| RAI insight | Binary classification | Multi-class classification | Multilabel classification | Regression | Time series forecasting | Categorical features | Text features | Image features | Recommender systems | Reinforcement learning |
|---|---|---|---|---|---|---|---|---|---|---|
| Explainability | Yes | Yes | No | Yes | No | Yes | No | No | No | No |
| Error Analysis | Yes | Yes | No | Yes | No | Yes | No | No | No | No |
| Causal Analysis | Yes | No | No | Yes | No | Yes (up to 5 features due to computational cost) | No | No | No | No |
| Counterfactual | Yes | Yes | No | Yes | No | Yes | No | No | No | No |
The source code can be found here: https://github.com/affectlog/trustworthy-ai-toolbox/tree/main/trustworthyai
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
File details
Details for the file trustworthyai-0.35.0.tar.gz.
File metadata
- Download URL: trustworthyai-0.35.0.tar.gz
- Upload date:
- Size: 112.7 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b3679b7f9d352f064bf2dd9d4b5bb6d01b605baf51d1abd0b097142c30d921ea` |
| MD5 | `3d6844770d5a5c61e0b360df8a53d18f` |
| BLAKE2b-256 | `2f26ec15edf5e08d2a469d9ee296e5cef5cff71c2d71236bb8013ab1ae42f595` |
File details
Details for the file trustworthyai-0.35.0-py3-none-any.whl.
File metadata
- Download URL: trustworthyai-0.35.0-py3-none-any.whl
- Upload date:
- Size: 153.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.11.10
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `add9278cb590e064520b52f706627c4953f588c33de48fecd7a07fb1a3785524` |
| MD5 | `a8c20f3bec41062151b9474b116f73da` |
| BLAKE2b-256 | `c03b91a7dc938b89b3f9c097b1f0ddcf07840ad91a7b7fbd7712b2bde2f1e5e2` |