Interactive visualizations to assess fairness, explain models, and analyze errors in Machine Learning

Responsible-AI-Widgets

Responsible AI is an approach to assessing, developing, and deploying AI systems in a safe, trustworthy, and ethical manner, and to making responsible decisions and taking responsible actions.

Responsible-AI-Widgets provides a collection of model and data exploration and assessment user interfaces that enable a better understanding of AI systems. These interfaces empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.

Introducing Responsible AI Toolbox

The Responsible AI Toolbox is an open-source framework that helps data scientists and machine learning developers build machine-learning-powered products that are responsible and reliable. The toolbox supports the following activities:

  • Model Assessment, which involves determining how and why AI systems behave the way they do, understanding and diagnosing their issues, and using that knowledge to take targeted steps to improve their performance. Such steps can be encapsulated in the following workflow:

[Figure: model assessment workflow]

  • Decision-making, which involves explorations such as estimating how a real-world outcome changes in the presence of an intervention, or “interrogating” a model to determine which feature perturbations of a particular datapoint would change its output.

To achieve these capabilities, the toolbox integrates ideas and technologies from several open-source toolkits in the following areas:

  • Error Analysis powered by Error Analysis, which identifies cohorts of data with a higher error rate than the overall benchmark. These discrepancies might occur when the system or model underperforms for specific demographic groups or for input conditions that are infrequently observed in the training data.

  • Model Interpretability powered by InterpretML, which explains black-box models, helping users understand their model's global behavior or the reasons behind individual predictions.

  • Counterfactual Example Analysis powered by InterpretML DiCE, which shows feature-perturbed versions of the same datapoint that would have received a different prediction outcome (e.g., Taylor's loan was rejected by the model, but they would have received the loan had their income been $10,000 higher); see the sketch after this list.

  • Causal Analysis powered by EconML, which focuses on answering what-if-style questions for data-driven decision-making: how would revenue be affected if a corporation pursued a new pricing strategy? Would a new medication improve a patient's condition, all else being equal?
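As a concrete illustration of the counterfactual workflow, here is a minimal sketch using the dice-ml package behind InterpretML DiCE. The loan data and model below are invented stand-ins, and the exact arguments may differ across dice-ml versions:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
import dice_ml

# Stand-in tabular data: predict loan approval from income and age.
df = pd.DataFrame({
    "income":   [30, 45, 60, 80, 25, 90, 55, 40],   # in $1,000s
    "age":      [22, 35, 41, 50, 28, 47, 33, 39],
    "approved": [0, 0, 1, 1, 0, 1, 1, 0],
})
model = RandomForestClassifier(random_state=0).fit(
    df[["income", "age"]], df["approved"])

# Wrap the data and model in DiCE's interfaces, then request
# feature-perturbed versions of one rejected applicant that the
# model would have approved instead.
d = dice_ml.Data(dataframe=df,
                 continuous_features=["income", "age"],
                 outcome_name="approved")
m = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(d, m, method="random")
counterfactuals = explainer.generate_counterfactuals(
    df[["income", "age"]].iloc[[0]],   # the rejected applicant
    total_CFs=3,
    desired_class="opposite")
counterfactuals.visualize_as_dataframe(show_only_changes=True)
```

The printed rows show, for example, how much higher the applicant's income would need to be for the model to approve the loan.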

[Figure: Responsible AI Toolbox overview]

Responsible AI Toolbox is designed to achieve the following goals:

  • To help further accelerate engineering processes in machine learning by enabling practitioners to design customizable workflows and tailor Responsible AI dashboards that best fit their model assessment and data-driven decision-making scenarios.
  • To help model developers create end-to-end, fluid debugging experiences and navigate seamlessly through error identification and diagnosis by using interactive visualizations that identify errors, inspect the data, generate global and local model explanations, and potentially inspect problematic examples.
  • To help business stakeholders explore causal relationships in the data and make informed decisions in the real world.

This repository contains Jupyter notebooks with examples that showcase how to use these widgets. Get started here.

Useful Links

Individual Dashboards

Besides the customizable and modular Responsible AI Toolbox, Responsible-AI-Widgets hosts three individual dashboards that are specifically focused on error analysis, interpretability, and fairness assessment. Learn more: Error Analysis dashboard, Fairness dashboard, and Explanation dashboard.
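For example, the Fairness dashboard is launched by passing sensitive features along with true and predicted labels. A minimal sketch with stand-in data and model (keyword arguments as in the raiwidgets documentation):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from raiwidgets import FairnessDashboard

# Stand-in data: a synthetic binary task plus a synthetic sensitive
# feature used only to disaggregate the metrics by group.
X, y = make_classification(n_samples=500, random_state=0)
sensitive = np.random.default_rng(0).choice(["groupA", "groupB"], size=500)

X_train, X_test, y_train, y_test, s_train, s_test = train_test_split(
    X, y, sensitive, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Opens an interactive view comparing model performance and selection
# rates across the sensitive-feature groups.
FairnessDashboard(sensitive_features=s_test,
                  y_true=y_test,
                  y_pred=model.predict(X_test))
```

The Error Analysis and Explanation dashboards follow the same constructor-style pattern, taking the model, data, and labels (plus an explanation object where applicable).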

Supported Models

The Responsible AI Toolbox API supports models that are trained on datasets in Python numpy.array, pandas.DataFrame, iml.datatypes.DenseData, or scipy.sparse.csr_matrix format.

The explanation functions of Interpret-Community accept both models and pipelines as input, as long as the model or pipeline implements a predict or predict_proba function that conforms to the scikit-learn convention. If your model is not compatible, you can wrap its prediction function in a wrapper that transforms the output into the supported format (scikit-learn's predict or predict_proba) and pass that wrapper to your selected interpretability techniques.
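As an illustration, here is a minimal wrapper sketch; the third-party model and its score_samples method are hypothetical stand-ins for whatever non-conforming prediction API you need to adapt:

```python
import numpy as np

class ThirdPartyModel:
    """Hypothetical model with a non-scikit-learn API: score_samples
    returns only the probability of the positive class."""
    def score_samples(self, data):
        return 1.0 / (1.0 + np.exp(-np.asarray(data).sum(axis=1)))

class SklearnStyleWrapper:
    """Adapts ThirdPartyModel to the predict/predict_proba convention
    expected by the explanation functions."""
    def __init__(self, model):
        self.model = model

    def predict_proba(self, data):
        p1 = self.model.score_samples(data)
        # scikit-learn convention: one probability column per class.
        return np.column_stack([1.0 - p1, p1])

    def predict(self, data):
        return (self.model.score_samples(data) >= 0.5).astype(int)

wrapped = SklearnStyleWrapper(ThirdPartyModel())
X = np.array([[0.2, -0.1], [1.5, 0.3]])
print(wrapped.predict(X))         # hard class labels
print(wrapped.predict_proba(X))   # per-class probabilities
```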

If a pipeline script is provided, the explanation function assumes that running the pipeline script returns a prediction. The repository also supports models trained with the PyTorch, TensorFlow, and Keras deep learning frameworks.
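Note that scikit-learn Pipeline objects already expose predict and predict_proba end to end, so they can typically be passed in directly, as in this minimal sketch:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, random_state=0)

# Preprocessing and model in one object: the fitted pipeline itself
# conforms to the predict/predict_proba convention, so explanation
# functions can call it like any other model.
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())]).fit(X, y)
print(pipe.predict_proba(X[:3]))
```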

Other Use Cases

Tools within the Responsible AI Toolbox can also be used with AI models offered as APIs by providers such as Azure Cognitive Services. For example use cases, see the corresponding folders in the repository.

Maintainers

