VerifyML

An open-source toolkit to implement responsible AI workflows

VerifyML is an opinionated, open-source toolkit and workflow to help companies implement human-centric AI practices. It is built on 3 principles:

  • A git- and code-first approach to model development and maintenance.
  • Automatic generation of model cards - machine learning documents that provide context and transparency into a model's development and performance.
  • Model tests for validating performance of models across protected groups of interest, during development and in production.

Components

[Diagram: VerifyML dataflow]

At the core of the VerifyML workflow is a model card that captures 6 aspects of a model:

  • Model details
  • Considerations
  • Model / data parameters
  • Quantitative analysis
  • Explainability analysis
  • Fairness analysis

It is adapted from Google's Model Card Toolkit and expanded to include broader considerations such as fairness and explainability.
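
In the Python toolkit, these six aspects map onto top-level sections of the ModelCard object. A minimal sketch of that mapping (model_details, model_parameters, and fairness_analysis appear in the examples below; the remaining attribute names are assumptions to verify against the protobuf schema):

# Top-level sections of a scaffolded model card (the last three names
# are assumed -- check the schema in the proto directory)
model_card.model_details            # name, owners, version, ...
model_card.model_parameters         # model/data parameters and datasets
model_card.fairness_analysis        # fairness reports and tests
model_card.considerations           # users, use cases, limitations (assumed)
model_card.quantitative_analysis    # performance metrics (assumed)
model_card.explainability_analysis  # explainability reports (assumed)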

A model card editor provides a web-based interface to gather input and align stakeholders across product, data science, and compliance.

Our Python toolkit supports data science workflows, allowing a custom model to be built and logged within the model card framework. The package also contains performance and fairness tests for model diagnostics and reliability checks.

Because the model card is stored in a standard protobuf format, it can be translated into various outputs, including a model report, a trade-off comparison, and a test-results summary.

Installation

VerifyML is hosted on PyPI and can be installed with pip:
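
pip install verifyml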

Getting Started

Generate a model card

[Screenshot: VerifyML Model Card Editor]

The VerifyML card creator provides an easy way for teams to create and edit model cards in a WYSIWYG editor. Use it to bootstrap your model card or edit text records through a web browser. It is a client-side application; no data is stored on a server.

Alternatively, generate a model card with the python toolkit:

import verifyml.model_card_toolkit as mctlib

# Initialize the Model Card Toolkit with a path to store generated assets
mct = mctlib.ModelCardToolkit(output_dir="model_card_output", file_name="breast_cancer_diagnostic_model_card")
model_card = mct.scaffold_assets()

Populate the model card with details

# You can add free text fields
model_card.model_details.name = 'Breast Cancer Wisconsin (Diagnostic) Dataset'

# Or use helper classes. Here, X_train is assumed to be your training
# DataFrame, and mean_radius_train / mean_texture_train are plot images
# (e.g. base64-encoded PNGs) generated earlier in your workflow.
model_card.model_parameters.data.append(mctlib.Dataset())
model_card.model_parameters.data[0].graphics.description = (
  f'{len(X_train)} rows with {len(X_train.columns)} features')
model_card.model_parameters.data[0].graphics.collection = [
    mctlib.Graphic(image=mean_radius_train),
    mctlib.Graphic(image=mean_texture_train)
]

Save and export to HTML

# display comes from IPython, for rendering the report inside a notebook
from IPython import display

html = mct.export_format(output_file="example.html")
display.display(display.HTML(html))

Model Tests

Model tests provide an out-of-the-box way to run checks and analyses on performance, explainability, and fairness. The tests included in VerifyML are atomic functions that can be imported and run without a model card. Used with a model card, however, they provide a way to standardize objectives and check for intended or unintended model biases, while automating documentation and rendering the insights into a business-friendly report.

Currently, VerifyML provides 5 classes of tests:

  1. Subgroup Disparity Test - For a given metric, assert that the difference between the best and worst performing group is less than a specified threshold
  2. Min/Max Metric Threshold Test - For a given metric, assert that all groups should be below / above a specified threshold
  3. Perturbation Test - Assert that a given metric does not change significantly after perturbing on a specified input variable
  4. Feature Importance Test - Assert that certain specified variables are not included as the top n most important features
  5. Data Shift Test - Assert that the distributions of specified attributes are similar across two given datasets of interest

The detailed model tests README contains more information on the tests.

You can also easily create your own model tests by inheriting from the base model test class. See DEVELOPMENT for more details.
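
To give a flavor, here is a minimal sketch of a custom test; the base-class import path, field names, and run signature are all assumptions, so consult DEVELOPMENT for the actual interface:

from dataclasses import dataclass

# Hypothetical import path for the base class -- see DEVELOPMENT
from verifyml.model_tests.ModelTest import ModelTest

@dataclass
class NullRateTest(ModelTest):
    """Assert that a column's share of missing values stays below a threshold."""
    column: str = "age"
    threshold: float = 0.05

    def run(self, df) -> bool:
        # Compare the column's null rate against the configured threshold
        null_rate = df[self.column].isna().mean()
        self.result = {"null_rate": null_rate}
        self.passed = null_rate <= self.threshold
        return self.passed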

Example usage

from verifyml.model_tests.FEAT import SubgroupDisparity

# Ratio of false positive rates between age subgroups should not be more than 1.5
sgd_test = SubgroupDisparity(metric='fpr', method='ratio', threshold=1.5)
sgd_test.run(output) # test data with prediction results
sgd_test.plot(alpha=0.05)

Adding the test to the model card

import verifyml.model_card_toolkit as mctlib

mc_sgd_test = mctlib.Test()
mc_sgd_test.read_model_test(sgd_test)
# Attach the converted test to an existing fairness report on the card
model_card.fairness_analysis.fairness_reports[0].tests = [mc_sgd_test]
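
With the test recorded on the card, export the report again so the fairness results are rendered; this reuses the export call shown earlier (depending on the toolkit version, you may need to save the updated card first):

# Re-render the report so the attached test results appear
html = mct.export_format(output_file="example.html")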

Schema

Model cards are stored in a protobuf format. The reference model card protobuf schema can be found in the proto directory. For convenience, a translated copy in JSON Schema format is also available in the schema folder.
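
One practical use of the JSON Schema copy is validating a serialized model card before sharing it. A minimal sketch using the third-party jsonschema package; both file paths are illustrative, not the repository's actual filenames:

import json
import jsonschema  # pip install jsonschema

# Illustrative paths -- point these at the real schema and card files
with open("schema/model_card.schema.json") as f:
    schema = json.load(f)
with open("model_card_output/model_card.json") as f:
    card = json.load(f)

# Raises jsonschema.exceptions.ValidationError if the card doesn't conform
jsonschema.validate(instance=card, schema=schema)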

Templates

Model cards can be rendered into various reports through the use of templates. The template folder contains two HTML templates, a default model report and a comparison template, as well as a default Markdown model report.

Contributions and Development

Contributions are always welcome; check out CONTRIBUTING.

The package and its functionality can be easily extended to meet the needs of a team. Check out DEVELOPMENT for more info.

Prior Art

The model card in VerifyML is adapted from Google's Model Card Toolkit. It is backward compatible with v0.0.2 and expands on it by adding sections on explainability and fairness. You can select the desired rendering template via the template_path argument when calling mct.export_format. For example:

mct.export_format(output_file="example.md", template_path="path_to_my_template")

View the templates' README for more information on creating your own Jinja templates.

References

[1] M. Mitchell et al., "Model Cards for Model Reporting", https://arxiv.org/abs/1810.03993

License

VerifyML is licensed under the Apache License, Version 2.0. See LICENSE for the full license text.

Generating Docs

Docs are generated using pydoc-markdown, and our configuration is specified in pydoc-markdown.yml. The package reads the yml file, then converts the referenced READMEs and code files into corresponding mkdocs markdown files, together with a mkdocs.yml config file. These converted files can be found in a build/docs directory, which will appear after the commands below are run.

Preview

To preview the docs locally, run

./docs.sh serve

This creates doc files in build/docs/, then serves them at localhost:8000.

Build

To build the HTML files, run

./docs.sh build

This creates doc files in build/docs/, then creates their HTML equivalents in build/html/.

Details

To render Jupyter Notebooks in the docs, we use the mkdocs-jupyter plugin, and reference the notebooks in pydoc-markdown.yml (e.g. source: example.ipynb in one of the entries).

However, because pydoc-markdown converts everything to Markdown files, only the notebook text would show up in the built docs. Thus, some intermediate steps (/ hacks) are required for the notebook to render correctly:

  1. Build the docs, converting the notebook text into a Markdown file (e.g. build/docs/example.md)
  2. Rename the built file's extension from Markdown back into a notebook format (e.g. mv example.md example.ipynb in bash)
  3. Edit the built mkdocs.yml file such that the notebook's entry points to the renamed file in step 2 (this is done by convert_md_to_ipynb.py)

./docs.sh handles these steps.
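
For reference, steps 2 and 3 boil down to a file rename plus a small config rewrite. A rough Python sketch of that post-processing (illustrative only; the repository's convert_md_to_ipynb.py may differ):

from pathlib import Path

# Step 2: rename the built Markdown file back to a notebook extension
md_path = Path("build/docs/example.md")
md_path.rename(md_path.with_suffix(".ipynb"))

# Step 3: point the built mkdocs.yml entry at the renamed notebook
config = Path("build/docs/mkdocs.yml")
config.write_text(config.read_text().replace("example.md", "example.ipynb"))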
