A toolkit for tools and techniques related to the privacy and compliance of AI models.
Reason this release was yanked:
Distribution package error; fixed in version 0.0.2
Project description
ai-privacy-toolkit
A toolkit for tools and techniques related to the privacy and compliance of AI models.
The first release of this toolkit contains a single module called anonymization. This module provides methods for anonymizing ML model training data, so that when a model is retrained on the anonymized data, the model itself will also be considered anonymous. This may help exempt the model from certain obligations and restrictions set out in data protection regulations such as GDPR and CCPA.
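One common way to realize this is k-anonymity: generalizing or suppressing the quasi-identifier features of the training set so that each record is indistinguishable from at least k-1 others, and then retraining the model on the anonymized data. The snippet below is a minimal, self-contained sketch of that anonymize-then-retrain idea in plain pandas/scikit-learn; it is not the toolkit's own API, and the column names, generalization rules, and value of k are illustrative assumptions only.

```python
# Illustrative sketch only -- this is NOT the ai-privacy-toolkit API.
# It demonstrates the anonymize-then-retrain idea: coarsen quasi-identifiers
# (here 'age' and 'zip', both hypothetical) so records cannot be singled out,
# then retrain an ordinary model on the anonymized data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

K = 5  # minimum group size for each combination of quasi-identifier values

# Hypothetical training data: 'age' and 'zip' are quasi-identifiers,
# 'feature' is an ordinary feature, 'label' is the prediction target.
df = pd.DataFrame({
    "age": [23, 25, 31, 38, 41, 44, 52, 57, 60, 63] * 3,
    "zip": ["10001", "10002", "10003", "10004", "10005",
            "20001", "20002", "20003", "20004", "20005"] * 3,
    "feature": range(30),
    "label": [0, 1] * 15,
})

def anonymize(data: pd.DataFrame, k: int) -> pd.DataFrame:
    """Generalize quasi-identifiers, then suppress groups smaller than k."""
    out = data.copy()
    # Generalize: 20-year age bands, zip codes truncated to their first digit.
    band = out["age"] // 20 * 20
    out["age"] = band.astype(str) + "-" + (band + 19).astype(str)
    out["zip"] = out["zip"].str[:1] + "****"
    # Suppress records whose quasi-identifier group is still smaller than k.
    group_size = out.groupby(["age", "zip"])["label"].transform("size")
    return out[group_size >= k]

anon = anonymize(df, K)

# Retrain on the anonymized data (generalized quasi-identifiers one-hot encoded).
X = pd.get_dummies(anon[["age", "zip", "feature"]], columns=["age", "zip"])
model = RandomForestClassifier(random_state=0).fit(X, anon["label"])
print(f"kept {len(anon)} of {len(df)} records after anonymization")
```

The toolkit's anonymization module performs the equivalent step with its own classes and parameters; refer to the official documentation below for the actual API.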
Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/
Related toolkits:
ai-minimization-toolkit: A toolkit for reducing the amount of personal data needed to perform predictions with a machine learning model.
differential-privacy-library: A general-purpose library for experimenting with, investigating and developing applications in, differential privacy.
adversarial-robustness-toolbox: A Python library for Machine Learning Security.
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distribution
Hashes for ai_privacy_toolkit-0.0.1-py3-none-any.whl
Algorithm | Hash digest
---|---
SHA256 | dceefd94baddc2137d20c9296a99d1cec6fb2541fd5a72c1e5297d73fbc46479
MD5 | f15f7cb6cc8a42e29135cadaadd329bc
BLAKE2b-256 | f84b1a73488b9ad432772fe435c9f9fcd1913d194c59bc913a24238cdca64bf7