
Project description


ai-privacy-toolkit


A toolkit for tools and techniques related to the privacy and compliance of AI models.

The anonymization module contains methods for anonymizing ML model training data, so that when a model is retrained on the anonymized data, the model itself will also be considered anonymous. This may help exempt the model from certain obligations and restrictions set out in data protection regulations such as GDPR and CCPA.
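
A minimal sketch of the intended flow, assuming the `Anonymize` class and `ArrayDataset` wrapper under the toolkit's `apt` package (exact class names and signatures should be verified against the official documentation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# These imports assume the toolkit's documented `apt` package layout;
# check the official docs for the exact names and parameters.
from apt.anonymization import Anonymize
from apt.utils.datasets import ArrayDataset

rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 5))
y_train = rng.integers(0, 2, size=200)

# Train an initial model on the original data.
model = RandomForestClassifier(random_state=0).fit(x_train, y_train)

# k-anonymize the training data on the chosen quasi-identifier columns,
# using the model's own predictions to guide the generalization.
anonymizer = Anonymize(10, [0, 1, 3])  # k=10, quasi-identifier indices
anonymized = anonymizer.anonymize(ArrayDataset(x_train, model.predict(x_train)))

# Retrain on the anonymized data; it is this retrained model that can
# be considered anonymous.
anonymous_model = RandomForestClassifier(random_state=0).fit(anonymized, y_train)
```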

The minimization module contains methods to help ML models adhere to the data minimization principle in GDPR. It makes it possible to reduce the amount of personal data needed to perform predictions with a machine learning model, while still enabling the model to make accurate predictions. This is done by removing or generalizing some of the input features.
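
A minimal sketch under the same assumptions about the `apt` package layout, using a `GeneralizeToRepresentative` minimizer with a scikit-learn-style fit/transform interface (verify exact names and parameters against the documentation):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# This import assumes the toolkit's documented `apt` package layout;
# check the official docs for the exact names and parameters.
from apt.minimization import GeneralizeToRepresentative

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# Fit the minimizer on the model's own predictions, allowing the
# generalization to retain at least 90% of the original accuracy.
minimizer = GeneralizeToRepresentative(model, target_accuracy=0.9)
minimizer.fit(X, model.predict(X))

# The transformed data replaces generalized features with representative
# values, so predictions require less of the original personal data.
X_generalized = minimizer.transform(X)
```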

The dataset assessment module implements a tool for privacy assessment of synthetic datasets intended for use in AI model training.

Official ai-privacy-toolkit documentation: https://ai-privacy-toolkit.readthedocs.io/en/latest/

Installation: pip install ai-privacy-toolkit

For more information or help using or improving the toolkit, please contact Abigail Goldsteen at abigailt@il.ibm.com, or join our Slack channel: https://aip360.mybluemix.net/community.

We welcome new contributors! If you're interested, take a look at our contribution guidelines.

Related toolkits:

ai-minimization-toolkit: migrated into this toolkit (see the minimization module above).

differential-privacy-library: A general-purpose library for experimenting with, investigating and developing applications in, differential privacy.

adversarial-robustness-toolbox: A Python library for Machine Learning Security. Includes an attack module called inference that contains privacy attacks on ML models (membership inference, attribute inference, model inversion and database reconstruction) as well as a privacy metrics module that contains membership leakage metrics for ML models.
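
As an illustration of that inference module, here is a minimal sketch of a black-box membership inference attack, assuming ART's `SklearnClassifier` wrapper and `MembershipInferenceBlackBox` attack (check the ART documentation for the exact signatures):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# These imports follow ART's published package layout; verify the exact
# signatures against the ART documentation.
from art.estimators.classification import SklearnClassifier
from art.attacks.inference.membership_inference import MembershipInferenceBlackBox

rng = np.random.default_rng(0)
x_train, y_train = rng.normal(size=(500, 8)), rng.integers(0, 2, size=500)
x_test, y_test = rng.normal(size=(500, 8)), rng.integers(0, 2, size=500)

target = SklearnClassifier(model=RandomForestClassifier().fit(x_train, y_train))

# Train the attack on samples with known membership status, then infer
# membership for held-back training samples (1 = predicted member).
attack = MembershipInferenceBlackBox(target, attack_model_type="rf")
attack.fit(x_train[:250], y_train[:250], x_test[:250], y_test[:250])
membership = attack.infer(x_train[250:], y_train[250:])
```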

Citation

Abigail Goldsteen, Ola Saadi, Ron Shmelkin, Shlomit Shachor, Natalia Razinkov, "AI privacy toolkit", SoftwareX, Volume 22, 2023, 101352, ISSN 2352-7110, https://doi.org/10.1016/j.softx.2023.101352.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ai-privacy-toolkit-0.2.1.tar.gz (59.5 kB)

Uploaded Source

Built Distribution

ai_privacy_toolkit-0.2.1-py3-none-any.whl (57.4 kB)

Uploaded Python 3

File details

Details for the file ai-privacy-toolkit-0.2.1.tar.gz.

File metadata

  • Download URL: ai-privacy-toolkit-0.2.1.tar.gz
  • Upload date:
  • Size: 59.5 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.2 CPython/3.9.6

File hashes

Hashes for ai-privacy-toolkit-0.2.1.tar.gz

  • SHA256: 83d33baaeff01e155a509a5a2e5810e944efcda60781d4d6dd48180745c94c1c
  • MD5: 5edd8cd2302c18afd6acfe7442827322
  • BLAKE2b-256: 848a1d8ab9b83d5bc23d239b15181216fa5711158db9d56556ed35acf76fd07b

See more details on using hashes here.
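
For example, the published SHA256 digest can be checked against a downloaded copy of the sdist using only the Python standard library:

```python
import hashlib

# SHA256 digest published above for ai-privacy-toolkit-0.2.1.tar.gz.
EXPECTED = "83d33baaeff01e155a509a5a2e5810e944efcda60781d4d6dd48180745c94c1c"

with open("ai-privacy-toolkit-0.2.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("OK" if digest == EXPECTED else "MISMATCH: do not install this file")
```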

File details

Details for the file ai_privacy_toolkit-0.2.1-py3-none-any.whl.


File hashes

Hashes for ai_privacy_toolkit-0.2.1-py3-none-any.whl

  • SHA256: 4ff61661149fa95af8531816f251f0bda02419d5e9809146201b1401b16a3826
  • MD5: 4536d331cbb43c454b0a7d014fea6b8b
  • BLAKE2b-256: 16863983af31404861c71c2f1c46abf1d925609400c6895fd9c906d3c23945b2

See more details on using hashes here.
