Tools for the statistical disclosure control of machine learning models
SACRO-ML
A collection of tools and resources for managing the statistical disclosure control of trained machine learning models. For a brief introduction, see Smith et al. (2022).
The sacroml package provides:
- A variety of privacy attacks for assessing the privacy risks of trained machine learning models.
- The safemodel package: a suite of open-source wrappers for common machine learning frameworks, including scikit-learn and Keras. It is designed for use by researchers in Trusted Research Environments (TREs) where disclosure control methods must be implemented. Safemodel aims to give researchers greater confidence that their models comply with disclosure control requirements (a usage sketch follows this list).
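To illustrate the intended workflow, the sketch below shows a safemodel wrapper being used in place of a plain scikit-learn estimator. The import path and the names SafeDecisionTreeClassifier, preliminary_check and request_release are assumptions based on the safemodel design described above; consult the package examples for the exact API.

from sklearn.datasets import load_iris

# NOTE: this import path and the method names below are assumptions; see the package examples.
from sacroml.safemodel.classifiers import SafeDecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Train a wrapped decision tree exactly as you would the scikit-learn original.
model = SafeDecisionTreeClassifier(random_state=1)
model.fit(X, y)

# Ask the wrapper whether the current hyperparameters look disclosive.
msg, disclosive = model.preliminary_check()
print(msg, disclosive)

# Request release: save the model and produce a report for TRE output checkers.
model.request_release(path="outputs", ext="pkl")

The point of the wrapper is that the checking and reporting steps happen alongside normal model training, rather than as a separate manual process at release time.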
Installation
Install sacroml and manually copy the examples.
To install only the base package, which includes the attacks used for assessing privacy:
$ pip install sacroml
To additionally install the safemodel package:
$ pip install sacroml[safemodel]
Note: macOS users may need to install libomp due to a dependency on XGBoost:
$ brew install libomp
Running
See the examples.
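For orientation before working through the examples, here is a minimal sketch of running one of the privacy attacks against a trained scikit-learn model. The module paths and the Target and WorstCaseAttack signatures are assumptions in the style of the project's examples and may differ from the current release.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# NOTE: these import paths and signatures are assumptions; see the package examples.
from sacroml.attacks.target import Target
from sacroml.attacks.worst_case_attack import WorstCaseAttack

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

# Train the model whose privacy risk is to be assessed.
model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)

# Wrap the model and its train/test data, then run a worst-case membership inference attack.
target = Target(model=model, X_train=X_train, y_train=y_train, X_test=X_test, y_test=y_test)
attack = WorstCaseAttack(n_reps=10)
output = attack.attack(target)  # result and report handling follows the package examples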
Acknowledgement
This work was funded by UK Research and Innovation under Grant Numbers MC_PC_21033 and MC_PC_23006 as part of Phase 1 of the DARE UK (Data and Analytics Research Environments UK) programme, delivered in partnership with Health Data Research UK (HDR UK) and Administrative Data Research UK (ADR UK). The specific projects were Semi-Automatic checking of Research Outputs (SACRO; MC_PC_23006) and Guidelines and Resources for AI Model Access from TrusTEd Research environments (GRAIMATTER; MC_PC_21033). This project has also been supported by the MRC and EPSRC [grant number MR/S010351/1]: PICTURES.