Calibrate differentially private algorithms to operational privacy risk measures
riskcal
⚠️ This is a research prototype. Avoid using it in production, or be extra careful if you do.
The library provides tools for calibrating the noise scale of (epsilon, delta)-DP mechanisms to one of two notions of operational attack risk (attack accuracy/advantage, or attack TPR and FPR) instead of to the (epsilon, delta) parameters, as well as for efficiently measuring these notions. Calibrating directly to attack risk enables a lower noise scale at the same level of targeted attack risk.
Using the Library
Install with:
pip install riskcal
Quickstart
Measuring f-DP / Getting the Trade-Off Curve for any DP Mechanism
To measure the attack trade-off curve (equivalent to the attack's receiver operating characteristic curve) for DP-SGD, run:
import riskcal
import numpy as np

# DP-SGD parameters.
noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

# Attack FPR values (alpha) at which to evaluate the trade-off curve.
alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alpha,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
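Since beta is the attack's false negative rate at each alpha, the attack's TPR follows directly. A minimal post-processing sketch (not part of the library's API, just numpy):

# Attack TPR at each FPR alpha; TPR = 1 - FNR.
attack_tpr = 1 - beta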
The library also provides an opacus-compatible accountant which uses the Connect-the-Dots accounting method from Google's DP accounting library, with extra methods to get the trade-off curve and advantage. Thus, the following snippet is equivalent to the one above:
import riskcal
import numpy as np

noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

# Record one accounting step per DP-SGD iteration.
acct = riskcal.dpsgd.CTDAccountant()
for _ in range(num_steps):
    acct.step(noise_multiplier=noise_multiplier, sample_rate=sample_rate)

alpha = np.array([0.01, 0.05, 0.1])
beta = acct.get_beta(alpha=alpha)
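As a sanity check (our own sketch, not a documented example; the tolerance below is an arbitrary choice), the accountant-based curve can be compared against the direct computation from the previous snippet:

# Both paths use Connect-the-Dots accounting, so the curves should
# agree up to numerical accounting error.
beta_direct = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alpha,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
assert np.allclose(beta, beta_direct, atol=1e-3)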
You can also get the trade-off curve for any DP mechanism supported by Google's DP accounting library, given its privacy loss distribution (PLD) object:
import riskcal
import numpy as np
from dp_accounting.pld.privacy_loss_distribution import (
    from_gaussian_mechanism,
    from_laplace_mechanism,
)

# Gaussian mechanism (noise standard deviation 1.0) composed with
# a Laplace mechanism (scale 0.1).
pld = from_gaussian_mechanism(1.0).compose(from_laplace_mechanism(0.1))

alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.conversions.get_beta_from_pld(pld, alpha=alpha)
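For comparison, the same PLD object also yields the classical (epsilon, delta) view of the composed mechanism; delta = 1e-5 below is an arbitrary illustration value:

# Classical accounting view of the same composition.
epsilon = pld.get_epsilon_for_delta(1e-5)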
Calibrating DP-SGD to Attack Advantage or FNR/FPR
To calibrate the noise scale in DP-SGD to a given attack advantage, run:
import riskcal

sample_rate = 0.002
num_steps = 10000

# Noise multiplier achieving the target attack advantage of 0.1.
noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_advantage(
    advantage=0.1,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
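A hedged sanity check, assuming the standard identity that the optimal attack's advantage equals the maximum of TPR - FPR = (1 - beta(alpha)) - alpha over the trade-off curve:

import numpy as np

# Evaluate the trade-off curve on a fine grid of attack FPRs.
alphas = np.linspace(0, 1, 1001)
betas = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alphas,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
print(np.max(1 - alphas - betas))  # should be close to the target 0.1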
To calibrate the noise scale in DP-SGD to a given attack FPR (alpha) and FNR (beta), run:
import riskcal

sample_rate = 0.002
num_steps = 10000

# Noise multiplier targeting attack FNR beta = 0.2 at FPR alpha = 0.01.
noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_err_rates(
    beta=0.2,
    alpha=0.01,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
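To double-check the calibration, a sketch reusing get_beta_for_dpsgd from the quickstart:

import numpy as np

# FNR achieved by the best attack at the target FPR.
beta_check = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=np.array([0.01]),
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
# beta_check should be at or just above the target FNR of 0.2.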