Calibrate differentially private algorithms to operational privacy risk measures

Project description

riskcal

⚠️ This is a research prototype. Avoid using it in production, or do so with extra care.


The library provides tools for calibrating the noise scale in (epsilon, delta)-DP mechanisms to one of two notions of operational attack risk (attack accuracy/advantage, or attack TPR and FPR) instead of to the (epsilon, delta) parameters, as well as for efficiently measuring these notions. Calibrating directly to attack risk makes it possible to reduce the noise scale while keeping the targeted attack risk at the same level.
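As background on how (epsilon, delta) relates to attack risk: a standard conversion bounds the advantage (attack TPR minus FPR) of any attack against an (epsilon, delta)-DP mechanism by (e^epsilon − 1 + 2·delta)/(e^epsilon + 1). A minimal sketch of this conversion in plain Python (not part of the riskcal API):

import math

def advantage_bound(epsilon, delta):
    # Worst-case attack advantage (TPR - FPR) implied by an (epsilon, delta)-DP guarantee
    return (math.exp(epsilon) - 1 + 2 * delta) / (math.exp(epsilon) + 1)

print(advantage_bound(epsilon=1.0, delta=1e-5))  # approx. 0.46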

Using the Library

Install with:

pip install riskcal

Quickstart

Measuring f-DP / Getting the Trade-Off Curve for any DP Mechanism

To measure the attack trade-off curve (equivalent to the attack's receiver operating characteristic, or ROC, curve) for DP-SGD, you can run:

import riskcal
import numpy as np

noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alpha,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
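The returned beta values are the lowest attack false negative rates achievable at the false positive rates in alpha, so a bound on attack TPR follows by post-processing the output. A small sketch, reusing alpha and beta from the snippet above:

tpr = 1 - beta  # highest attack TPR achievable at each FPR in alpha
for a, t in zip(alpha, tpr):
    print(f"attack TPR at FPR={a:.2f}: at most {t:.3f}")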

The library also provides an opacus-compatible accountant which uses Connect-the-Dots accounting from Google's DP accounting library, with extra methods for retrieving the trade-off curve and the attack advantage. Thus, the above snippet is equivalent to:

import riskcal
import numpy as np

noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

acct = riskcal.dpsgd.CTDAccountant()
for _ in range(num_steps):
    acct.step(noise_multiplier=noise_multiplier, sample_rate=sample_rate)

alpha = np.array([0.01, 0.05, 0.1])
beta  = acct.get_beta(alpha=alpha)
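The accountant is described above as also exposing the attack advantage; as a hedged alternative that relies only on get_beta from this snippet, the advantage can be approximated by maximizing 1 − alpha − beta over a dense grid of FPR values:

alphas = np.linspace(0, 1, 1001)
betas = acct.get_beta(alpha=alphas)
# Attack advantage = max over alpha of (TPR - FPR) = max of (1 - alpha - beta)
print(np.max(1 - alphas - betas))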

You can also get the trade-off curve for any DP mechanism supported by Google's DP accounting library, given its privacy loss distribution (PLD) object:

import riskcal
import numpy as np

from dp_accounting.pld.privacy_loss_distribution import from_gaussian_mechanism
from dp_accounting.pld.privacy_loss_distribution import from_laplace_mechanism 

pld = from_gaussian_mechanism(1.0).compose(from_laplace_mechanism(0.1))

alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.conversions.get_beta_from_pld(pld, alpha=alpha)
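
For comparison, the same PLD object can also be summarized as an (epsilon, delta) guarantee, which is strictly less informative than the full trade-off curve. A sketch of the comparison, reusing pld and alpha from above; the delta value is an arbitrary choice for illustration:

delta = 1e-5  # arbitrary choice for illustration
epsilon = pld.get_epsilon_for_delta(delta)

# Trade-off lower bound implied by (epsilon, delta)-DP alone; the PLD-based
# beta above is at least as large, i.e., the full curve is tighter.
beta_dp = np.maximum.reduce([
    np.zeros_like(alpha),
    1 - delta - np.exp(epsilon) * alpha,
    np.exp(-epsilon) * (1 - delta - alpha),
])
print(epsilon, beta_dp)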

Calibrating DP-SGD to attack advantage or FNR/FPR

To calibrate the noise scale in DP-SGD to a given attack advantage, run:

import riskcal

sample_rate = 0.002
num_steps = 10000

noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_advantage(
    advantage=0.1,
    sample_rate=sample_rate,
    num_steps=num_steps
)
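To sanity-check the result, the returned noise multiplier can be plugged back into the trade-off curve measurement from the Quickstart and the advantage recovered numerically (a sketch using get_beta_for_dpsgd):

import numpy as np

alphas = np.linspace(0, 1, 1001)
betas = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alphas,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
# Should be close to (and not exceed) the targeted advantage of 0.1
print(np.max(1 - alphas - betas))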

To calibrate the noise scale in DP-SGD to a given attack FPR (alpha) and FNR (beta), run:

import riskcal

sample_rate = 0.002
num_steps = 10000

noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_err_rates(
    beta=0.2,
    alpha=0.01,
    sample_rate=sample_rate,
    num_steps=num_steps
)
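As with the advantage calibration, the result can be checked by measuring the trade-off curve at the targeted FPR (a sketch using get_beta_for_dpsgd from the Quickstart):

import numpy as np

beta = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=np.array([0.01]),
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)
# Should be close to the targeted FNR of 0.2 at FPR 0.01
print(beta)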
