
Compute f-DP trade-off curves and calibrate differentially private algorithms to operational privacy risk measures


riskcal



⚠️ This is a research prototype. Avoid using it in production, or be extra careful if you do.


The library provides tools for computing the f-DP trade-off curves of common differentially private algorithms, and for calibrating their noise scale to notions of operational privacy risk (attack accuracy/advantage, or attack TPR and FPR) instead of to the (epsilon, delta) parameters. This enables reducing the noise scale while maintaining the same targeted attack risk.
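
For intuition, the mapping from a trade-off curve to attack advantage can be sketched with plain NumPy. This is a generic f-DP identity, not riskcal's API: for a pure eps-DP mechanism, the trade-off curve is f(alpha) = max(0, 1 - e^eps * alpha, e^-eps * (1 - alpha)), and the attack advantage max over alpha of (1 - alpha - f(alpha)) has the closed form (e^eps - 1)/(e^eps + 1).

```python
import numpy as np

# Trade-off curve of a pure eps-DP mechanism: the attack FNR (beta)
# achievable at each FPR (alpha). Standard f-DP identity, not a riskcal API.
eps = 1.0
alpha = np.linspace(0.0, 1.0, 100_001)
beta = np.maximum.reduce([
    np.zeros_like(alpha),
    1.0 - np.exp(eps) * alpha,
    np.exp(-eps) * (1.0 - alpha),
])

# Attack advantage = max over FPR of (TPR - FPR) = max(1 - alpha - beta).
advantage = np.max(1.0 - alpha - beta)

# Closed form for eps-DP, against which the grid computation can be checked.
closed_form = (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
```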

References

The library implements methods described in the associated paper, published at NeurIPS 2024:

  • The direct method for computing the trade-off curve based on privacy loss random variables is described in Algorithm 1.
  • The mapping between f-DP and operational privacy risk, and the idea of direct noise calibration to risk instead of the standard calibration to a given (epsilon, delta), are described in Sections 2 and 3.
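
The calibration idea can be illustrated on the pure eps-DP case, where the risk-to-parameter inversion is closed-form. This is a sketch of the principle only, not riskcal's implementation: inverting advantage = (e^eps - 1)/(e^eps + 1) gives eps = log((1 + advantage)/(1 - advantage)).

```python
import numpy as np

# Direct calibration to risk for a pure eps-DP mechanism (illustration only):
# find the eps that meets a target attack advantage, instead of fixing
# (eps, delta) up front.
target_advantage = 0.1
eps = np.log((1.0 + target_advantage) / (1.0 - target_advantage))

# Round trip: the calibrated eps yields exactly the target advantage.
achieved = (np.exp(eps) - 1.0) / (np.exp(eps) + 1.0)
```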

If you make use of the library or methods, please cite:

@article{kulynych2024attack,
  title={Attack-aware noise calibration for differential privacy},
  author={Kulynych, Bogdan and Gomez, Juan F and Kaissis, Georgios and du Pin Calmon, Flavio and Troncoso, Carmela},
  journal={Advances in Neural Information Processing Systems},
  volume={37},
  pages={134868--134901},
  year={2024}
}

Using the Library

Install with:

pip install riskcal

Quickstart

Computing f-DP / Getting the Trade-Off Curve for a DP Mechanism

To measure the attack trade-off curve (equivalent to the attack's receiver operating characteristic (ROC) curve) for DP-SGD, you can run:

import riskcal
import numpy as np

noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.dpsgd.get_beta_for_dpsgd(
    alpha=alpha,
    noise_multiplier=noise_multiplier,
    sample_rate=sample_rate,
    num_steps=num_steps,
)

The library also provides an opacus-compatible accountant which uses the Connect the Dots accounting from Google's DP accounting library, with extra methods to get the trade-off curve and advantage. Thus, the following snippet is equivalent to the one above:

import riskcal
import numpy as np

noise_multiplier = 0.5
sample_rate = 0.002
num_steps = 10000

acct = riskcal.dpsgd.CTDAccountant()
for _ in range(num_steps):
    acct.step(noise_multiplier=noise_multiplier, sample_rate=sample_rate)

alpha = np.array([0.01, 0.05, 0.1])
beta  = acct.get_beta(alpha=alpha)

You can also get the trade-off curve for any DP mechanism supported by Google's DP accounting library, given its privacy loss distribution (PLD) object:

import riskcal
import numpy as np

from dp_accounting.pld.privacy_loss_distribution import from_gaussian_mechanism
from dp_accounting.pld.privacy_loss_distribution import from_laplace_mechanism

pld = from_gaussian_mechanism(1.0).compose(from_laplace_mechanism(0.1))

alpha = np.array([0.01, 0.05, 0.1])
beta = riskcal.conversions.get_beta_from_pld(pld, alpha=alpha)
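
For the Gaussian part of such a composition, there is a closed form against which a PLD-based curve can be sanity-checked: a Gaussian mechanism with sensitivity 1 and noise scale sigma is (1/sigma)-GDP, so beta(alpha) = Phi(Phi^-1(1 - alpha) - 1/sigma). A hedged pure-SciPy sketch, assuming the argument to from_gaussian_mechanism above is the noise standard deviation for a sensitivity-1 query:

```python
import numpy as np
from scipy.stats import norm

# Closed-form trade-off curve of a sensitivity-1 Gaussian mechanism with
# noise scale sigma, i.e. (1/sigma)-GDP (Dong, Roth, and Su's Gaussian DP):
# beta(alpha) = Phi(Phi^-1(1 - alpha) - 1/sigma).
sigma = 1.0
alpha = np.array([0.01, 0.05, 0.1])
beta = norm.cdf(norm.ppf(1.0 - alpha) - 1.0 / sigma)
```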

Calibrating DP-SGD to Attack Risk

To calibrate noise scale in DP-SGD to a given advantage, run:

import riskcal

sample_rate = 0.002
num_steps = 10000

noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_advantage(
    advantage=0.1,
    sample_rate=sample_rate,
    num_steps=num_steps
)
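
For intuition about what such calibration does, the plain (non-subsampled) Gaussian-mechanism case has a closed form; riskcal's routine handles the subsampled DP-SGD setting numerically. A sketch under that simplification: a sensitivity-1 Gaussian mechanism with noise scale sigma has advantage 2 * Phi(1/(2 * sigma)) - 1, which inverts directly.

```python
from scipy.stats import norm

# Calibrate the noise scale of a plain sensitivity-1 Gaussian mechanism to a
# target attack advantage, via the mu-GDP closed form
# advantage(mu) = 2 * Phi(mu / 2) - 1, with mu = 1 / sigma.
target_advantage = 0.1
mu = 2.0 * norm.ppf((1.0 + target_advantage) / 2.0)
sigma = 1.0 / mu

# Round trip: the calibrated sigma achieves the target advantage.
achieved = 2.0 * norm.cdf(1.0 / (2.0 * sigma)) - 1.0
```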

To calibrate noise scale in DP-SGD to a given attack FNR (beta) at a given FPR (alpha), run:

import riskcal

sample_rate = 0.002
num_steps = 10000

noise_multiplier = riskcal.dpsgd.find_noise_multiplier_for_err_rates(
    beta=0.2,
    alpha=0.01,
    sample_rate=sample_rate,
    num_steps=num_steps
)
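
Again for intuition, the plain (non-subsampled) Gaussian case inverts in closed form; this is a sketch under that simplification, not the DP-SGD routine above. Solving beta = Phi(Phi^-1(1 - alpha) - mu) for mu gives mu = Phi^-1(1 - alpha) - Phi^-1(beta), with sigma = 1/mu.

```python
from scipy.stats import norm

# Calibrate a plain sensitivity-1 Gaussian mechanism to target error rates:
# solve beta = Phi(Phi^-1(1 - alpha) - mu) for mu = 1 / sigma.
alpha, beta = 0.01, 0.2
mu = norm.ppf(1.0 - alpha) - norm.ppf(beta)  # requires beta < 1 - alpha
sigma = 1.0 / mu

# Round trip: the calibrated sigma reproduces the target beta at alpha.
achieved_beta = norm.cdf(norm.ppf(1.0 - alpha) - 1.0 / sigma)
```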
