# fairlearn-fhe

Drop-in encrypted Fairlearn metrics. Identical API surface; ciphertext arithmetic over CKKS via TenSEAL.
fairlearn-fhe is an early-stage project maintained at
https://github.com/BAder82t/fairlearn-fhe.
```python
# plaintext
from fairlearn.metrics import demographic_parity_difference

disp = demographic_parity_difference(y_true, y_pred, sensitive_features=A)
```

```python
# encrypted (one import change)
from fairlearn_fhe.metrics import demographic_parity_difference
from fairlearn_fhe import build_context, encrypt

ctx = build_context()
y_p_enc = encrypt(ctx, y_pred)
disp = demographic_parity_difference(y_true, y_p_enc, sensitive_features=A)
```
`disp` matches the plaintext result up to CKKS noise: under the default settings the absolute error stays below 1e-4.
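For reference, this is the plaintext computation the encrypted path must reproduce, written out in plain NumPy (a sketch; the variable names are illustrative, not part of either package's API):

```python
import numpy as np

def dp_difference(y_pred, groups):
    """Demographic parity difference: max - min selection rate over groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(dp_difference(y_pred, groups))  # 0.75 - 0.25 = 0.5
```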
## Trust models
Two modes are supported. The default follows the regaudit-fhe convention; the second goes further.
### Mode A — encrypted predictions, plaintext sensitive features (default)

- Encrypted: `y_pred`.
- Plaintext: `y_true`, `sensitive_features`, group counts.
- Cost: one ct×pt multiply + slot-sum per group → depth 1.
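The shape of the Mode A circuit can be emulated in plaintext NumPy (an illustration of the depth-1 structure described above, not the library's internals):

```python
import numpy as np

# Plaintext emulation of the Mode A circuit: the encrypted y_pred vector is
# multiplied slot-wise by a plaintext 0/1 group mask (ct x pt, depth 1),
# slot-summed, and divided by the plaintext group count after decryption.
y_pred = np.array([1., 0., 1., 1., 0., 0., 1., 0.])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = {}
for g in np.unique(groups):
    mask = (groups == g).astype(float)        # plaintext mask
    slot_sum = float(np.sum(y_pred * mask))   # ct x pt multiply + slot-sum
    rates[str(g)] = slot_sum / float(mask.sum())  # plaintext denominator n_g

print(rates)  # {'a': 0.75, 'b': 0.25}
```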
### Mode B — fully encrypted predictions and sensitive features

```python
from fairlearn_fhe import build_context, encrypt, encrypt_sensitive_features
from fairlearn_fhe.metrics import demographic_parity_difference

ctx = build_context()
y_pred_enc = encrypt(ctx, y_pred)
sf_enc = encrypt_sensitive_features(ctx, sensitive_features, y_true=y_true)
disp = demographic_parity_difference(y_true, y_pred_enc, sensitive_features=sf_enc)
```
- Encrypted: `y_pred`, the per-row group-membership masks.
- Plaintext (auditor metadata): group counts, per-group positive/negative counts (passed via `y_true=` at encryption time).
- Cost: ct×ct + ct×pt + slot-sum per group → depth 2.
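A plaintext emulation of the Mode B numerator for a TPR-style metric (illustrative only; in the real circuit `y_pred` and the masks are ciphertexts):

```python
import numpy as np

# Mode B emulation: the group masks are themselves encrypted, so the per-group
# numerator needs a ct x ct multiply (y_pred * mask) plus a ct x pt multiply
# by the plaintext y_true, giving multiplicative depth 2.
y_true = np.array([1., 1., 0., 0., 1., 0., 1., 0.])
y_pred = np.array([1., 0., 1., 1., 0., 0., 1., 0.])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

tpr = {}
for g in np.unique(groups):
    mask = (groups == g).astype(float)            # encrypted in Mode B
    num = float(np.sum(y_pred * y_true * mask))   # ct x ct + ct x pt, depth 2
    den = float(np.sum(y_true * mask))            # plaintext auditor metadata
    tpr[str(g)] = num / den

print(tpr)  # {'a': 0.5, 'b': 0.5}
```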
`y_true` remains plaintext in both modes (it is the auditor's ground truth). The denominators of TPR/FPR-style metrics — per-group positive/negative counts — are always revealed: there is no fairness signal without them.

Per-group rates are decrypted at the audit boundary; final aggregation (max, min, ratio, difference) runs on those K plaintext scalars.
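Concretely, the post-decryption step is ordinary arithmetic on K floats (the values below are made up for illustration):

```python
# After decryption the auditor holds K per-group rates as plain floats;
# the final reductions are ordinary Python arithmetic, no FHE involved.
rates = {"a": 0.75, "b": 0.25, "c": 0.50}  # illustrative decrypted values

dp_difference = max(rates.values()) - min(rates.values())  # 0.5
dp_ratio = min(rates.values()) / max(rates.values())       # 1/3
```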
## Supported metrics
| Plaintext name | Encrypted? | Mechanism |
|---|---|---|
| `selection_rate` | yes | sum(y_pred·mask)/n_g |
| `true_positive_rate` | yes | sum(y_pred·y_true·mask)/sum(y_true·mask) |
| `true_negative_rate` | yes | sum((1-y_pred)·(1-y_true)·mask)/sum((1-y_true)·mask) |
| `false_positive_rate` | yes | sum(y_pred·(1-y_true)·mask)/sum((1-y_true)·mask) |
| `false_negative_rate` | yes | sum((1-y_pred)·y_true·mask)/sum(y_true·mask) |
| `mean_prediction` | yes | sum(y_pred·mask)/n_g |
| `demographic_parity_difference` | yes | max-min selection_rate over groups |
| `demographic_parity_ratio` | yes | min/max selection_rate over groups |
| `equalized_odds_difference` | yes | max(tpr_diff, fpr_diff) |
| `equalized_odds_ratio` | yes | min(tpr_ratio, fpr_ratio) |
| `equal_opportunity_difference` | yes | tpr max-min |
| `equal_opportunity_ratio` | yes | tpr min/max |
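The table's mask formulas compose directly; for example, `equalized_odds_difference` in plain NumPy (a reference sketch of the arithmetic, not the encrypted implementation):

```python
import numpy as np

def group_rates(y_true, y_pred, groups):
    """Per-group TPR and FPR, following the mask formulas in the table."""
    out = {}
    for g in np.unique(groups):
        m = (groups == g).astype(float)
        tpr = np.sum(y_pred * y_true * m) / np.sum(y_true * m)
        fpr = np.sum(y_pred * (1 - y_true) * m) / np.sum((1 - y_true) * m)
        out[str(g)] = (float(tpr), float(fpr))
    return out

y_true = np.array([1., 1., 0., 0., 1., 0., 1., 0.])
y_pred = np.array([1., 0., 1., 1., 0., 0., 1., 0.])
groups = np.array(["a"] * 4 + ["b"] * 4)

r = group_rates(y_true, y_pred, groups)
tprs = [t for t, _ in r.values()]
fprs = [f for _, f in r.values()]
eo_diff = max(max(tprs) - min(tprs), max(fprs) - min(fprs))
print(eo_diff)  # 1.0 (driven by the FPR gap: 1.0 vs 0.0)
```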
Plus `MetricFrame.fhe()`, returning an `EncryptedMetricFrame`.
## Backends
Two CKKS backends share a single API:

```python
from fairlearn_fhe import build_context

ctx_tenseal = build_context(backend="tenseal")  # default; pip-installable
ctx_openfhe = build_context(backend="openfhe")  # native OpenFHE backend, opt-in
```
Benchmarked on n=1024, 3 sensitive groups, depth-6 circuit:
| backend | ctx build | encrypt | dp_diff | dp abs err | eo_diff | eo abs err |
|---|---|---|---|---|---|---|
| tenseal | 888 ms | 7.5 ms | 284 ms | 1e-7 | 562 ms | 2e-7 |
| openfhe | 321 ms | 13.5 ms | 505 ms | 2e-10 | 1015 ms | 4e-11 |
On the included benchmark, OpenFHE gives lower numeric error; TenSEAL is faster per metric and ships via pip on every supported platform.
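Numbers like those above come from a best-of-N wall-clock harness; a minimal sketch of that shape is below (the FFT workload is a stand-in — with the package installed you would time the metric calls themselves):

```python
import time
import numpy as np

def bench(fn, *args, repeats=5):
    """Return the best-of-N wall-clock time for fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

# Stand-in workload on the benchmark's vector size (n=1024).
x = np.random.default_rng(0).random(1024)
elapsed = bench(np.fft.fft, x)
print(f"{elapsed * 1e3:.3f} ms")
```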
## Install

```shell
pip install fairlearn-fhe            # tenseal backend
pip install "fairlearn-fhe[openfhe]" # add openfhe backend (requires C++ build)
pip install "fairlearn-fhe[signing]" # add Ed25519 envelope signing helpers
```
Verify an audit envelope without importing an FHE backend:

```shell
fairlearn-fhe-verify envelope.json
```
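Independently of the CLI, a downloaded release artifact can be checked against the digests published under "File details" using only the standard library (the filename below is illustrative):

```python
import hashlib

def sha256_of(path):
    """Stream a file in chunks and return its hex SHA-256 digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest published on PyPI for the sdist:
expected = "ed725d0e1cb86cdfd271a74054227a3e2d75c61b142d42ceca0064f13943fd53"
# assert sha256_of("fairlearn_fhe-0.2.1.tar.gz") == expected
```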
## License
Apache-2.0. Compatible with Fairlearn (MIT).
## File details

Details for the file fairlearn_fhe-0.2.1.tar.gz.

### File metadata

- Download URL: fairlearn_fhe-0.2.1.tar.gz
- Size: 73.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | ed725d0e1cb86cdfd271a74054227a3e2d75c61b142d42ceca0064f13943fd53 |
| MD5 | e20b79953e3c7f83c7254bd76cc31e70 |
| BLAKE2b-256 | 96e814c85d8e3b23579664d4cb907d93c3d7b1fffd34cc2cdd3c3157bde74a7d |

### Provenance

The following attestation bundles were made for fairlearn_fhe-0.2.1.tar.gz:

Publisher: publish.yml on BAder82t/fairlearn-fhe

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: fairlearn_fhe-0.2.1.tar.gz
- Subject digest: ed725d0e1cb86cdfd271a74054227a3e2d75c61b142d42ceca0064f13943fd53
- Sigstore transparency entry: 1394304726
- Permalink: BAder82t/fairlearn-fhe@8b4799d19609e60280099e10286a650cad664da7
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/BAder82t
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@8b4799d19609e60280099e10286a650cad664da7
- Trigger Event: release
## File details

Details for the file fairlearn_fhe-0.2.1-py3-none-any.whl.

### File metadata

- Download URL: fairlearn_fhe-0.2.1-py3-none-any.whl
- Size: 49.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? Yes
- Uploaded via: twine/6.1.0 CPython/3.13.12

### File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | ddcc379277b85b2b8c339bf365b53b69fb6a311f69c9cfada109ec54dfbdacec |
| MD5 | 253eadf58391f315d558567f7a999258 |
| BLAKE2b-256 | d5951376e98aa7845ea6281ab6439c08bb490cece90838767244ab872c74c23b |

### Provenance

The following attestation bundles were made for fairlearn_fhe-0.2.1-py3-none-any.whl:

Publisher: publish.yml on BAder82t/fairlearn-fhe

- Statement type: https://in-toto.io/Statement/v1
- Predicate type: https://docs.pypi.org/attestations/publish/v1
- Subject name: fairlearn_fhe-0.2.1-py3-none-any.whl
- Subject digest: ddcc379277b85b2b8c339bf365b53b69fb6a311f69c9cfada109ec54dfbdacec
- Sigstore transparency entry: 1394304738
- Permalink: BAder82t/fairlearn-fhe@8b4799d19609e60280099e10286a650cad664da7
- Branch / Tag: refs/tags/v0.2.1
- Owner: https://github.com/BAder82t
- Access: public
- Token Issuer: https://token.actions.githubusercontent.com
- Runner Environment: github-hosted
- Publication workflow: publish.yml@8b4799d19609e60280099e10286a650cad664da7
- Trigger Event: release