Bias & Fairness Auditor
Production-ready ML fairness auditing with bias detection, mitigation strategies, and regulatory compliance checking.
Features
- Fairness Metrics: 16+ metrics including demographic parity, equalized odds, disparate impact
- Data Bias Detection: Selection bias, representation bias, label bias, proxy detection
- Model Auditing: Comprehensive fairness analysis across protected groups
- Bias Mitigation: Pre-processing, in-processing, and post-processing strategies
- Compliance Checking: ECOA, EEOC, GDPR fairness requirements, and EU AI Act support
- Intersectional Analysis: Multi-attribute fairness evaluation
- Zero-Dependency Core: The core package works without heavy ML libraries
Installation
```bash
pip install bias-fairness-auditor        # Core (zero dependencies)
pip install bias-fairness-auditor[ml]    # With ML libraries
pip install bias-fairness-auditor[full]  # All features
```
Quick Start
Basic Model Audit
```python
from bias_fairness_auditor import FairnessAuditor, AuditConfig

# Configure the audit
config = AuditConfig(
    protected_attributes=["gender", "race"],
    fairness_threshold=0.8
)
auditor = FairnessAuditor(config)

# Audit model predictions
report = auditor.audit_model(
    predictions=[1, 0, 1, 1, 0, 1, 0, 0],
    actuals=[1, 0, 1, 0, 0, 1, 1, 0],
    protected_attributes={
        "gender": ["M", "F", "M", "F", "M", "F", "M", "F"]
    }
)

print(f"Overall fairness: {report.overall_fairness_score:.2%}")
for score in report.fairness_scores:
    status = "✓" if score.is_fair else "✗"
    print(f"{status} {score.metric.value}: {score.value:.3f}")
```
Data Bias Analysis
```python
from bias_fairness_auditor import FairnessAuditor

auditor = FairnessAuditor()

data = [
    {"gender": "M", "age": 35, "income": 75000, "approved": True},
    {"gender": "F", "age": 28, "income": 65000, "approved": False},
    # ... more records
]

report = auditor.audit_data(
    data=data,
    label_column="approved",
    protected_columns=["gender", "age"]
)

print(f"Bias instances found: {len(report.bias_instances)}")
for bias in report.bias_instances:
    print(f"  {bias.severity.value}: {bias.description}")
```
Bias Mitigation
```python
from bias_fairness_auditor import FairnessAuditor, MitigationType

auditor = FairnessAuditor()

# Apply reweighting (training_data and labels are your own dataset)
result = auditor.mitigate(
    strategy=MitigationType.REWEIGHTING,
    data=training_data,
    labels=labels,
    protected_column="gender"
)

# Per-sample weights to pass to your model's training step
weights = result.metadata["weights"]
```
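How you consume the weights depends on your training stack. As one hedged example, scikit-learn estimators accept per-sample weights through the `sample_weight` argument of `fit` (scikit-learn is not a core dependency of this package; `X_train` and `y_train` below are placeholders for your own data):

```python
from sklearn.linear_model import LogisticRegression

# X_train, y_train: your feature matrix and labels;
# weights: the per-sample weights produced by the reweighting step above
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train, sample_weight=weights)
```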
Compliance Checking
```python
from bias_fairness_auditor import (
    FairnessAuditor, AuditConfig, ComplianceStandard
)

config = AuditConfig(
    compliance_standards=[
        ComplianceStandard.ECOA,
        ComplianceStandard.EEOC
    ]
)
auditor = FairnessAuditor(config)

# data and predictions are your dataset and model outputs
result = auditor.full_audit(data, "label", predictions, ["gender"])

for compliance in result.compliance_results:
    status = "PASS" if compliance.is_compliant else "FAIL"
    print(f"{compliance.standard.value}: {status}")
```
Fairness Metrics
| Metric | Description | Threshold |
|---|---|---|
| Demographic Parity | Equal positive rates | ≥ 0.8 |
| Disparate Impact | 4/5 rule compliance | ≥ 0.8 |
| Equalized Odds | Equal TPR and FPR | ≤ 0.2 diff |
| Equal Opportunity | Equal TPR | ≤ 0.2 diff |
| Predictive Parity | Equal precision | ≤ 0.2 diff |
| Calibration | Probability accuracy | ECE ≤ 0.1 |
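The difference-based rows in the table compare group-conditional rates. Below is a minimal sketch of the equalized odds check, again illustrative rather than the package's implementation; `preds_a`, `actuals_a`, and the group B arrays are hypothetical per-group data:

```python
# Hypothetical per-group predictions and ground truth
preds_a, actuals_a = [1, 1, 0, 1, 0, 0], [1, 0, 0, 1, 1, 0]
preds_b, actuals_b = [1, 0, 0, 0, 1, 0], [1, 0, 1, 0, 1, 0]

def tpr_fpr(preds, actuals):
    """True-positive and false-positive rates (assumes both classes present)."""
    tp = sum(p == 1 and a == 1 for p, a in zip(preds, actuals))
    fp = sum(p == 1 and a == 0 for p, a in zip(preds, actuals))
    pos = sum(a == 1 for a in actuals)
    neg = sum(a == 0 for a in actuals)
    return tp / pos, fp / neg

tpr_a, fpr_a = tpr_fpr(preds_a, actuals_a)
tpr_b, fpr_b = tpr_fpr(preds_b, actuals_b)

# Equalized odds (per the table) holds if both gaps are <= 0.2
is_fair = abs(tpr_a - tpr_b) <= 0.2 and abs(fpr_a - fpr_b) <= 0.2
print(f"TPR gap: {abs(tpr_a - tpr_b):.2f}, "
      f"FPR gap: {abs(fpr_a - fpr_b):.2f}, fair: {is_fair}")
```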
Bias Types Detected
- Selection Bias
- Sampling Bias
- Measurement Bias
- Label Bias
- Historical Bias
- Representation Bias
- Proxy Bias (see the sketch after this list)
- Algorithmic Bias
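Proxy bias is often the least obvious of these: a seemingly neutral feature such as ZIP code can encode a protected attribute. Here is a toy illustration of the underlying idea, not the auditor's actual detection method; the records are hypothetical:

```python
from collections import Counter

# Hypothetical data: does ZIP code act as a proxy for a protected attribute?
zips = ["10001", "10001", "94105", "94105", "10001", "94105"]
race = ["A", "A", "B", "B", "A", "B"]

# Tally group membership per ZIP; a near-deterministic mapping suggests
# the feature can stand in for the protected attribute
by_zip = {}
for z, r in zip(zips, race):
    by_zip.setdefault(z, Counter())[r] += 1

for z, counts in by_zip.items():
    dominant = counts.most_common(1)[0][1] / sum(counts.values())
    if dominant >= 0.9:
        print(f"ZIP {z} is {dominant:.0%} one group -> potential proxy")
```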
API Reference
FairnessAuditor
```python
auditor = FairnessAuditor(config=None)  # config: Optional[AuditConfig]

# Data audit
data_report = auditor.audit_data(data, label_column, protected_columns)

# Model audit
model_report = auditor.audit_model(predictions, actuals, protected_attributes)

# Full audit
result = auditor.full_audit(data, label_column, predictions, protected_columns)

# Mitigation
mitigation = auditor.mitigate(strategy, data, labels, protected_column)
```
AuditConfig
```python
config = AuditConfig(
    protected_attributes=[ProtectedAttribute.GENDER],
    privileged_groups={ProtectedAttribute.GENDER: ["M"]},
    unprivileged_groups={ProtectedAttribute.GENDER: ["F"]},
    fairness_metrics=[FairnessMetric.DEMOGRAPHIC_PARITY],
    fairness_threshold=0.8,
    disparate_impact_threshold=0.8,
    compliance_standards=[ComplianceStandard.ECOA]
)
```
License
MIT License - Pranay M