Oracle Guardian AI Open Source Project
Oracle Guardian AI Open Source Project is a library of tools for assessing the fairness/bias and privacy of machine learning models and datasets. This package contains the fairness and privacy_estimation modules.
The fairness module offers tools to help you diagnose and understand the unintended bias present in your dataset and model, so that you can take steps toward more inclusive and fair applications of machine learning.
The privacy_estimation module helps estimate potential leakage of sensitive information in the training data through attacks on machine learning (ML) models. The main idea is to carry out membership inference attacks against a given target model trained on a given sensitive dataset, and to measure their success in order to estimate the risk of leakage.
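To make the idea concrete, here is a minimal, self-contained sketch of a confidence-threshold membership inference attack written with plain scikit-learn. It is illustrative only and does not use the guardian_ai API; the dataset, model, and threshold value are assumptions.

# Illustrative sketch, NOT the guardian_ai API: a confidence-threshold
# membership inference attack against a model trained on "member" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# "Members" are training points; "non-members" are held-out points.
X, y = make_classification(n_samples=2000, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)
target_model = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def true_label_confidence(model, X, y):
    # Confidence the model assigns to each example's true label.
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

conf_member = true_label_confidence(target_model, X_member, y_member)
conf_nonmember = true_label_confidence(target_model, X_nonmember, y_nonmember)

# Attack rule: guess "member" whenever confidence exceeds a threshold
# (0.9 is an assumed value; a real attack would tune it).
threshold = 0.9
attack_accuracy = 0.5 * ((conf_member > threshold).mean()
                         + (conf_nonmember <= threshold).mean())
print(f"Attack accuracy: {attack_accuracy:.2f}")  # well above 0.5 suggests leakage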
Installation
You have various options when installing oracle-guardian-ai.
Installing the oracle-guardian-ai base package
python3 -m pip install oracle-guardian-ai
Installing extra libraries
The all-optional option installs all optional dependencies. Note the single quotes around the bracketed package name in the commands below.
python3 -m pip install 'oracle-guardian-ai[all-optional]'
To work with fairness/bias, install the fairness module. You can find its extra dependencies in requirements-fairness.txt.
python3 -m pip install 'oracle-guardian-ai[fairness]'
To work with privacy estimation, install the privacy module. You can find its extra dependencies in requirements-privacy.txt.
python3 -m pip install 'oracle-guardian-ai[privacy]'
Documentation
- Oracle Guardian AI Documentation
- OCI Data Science and AI services Examples
- Oracle AI & Data Science Blog
Examples
Measurement with a Fairness Metric
from guardian_ai.fairness.metrics import ModelStatisticalParityScorer
fairness_score = ModelStatisticalParityScorer(protected_attributes='<target_attribute>')
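For intuition, statistical parity compares the rate of positive predictions across protected groups. The hand computation below (illustrative only, not the guardian_ai implementation; the sex column and toy predictions are assumptions) shows the quantity the metric targets.

# Illustrative sketch: statistical parity computed by hand.
import pandas as pd

X_test = pd.DataFrame({"sex": ["F", "F", "M", "M", "M", "F"]})  # assumed protected attribute
y_pred = pd.Series([1, 0, 1, 1, 1, 0])  # assumed model decisions

# Positive-prediction rate per protected group; parity holds when rates match.
rates = y_pred.groupby(X_test["sex"]).mean()
print(rates)
print("Disparity (max rate - min rate):", rates.max() - rates.min())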
Bias Mitigation
from guardian_ai.fairness.bias_mitigation import ModelBiasMitigator
bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names='<target_attribute>',
    fairness_metric="statistical_parity",
    accuracy_metric="balanced_accuracy",
)
bias_mitigated_model.fit(X_val, y_val)
bias_mitigated_model.predict(X_test)
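For context, an end-to-end flow might look like the sketch below: train a base model, fit the mitigator on held-out validation data, then predict on a test set. The synthetic data, column names, and the assumption that the protected attribute appears as a numeric column of X are illustrative; consult the documentation for the exact data requirements.

# Hedged end-to-end sketch; synthetic data and column names are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from guardian_ai.fairness.bias_mitigation import ModelBiasMitigator

rng = np.random.default_rng(0)
n = 1200
X = pd.DataFrame({
    "sex": rng.integers(0, 2, size=n),  # assumed protected attribute (0/1)
    "income": rng.normal(size=n),
})
# Labels correlated with both features, so the base model can pick up bias.
y = pd.Series(((X["income"] + 0.5 * X["sex"]
                + rng.normal(scale=0.5, size=n)) > 0).astype(int))

X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names="sex",
    fairness_metric="statistical_parity",
    accuracy_metric="balanced_accuracy",
)
bias_mitigated_model.fit(X_val, y_val)  # tune the trade-off on held-out data
predictions = bias_mitigated_model.predict(X_test)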
Contributing
This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.
Find Getting Started instructions for developers in README-development.md.
Security
Consult the security guide SECURITY.md for our responsible security vulnerability disclosure process.
License
Copyright (c) 2023 Oracle and/or its affiliates. Licensed under the Universal Permissive License v1.0.
Download files
Download the file for your platform.
- Source distribution: oracle_guardian_ai-1.1.0.tar.gz
- Built distribution: oracle_guardian_ai-1.1.0-py3-none-any.whl
File details
Details for the file oracle_guardian_ai-1.1.0.tar.gz.
File metadata
- Download URL: oracle_guardian_ai-1.1.0.tar.gz
- Upload date:
- Size: 52.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.14
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4a1de2cea3073a4652e9ff4d3db6ad8940786cd173641a5ce14934a8fc6585a9
MD5 | c76d1b7037c02e23b7797ace7cacf2d9
BLAKE2b-256 | 4e7de2c7bcf2bc55be759435a43674ebec7a1edac3a56d364813059faf940573
File details
Details for the file oracle_guardian_ai-1.1.0-py3-none-any.whl.
File metadata
- Download URL: oracle_guardian_ai-1.1.0-py3-none-any.whl
- Upload date:
- Size: 67.8 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.14
File hashes
Algorithm | Hash digest
---|---
SHA256 | 0c9b6115bd0345f0953d6ba36edb556cd70091732be1ffa3340d18d76f974b79
MD5 | 7387f1b456dc7eb50fca42faeefdd595
BLAKE2b-256 | 5f9522e94e1c2e37b154e46f20e8052b125d5902c3e32fc9d78756704ed25ad7