Oracle Guardian AI Open Source Project
Project description
Oracle Guardian AI Open Source Project is a library of tools to assess the fairness/bias and privacy of machine learning models and datasets. The package contains two modules: fairness and privacy_estimation.
The fairness module offers tools to help you diagnose and understand unintended bias in your dataset and model, so that you can take steps toward more inclusive and fair applications of machine learning.
The privacy_estimation module helps estimate potential leakage of sensitive information in the training data through attacks on machine learning (ML) models. The main idea is to carry out membership inference attacks on a given target model trained on a given sensitive dataset, and to measure their success in order to estimate the risk of leakage.
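The workflow this module automates can be illustrated without the library. Below is a minimal, conceptual membership inference attack written against scikit-learn only; it is not the guardian_ai API, and it uses the simplest attack variant, thresholding the target model's confidence. Attack accuracy well above 50% (random guessing) indicates that the model leaks membership information.

```python
# Conceptual sketch of a membership inference attack (not the guardian_ai API).
# Idea: models tend to be more confident on training ("member") points than on
# unseen ("non-member") points, so thresholding the model's confidence lets an
# attacker guess membership.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Target model trained (and deliberately overfit) on the "sensitive" members.
target = RandomForestClassifier(random_state=0).fit(X_member, y_member)

def confidence(model, X):
    return model.predict_proba(X).max(axis=1)

# Attack: predict "member" whenever the top-class confidence is high.
threshold = 0.9  # a real attacker would tune this; fixed here for simplicity
guesses = np.concatenate([
    confidence(target, X_member) >= threshold,     # ideally True (members)
    confidence(target, X_nonmember) >= threshold,  # ideally False
])
truth = np.concatenate([np.ones(len(X_member)), np.zeros(len(X_nonmember))])
print("attack accuracy:", (guesses == truth).mean())  # >0.5 signals leakage
```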
Installation
You have various options when installing oracle-guardian-ai.
Installing the oracle-guardian-ai base package
```bash
python3 -m pip install oracle-guardian-ai
```
Installing extra libraries
The all-optional extra installs all optional dependencies. Note the single quotes around the package name when installing extras.
```bash
python3 -m pip install 'oracle-guardian-ai[all-optional]'
```
To work with fairness/bias, install the fairness extra. Its dependencies are listed in requirements-fairness.txt.
```bash
python3 -m pip install 'oracle-guardian-ai[fairness]'
```
To work with privacy estimation, install the privacy extra. Its dependencies are listed in requirements-privacy.txt.
```bash
python3 -m pip install 'oracle-guardian-ai[privacy]'
```
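Whichever extras you choose, the installed import name is guardian_ai (as in the examples below), so a quick smoke test is:

```python
# Smoke test: succeeds only if the package installed correctly.
import guardian_ai
```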
Documentation
- Oracle Guardian AI Documentation
- OCI Data Science and AI services Examples
- Oracle AI & Data Science Blog
Examples
Measurement with a Fairness Metric
```python
from guardian_ai.fairness.metrics import ModelStatisticalParityScorer

fairness_score = ModelStatisticalParityScorer(protected_attributes='<target_attribute>')
```
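The snippet above only constructs the scorer. Below is a hypothetical end-to-end usage on synthetic data; it assumes the scorer is callable like a scikit-learn scorer as fairness_score(model, X, y) and that X is a pandas DataFrame containing the protected-attribute column. The data and model here are illustrative, not part of the library's documented example.

```python
# Hypothetical usage sketch; data, model, and call signature are assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression

from guardian_ai.fairness.metrics import ModelStatisticalParityScorer

X = pd.DataFrame({
    "income": [30, 80, 35, 75, 70, 28, 77, 40],
    "sex":    [0,  1,  0,  1,  1,  0,  1,  0],
})
y = pd.Series([0, 1, 0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
fairness_score = ModelStatisticalParityScorer(protected_attributes="sex")

# 0 means positive-prediction rates are identical across the groups in "sex".
print(fairness_score(model, X, y))
```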
Bias Mitigation
```python
from guardian_ai.fairness.bias_mitigation import ModelBiasMitigator

bias_mitigated_model = ModelBiasMitigator(
    model,
    protected_attribute_names='<target_attribute>',
    fairness_metric="statistical_parity",
    accuracy_metric="balanced_accuracy",
)

bias_mitigated_model.fit(X_val, y_val)
bias_mitigated_model.predict(X_test)
```
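The mitigation example assumes an already-fitted model plus held-out validation and test splits. A minimal sketch of that surrounding setup on hypothetical synthetic data (the names model, X_val, y_val, and X_test mirror the example above):

```python
# Hypothetical setup for the example above: train a base model, reserve a
# validation split for the mitigator and a test split for final evaluation.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

features, labels = make_classification(n_samples=1000, random_state=0)
X = pd.DataFrame(features, columns=[f"f{i}" for i in range(features.shape[1])])
X["group"] = np.random.default_rng(0).integers(0, 2, size=len(X))  # protected attribute
y = pd.Series(labels)

X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
# model, X_val, y_val, and X_test now match the names in the example above;
# here the mitigator would get protected_attribute_names="group".
```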
Contributing
This project welcomes contributions from the community. Before submitting a pull request, please review our contribution guide.
Find Getting Started instructions for developers in README-development.md.
Security
Consult the security guide SECURITY.md for our responsible security vulnerability disclosure process.
License
Copyright (c) 2023 Oracle and/or its affiliates. Licensed under the Universal Permissive License v1.0.
Project details
Download files
Source Distribution: oracle_guardian_ai-1.2.0.tar.gz
Built Distribution: oracle_guardian_ai-1.2.0-py3-none-any.whl
File details
Details for the file oracle_guardian_ai-1.2.0.tar.gz.
File metadata
- Download URL: oracle_guardian_ai-1.2.0.tar.gz
- Size: 53.0 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 93fba655dc5ec87914b713653223600505c7559609785b5571446e86d2c4d556
MD5 | 1a496c689b11f12dac7934cd3aeff088
BLAKE2b-256 | 111072fe263a88c88b0688e28898589e89271861704c0ff5bdfa13ca62312345
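You can check a downloaded file against the published SHA256 digest with the standard hashlib module; the path below assumes the sdist sits in the current directory:

```python
# Verify the sdist download against the published SHA256 digest.
import hashlib
from pathlib import Path

expected = "93fba655dc5ec87914b713653223600505c7559609785b5571446e86d2c4d556"
digest = hashlib.sha256(Path("oracle_guardian_ai-1.2.0.tar.gz").read_bytes()).hexdigest()
assert digest == expected, "hash mismatch: the download may be corrupted or tampered with"
```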
File details
Details for the file oracle_guardian_ai-1.2.0-py3-none-any.whl.
File metadata
- Download URL: oracle_guardian_ai-1.2.0-py3-none-any.whl
- Size: 68.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.1 CPython/3.10.15
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6529072707353ff8034361da7860963a503a65248af279675625babd5053f702
MD5 | a3fc477412bf131335a550357aa357e5
BLAKE2b-256 | 397f6712bd8e43195491c12b9a0a6c36c674090d5c38f376ed2b488fcc548a76