Holistic AI: building trustworthy AI systems
Holistic AI is an open-source library dedicated to assessing and improving the trustworthiness of AI systems. We believe that responsible AI development requires a comprehensive evaluation across multiple dimensions, beyond just accuracy.
Current Capabilities
Holistic AI currently focuses on five verticals of AI trustworthiness:
- Bias: measure and mitigate bias in AI models.
- Explainability: gain insight into model behavior and decision-making.
- Robustness: measure model performance under various conditions.
- Security: measure the privacy risks associated with AI models.
- Efficacy: measure the effectiveness of AI models.
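To illustrate the kind of measurement the bias vertical covers, a group fairness metric such as statistical parity difference can be computed directly from predictions. This is a plain-Python sketch of the standard definition, not the library's implementation:

```python
import numpy as np

def statistical_parity_difference(group_a, group_b, y_pred):
    """Difference in positive-prediction rates between two groups.

    A value near 0 indicates parity between the groups.
    """
    rate_a = y_pred[group_a].mean()  # positive-prediction rate for group A
    rate_b = y_pred[group_b].mean()  # positive-prediction rate for group B
    return rate_a - rate_b

# toy example: boolean group masks and binary predictions
group_a = np.array([True, True, False, False])
group_b = ~group_a
y_pred = np.array([1, 0, 1, 1])
print(statistical_parity_difference(group_a, group_b, y_pred))  # 0.5 - 1.0 = -0.5
```

The library computes this and many related metrics for you, as shown in the Quick Start below.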
Quick Start
pip install holisticai # Basic installation
pip install holisticai[bias] # Bias mitigation support
pip install holisticai[explainability] # For explainability metrics and plots
pip install holisticai[all] # Install all optional dependencies
# imports
from holisticai.bias.metrics import classification_bias_metrics
from holisticai.datasets import load_dataset
from holisticai.bias.plots import bias_metrics_report
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
# load an example dataset and split
dataset = load_dataset('law_school', protected_attribute="race")
dataset_split = dataset.train_test_split(test_size=0.3)
# separate the data into train and test sets
train_data = dataset_split['train']
test_data = dataset_split['test']
# rescale the data
scaler = StandardScaler()
X_train_t = scaler.fit_transform(train_data['X'])
X_test_t = scaler.transform(test_data['X'])
# train a logistic regression model
model = LogisticRegression(random_state=42, max_iter=500)
model.fit(X_train_t, train_data['y'])
# make predictions
y_pred = model.predict(X_test_t)
# compute bias metrics
metrics = classification_bias_metrics(
    group_a=test_data['group_a'],
    group_b=test_data['group_b'],
    y_true=test_data['y'],
    y_pred=y_pred,
)
# create a comprehensive report
bias_metrics_report(model_type='binary_classification', table_metrics=metrics)
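The report above tabulates several group fairness metrics. One of the most common is disparate impact, the ratio of positive-prediction rates between the two groups; as a hedged sketch of its standard definition (plain NumPy, not the library's internal code):

```python
import numpy as np

def disparate_impact(group_a, group_b, y_pred):
    """Ratio of positive-prediction rates (group A / group B).

    1.0 means parity; values below ~0.8 are often flagged
    under the "four-fifths rule".
    """
    return y_pred[group_a].mean() / y_pred[group_b].mean()

# toy example: group A receives positive predictions half as often as group B
group_a = np.array([True, True, True, False, False, False])
group_b = ~group_a
y_pred = np.array([1, 0, 0, 1, 1, 0])
print(disparate_impact(group_a, group_b, y_pred))  # (1/3) / (2/3) = 0.5
```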
Key Features
- Comprehensive Metrics: Measure various aspects of AI system trustworthiness, including bias, fairness, and explainability.
- Mitigation Techniques: Implement strategies to address identified issues and improve the fairness and robustness of AI models.
- User-Friendly Interface: Intuitive API for easy integration into existing workflows.
- Visualization Tools: Generate insightful visualizations for better understanding of model behavior and bias patterns.
Documentation and Tutorials
Detailed Installation
Troubleshooting (macOS):
Before installing the library, you may need to install these packages:
brew install cbc pkg-config
python -m pip install cylp
brew install cmake
Contributing
We welcome contributions from the community. To learn more about contributing to Holistic AI, please refer to our Contributing Guide.