Methodology for the systematic evaluation of label noise correction methods for ML fairness.
Project description
Systematic analysis of the impact of label noise correction on ML Fairness
This Python package implements the empirical methodology proposed in [1] for systematically evaluating how effectively label noise correction techniques preserve the fairness of models trained on biased datasets. The methodology works by manipulating the amount of label noise in the training data, and it can be applied to fairness benchmarks as well as to standard ML datasets. Experiment tracking is done using mlflow.
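To give a sense of what "manipulating the amount of label noise" means in practice, here is a generic, self-contained sketch of such an evaluation loop using scikit-learn. It is not the package's actual API: the dataset, the `inject_group_noise` helper, and the `demographic_parity_diff` metric are all illustrative stand-ins for the noise-injection and fairness-measurement steps the methodology describes.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic dataset: one informative feature plus a binary sensitive attribute.
n = 2000
sensitive = rng.integers(0, 2, n)
X = np.column_stack([rng.normal(size=n), sensitive])
y_clean = (X[:, 0] + 0.5 * sensitive > 0).astype(int)

def inject_group_noise(y, group, rate, rng):
    """Flip the labels of group 1 with the given probability (group-dependent noise)."""
    y_noisy = y.copy()
    flip = (group == 1) & (rng.random(len(y)) < rate)
    y_noisy[flip] = 1 - y_noisy[flip]
    return y_noisy

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

# Sweep the noise rate, retrain, and observe how the fairness metric shifts.
for rate in [0.0, 0.2, 0.4]:
    y_noisy = inject_group_noise(y_clean, sensitive, rate, rng)
    model = LogisticRegression().fit(X, y_noisy)
    preds = model.predict(X)
    print(f"noise rate {rate:.1f}: parity gap {demographic_parity_diff(preds, sensitive):.3f}")
```

A label noise correction method would be evaluated by inserting a correction step between `inject_group_noise` and model fitting, then comparing the fairness metric with and without correction across noise rates.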
Installation
You can install the package using pip:
pip install fair_lnc_evaluation
Usage
Examples of how to use this package can be found in the examples folder.
References
Contributing
Contributions to this package are welcome! If you have any bug reports, feature requests, or would like to contribute with code improvements, please submit an issue or a pull request on the GitHub repository.
License
This package is distributed under the MIT License.
Hashes for fair_lnc_evaluation-0.0.1.tar.gz

Algorithm | Hash digest
---|---
SHA256 | 82f693b754f875ecf60e071d19e73ebdbbc4fe007ddbacd9c4dcaa2110db0281
MD5 | e709b5aed7291f5032778e83e59a4ad9
BLAKE2b-256 | 43a34bad53d4ce8243881917f1b79752370a8b398756df34af1609230e983cec
Hashes for fair_lnc_evaluation-0.0.1-py3-none-any.whl

Algorithm | Hash digest
---|---
SHA256 | 48459ee81bf5d9a6d8e2d9607203b803f470bf8f3f2292eefd47afb5914bec3f
MD5 | 84deaa5f5ff543f23f393bc5a8298f62
BLAKE2b-256 | 70d46d668f51a9790c5ad9f46ed374cd289ee1d5db8f169870208595f9621b79