
Methodology for the systematic evaluation of label noise correction methods for ML fairness.

Project description

Systematic analysis of the impact of label noise correction on ML Fairness

This Python package implements the empirical methodology proposed in [1] for systematically evaluating how effectively label noise correction techniques ensure the fairness of models trained on biased datasets. The methodology manipulates the amount of label noise in the training data and can be applied to fairness benchmarks as well as standard ML datasets. Experiment tracking is done with mlflow.
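
For illustration, the sketch below mimics the evaluation loop described above using only numpy, scikit-learn, and mlflow: it injects increasing amounts of label noise, applies a placeholder correction step, trains a classifier, and logs accuracy and a demographic parity difference for each run. The inject_noise, correct_labels, and demographic_parity_diff helpers are hypothetical stand-ins written for this sketch, not part of this package's API.

import mlflow
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic dataset: 1000 samples, 5 features; column 0 acts as the protected attribute.
X = rng.normal(size=(1000, 5))
X[:, 0] = rng.integers(0, 2, size=1000)
y = (X[:, 1] + X[:, 2] > 0).astype(int)

def inject_noise(y, rate, rng):
    # Flip a `rate` fraction of labels uniformly at random.
    flip = rng.random(len(y)) < rate
    return np.where(flip, 1 - y, y)

def correct_labels(X, y_noisy):
    # Placeholder for a label noise correction method (identity here).
    return y_noisy

def demographic_parity_diff(y_pred, protected):
    # Absolute difference in positive prediction rates between the two groups.
    return abs(y_pred[protected == 1].mean() - y_pred[protected == 0].mean())

for rate in (0.0, 0.1, 0.2, 0.3):
    with mlflow.start_run():
        mlflow.log_param("noise_rate", rate)
        y_noisy = inject_noise(y, rate, rng)
        y_corrected = correct_labels(X, y_noisy)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y_corrected, random_state=0)
        model = LogisticRegression().fit(X_tr, y_tr)
        y_pred = model.predict(X_te)
        mlflow.log_metric("accuracy", float((y_pred == y_te).mean()))
        mlflow.log_metric("dp_diff", float(demographic_parity_diff(y_pred, X_te[:, 0])))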

Installation

You can install the package using pip:

pip install fair_lnc_evaluation

Usage

Examples of how to use this package can be found in the examples folder.
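
Because runs are tracked with mlflow, the results of any example script can be inspected afterwards in the MLflow tracking UI (assuming the default local tracking store):

mlflow ui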

References

Contributing

Contributions to this package are welcome! If you have bug reports, feature requests, or would like to contribute code improvements, please submit an issue or a pull request on the GitHub repository.

License

This package is distributed under the MIT License.





Download files

Download the file for your platform.

Source Distribution

fair_lnc_evaluation-0.0.1.tar.gz (50.1 kB)

Built Distribution

fair_lnc_evaluation-0.0.1-py3-none-any.whl (11.8 kB)
