Trust for Machine Learning
Project description
TrustML is a modular and extensible package that supports the definition, assessment, and monitoring of custom-built trustworthiness indicators for AI models. TrustML allows data scientists to define trustworthiness indicators by selecting a set of metrics from a catalog of trustworthiness-related metrics and grouping them into higher-level metric aggregations.
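As a conceptual illustration of this idea, the sketch below combines normalized metric scores into a single higher-level indicator via a weighted average. It is a self-contained example, not TrustML's actual API; the metric names, scores, and weights are assumptions made for the example (see the wiki for the real interface).

    from typing import Dict

    def aggregate_indicator(metric_scores: Dict[str, float],
                            weights: Dict[str, float]) -> float:
        """Combine normalized metric scores (each in [0, 1]) into a
        single higher-level indicator via a weighted average."""
        total_weight = sum(weights.values())
        return sum(metric_scores[name] * weight
                   for name, weight in weights.items()) / total_weight

    # Hypothetical scores for a trained model, e.g. produced by the
    # metric packages listed in requirements.txt.
    scores = {"accuracy": 0.91, "statistical_parity": 0.83, "robustness": 0.76}
    weights = {"accuracy": 0.4, "statistical_parity": 0.3, "robustness": 0.3}

    trust_indicator = aggregate_indicator(scores, weights)
    print(f"Trustworthiness indicator: {trust_indicator:.3f}")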
TrustML also provides several assessment methods to compute and monitor the indicators defined in this way. The package supports the development of trustworthy AI models, assisting not only during the construction phase but also in production environments, where it continuously monitors trustworthiness and enables mitigation activities when required.
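For the monitoring side, a minimal sketch of a recompute-and-compare loop is shown below, reusing aggregate_indicator from the sketch above. It is purely illustrative of the continuous-monitoring idea, not TrustML's actual assessment methods; the threshold and the freshly computed production scores are assumptions for the example.

    from typing import Callable

    def monitor_indicator(compute_indicator: Callable[[], float],
                          threshold: float = 0.8) -> bool:
        """Recompute the indicator on the latest production data and
        signal whether mitigation activities should be triggered."""
        value = compute_indicator()
        if value < threshold:
            print(f"ALERT: indicator {value:.3f} below threshold {threshold}")
            return True   # mitigation required
        print(f"OK: indicator {value:.3f}")
        return False

    # Example: re-evaluate the indicator on a (hypothetical) batch of
    # production data whose metric scores have drifted downward.
    needs_mitigation = monitor_indicator(
        lambda: aggregate_indicator(
            {"accuracy": 0.74, "statistical_parity": 0.80, "robustness": 0.71},
            {"accuracy": 0.4, "statistical_parity": 0.3, "robustness": 0.3}))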
The package relies on existing packages to compute each of the included trustworthiness metrics; see the requirements.txt file in the root of the source code repository for details.
The API documentation is available at https://martimanzano.github.io/TrustML/. The wiki, with tutorials on the package's usage and extension, is available at https://github.com/martimanzano/TrustML/wiki/Home/.
The TrustML package is free software distributed under the Apache License 2.0. If you are interested in participating in this project, please use the GitHub repository and review the Contributing page, the Code of Conduct, and the specific Contributing articles. All contributions are welcome.