
mercury-robust is a library for performing robust testing on machine learning models and datasets.

Project description

mercury-robust

Mercury project at BBVA

Mercury is a collaborative library that was developed by the Advanced Analytics community at BBVA. Originally, it was created as an InnerSource project but after some time, we decided to release certain parts of the project as Open Source. That's the case with the mercury-robust package.

If you're interested in learning more about the Mercury project, we recommend reading this blog post from www.bbvaaifactory.com

Introduction to mercury-robust

mercury-robust is a Python library designed for performing robust testing on machine learning models and datasets. It helps ensure that data workflows and models are robust against certain conditions, such as data drift, label leakage or changes in the input data schema, by raising an exception when they fail. This library is intended for data scientists, machine learning engineers and anyone interested in ensuring the performance and robustness of their models in production environments.

Importance of ML Robustness

Errors or misbehaviours in machine learning models and datasets can have significant consequences, especially in sensitive domains such as healthcare or finance. It is important to ensure that model performance in production is aligned with what was measured in testing environments, and robust enough to prevent harm to the individuals or organizations that rely on it. mercury-robust helps ensure the robustness of machine learning models and datasets by providing a modular framework for performing tests.

Types of Tests

mercury-robust provides two main types of tests: Data Tests and Model Tests. In addition, all tests can be added to a container class called TestSuite.

Data Tests

Data Tests receive a dataset as the main input argument and check different conditions on it. For example, the CohortPerformanceTest checks whether some metric performs poorly for certain cohorts of the data when compared to other groups. This is particularly relevant for measuring fairness with respect to sensitive variables.
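A data test can also be run on its own. The snippet below is a minimal sketch using LinearCombinationsTest on a toy DataFrame; it assumes, as described in the introduction, that each test exposes a run() method and raises an exception when the check fails (the exact exception type is not shown here, so a generic except is used):

import pandas as pd

from mercury.robust.data_tests import LinearCombinationsTest

# Toy dataset where one column is an exact linear combination of the others
df = pd.DataFrame({
    "a": [1.0, 2.0, 3.0, 4.0],
    "b": [10.0, 20.0, 30.0, 40.0],
})
df["c"] = df["a"] + df["b"]  # redundant feature

test = LinearCombinationsTest(df)

try:
    test.run()  # assumption: run() raises an exception if the test fails
    print("No linear combinations detected")
except Exception as err:
    print(f"Test failed: {err}")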

Model Tests

Model Tests involve data in combination with a machine learning model. For example, the ModelSimplicityChecker evaluates whether a simple baseline, trained on the same dataset, achieves similar or better performance than a given model. It is used to check whether the added complexity contributes significantly to improving the model.

Other checks of whether a model's complexity is adequate measure the importance of every input feature and fail if the model has input features that contribute only marginally.
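As a sketch, ModelSimplicityChecker could also be run on its own against a toy scikit-learn model, using the same constructor arguments that appear in the TestSuite example below. The run() call and the generic exception handling are assumptions based on the introduction; the exact input types accepted (arrays vs. DataFrames) may depend on the library version:

import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

from mercury.robust.model_tests import ModelSimplicityChecker

# Toy binary classification problem
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X = pd.DataFrame(X, columns=[f"f{i}" for i in range(X.shape[1])])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

checker = ModelSimplicityChecker(
    model=model,
    X_train=X_train,
    y_train=y_train,
    X_test=X_test,
    y_test=y_test,
    threshold=0.02,
    eval_fn=roc_auc_score,
)

try:
    checker.run()  # assumption: run() raises an exception if the test fails
    print("The added complexity of the model seems justified")
except Exception as err:
    print(f"Test failed: {err}")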

TestSuite

This class provides an easy way to group tests and execute them together. Here's an example of a TestSuite that checks for input features that add very marginal importance to the model, the existence of linear combinations in those features, or some kind of data drift:

from sklearn.metrics import roc_auc_score

from mercury.robust.model_tests import ModelSimplicityChecker
from mercury.robust.data_tests import LinearCombinationsTest, DriftTest
from mercury.robust.suite import TestSuite

# Create some tests (model, X_train, y_train, X_test, y_test, df_train, df_test and train_schema are assumed to be already defined)
complexity_test = ModelSimplicityChecker(
    model = model,
    X_train = X_train,
    y_train = y_train,
    X_test = X_test,
    y_test = y_test,
    threshold = 0.02,
    eval_fn = roc_auc_score
)
drift_test = DriftTest(df_test, train_schema, name="drift_train_test")
lin_comb_test = LinearCombinationsTest(df_train)

# Create the TestSuite with the tests
test_suite = TestSuite(
    tests=[complexity_test, drift_test, lin_comb_test], run_safe=True
)
# Obtain results
test_results = test_suite.get_results_as_df()
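Here run_safe=True presumably keeps the suite running even if an individual test fails, so that the outcome of every test is captured and returned by get_results_as_df() as a pandas DataFrame (the exact columns may vary between versions).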

Cheatsheet

To help you get started with using mercury-robust, we've created a cheatsheet that summarizes the main features and methods of the library. You can download the cheatsheet from here: RobustCheatsheet.pdf

User installation

The easiest way to install mercury-robust is using pip:

pip install -U mercury-robust

Help and support

This library is currently maintained by a dedicated team of data scientists and machine learning engineers from BBVA.

Documentation

website: https://bbva.github.io/mercury-robust/

Email

mercury.group@bbva.com

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mercury_robust-0.0.3.tar.gz (53.6 kB)

Uploaded Source

Built Distribution

mercury_robust-0.0.3-py3-none-any.whl (44.1 kB)

Uploaded Python 3

File details

Details for the file mercury_robust-0.0.3.tar.gz.

File metadata

  • Download URL: mercury_robust-0.0.3.tar.gz
  • Upload date:
  • Size: 53.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for mercury_robust-0.0.3.tar.gz

  • SHA256: 5449b9238c3385c2a1f2e9d293bdd6cf31d5ff3a47225d1437caecfe3c6ff8e2
  • MD5: 551d9eaf38a3193b90d3f3b540556ce3
  • BLAKE2b-256: 641ce7ff0c790d3ccf64734be803b9a7f69211f00c02edb2cb2bf946c4cd81a7

See more details on using hashes here.

File details

Details for the file mercury_robust-0.0.3-py3-none-any.whl.

File metadata

File hashes

Hashes for mercury_robust-0.0.3-py3-none-any.whl

  • SHA256: fd3f3c4e8ecb29c74fcce7744f0384124899c6fe78d0b29afd4c14d3e3fd6705
  • MD5: a7377652cb530f100301316e373aad6f
  • BLAKE2b-256: f367393185ffe717f4a04eccb6d08846f46ba9fc9eb9539b127fe8973141265d

See more details on using hashes here.
