Custom Metrics for ML Model Monitoring

MonitoringCustomMetrics

MonitoringCustomMetrics is a code package that simplifies the creation of metrics for monitoring Machine Learning models. We follow the formats and standards defined by Amazon SageMaker Model Monitor. The package can be executed locally using Docker, or within a SageMaker Processing Job.

What does it do?

This tool helps you monitor the quality of ML models with metrics that are not present in Amazon SageMaker Model Monitor. We follow SageMaker standards for metric output:

  • Statistics file: raw statistics calculated per column/feature. They are calculated for the baseline and also for the current input being analyzed.
  • Constraints file: these are the constraints that a dataset must satisfy. The constraints are used to determine if the dataset has violations when running an evaluation job.
  • Constraint violations file: generated as the output of a monitor execution. It contains the list of constraints evaluated (using a provided constraints file) against the dataset being analyzed.

To avoid filename conflicts with SageMaker Model Monitor output, our files are named:

  • community_statistics.json
  • community_constraints.json
  • community_constraint_violations.json
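
As an illustration, the snippet below loads a generated constraint violations file and prints each reported violation. It is a minimal sketch that assumes the file follows the same schema as SageMaker Model Monitor's constraint_violations.json (a top-level "violations" list); the path shown matches the local output folder described later on this page.

import json

# Example path from a local run; adjust to wherever the job wrote its output.
violations_path = "local_output/community_constraint_violations.json"

with open(violations_path) as f:
    report = json.load(f)

# Assumes a SageMaker-style schema with a top-level "violations" list.
for violation in report.get("violations", []):
    print(violation)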

Operation modes

The package has two operation modes:

  • Suggest baseline: as the name implies, this operation mode suggests a baseline that you can later use for evaluating statistics. It generates "statistics" and "constraints" files. You will need to provide the input file(s) to be evaluated. For Model Quality metrics, a "parameters.json" file is also needed to specify the metrics to evaluate and any additional required parameters.
  • Run monitor: evaluates the input file(s) using the constraints provided and generates a "constraint_violations" file.

It can perform both Data Quality and Model Quality analyses. The input can be a single file, or it can be split into multiple files.

Data Quality

Data Quality analysis will evaluate all the existing metrics against all the columns. Based on the inferred column type, the package will run either "numerical" or "string" metrics on a given column.
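
The snippet below is a minimal, illustrative sketch of this kind of column-type split using pandas dtypes; it is not the package's actual type-inference logic.

import pandas as pd

# Illustrative only: split columns into "numerical" and "string" groups,
# mirroring how a Data Quality job decides which metrics apply to a column.
df = pd.read_csv("local_resources/data_quality/input.csv")

numerical_columns = df.select_dtypes(include="number").columns.tolist()
string_columns = df.select_dtypes(include=["object", "string"]).columns.tolist()

print("Numerical columns:", numerical_columns)
print("String columns:", string_columns)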

Model Quality

Model Quality analysis will only evaluate metrics specified in the configuration file provided.

Known limitations

  • The code runs on a single machine. When running in a SageMaker Processing Job, it is limited to the capacity of a single instance.
  • Pandas loads data in memory. Choose a host that can handle the amount of data you need to process.
  • MonitoringCustomMetrics expects the input file(s) to be in CSV format (comma-separated values).

Running the package locally

MonitoringCustomMetrics can be executed locally. You will need to install Docker CLI, set the needed parameters in the Dockerfile, and provide the required input file(s).

Prerequisites

Before running locally, you will need to install Docker CLI:

https://docs.docker.com/get-started/get-docker/

https://docs.docker.com/reference/cli/docker/

Environment variables

The package uses the following variables:

  • analysis_type: specifies the type of analysis to do.
    • Possible values:
      • DATA_QUALITY.
      • MODEL_QUALITY.
    • Required: Yes.
  • baseline_statistics: specifies the container path to the baseline statistics file.
    • Required: only if you want to evaluate statistics. Not required when suggesting a baseline.
  • baseline_constraints: specifies the container path to the baseline constraints file.
    • Required: only if you want to evaluate statistics. Not required when suggesting a baseline.

Examples:

  • To specify that this is a Data Quality analysis:

    ENV analysis_type=DATA_QUALITY

  • To specify that this is a Model Quality analysis:

    ENV analysis_type=MODEL_QUALITY

  • If you want to evaluate statistics, you also need to provide the location of the statistics and constraints files inside the container. If these files are not provided, the package will suggest a baseline instead:

    ENV baseline_statistics=/opt/ml/processing/baseline/statistics/community_statistics.json
    ENV baseline_constraints=/opt/ml/processing/baseline/constraints/community_constraints.json

Model Quality specific environment variables

For Model Quality, the following variables are also used:

  • config_path: specifies the location of the "parameters" file within the container. This file lists the metric(s) to use, as well as any required parameters.
    • Required: only for Model Quality metrics.
  • problem_type: problem type for the analysis.
    • Required: Yes.
    • Possible values:
      • BinaryClassification
      • Regression
      • MulticlassClassification

For example:

ENV config_path=/opt/ml/processing/input/parameters
ENV problem_type=<problem type>

Depending on the metrics to use, these variables might also be needed:

ENV ground_truth_attribute=<ground truth attribute column>
ENV inference_attribute=<inference attribute column>
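
As a minimal sketch, code running inside the container could read these values directly from the environment; the variable names come from the list above, and this is illustrative only rather than the package's internal code.

import os

# Variable names match the environment variables documented above.
problem_type = os.environ.get("problem_type")
ground_truth_attribute = os.environ.get("ground_truth_attribute")
inference_attribute = os.environ.get("inference_attribute")

print(problem_type, ground_truth_attribute, inference_attribute)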

Model Quality parameters file

Only the metrics specified in the "parameters" file will be evaluated in a Model Quality job. The parameters file is structured as a map, with the top-level keys representing the metric names to use. For example:

{
  "prc_auc": {
    "threshold_override": 55
  }
}

would mean that the job will only evaluate the "prc_auc" metric, passing the parameter "threshold_override" with a value of 55.

Providing input files

The container also needs certain files to do the analysis. You can put your files in the "local_resources" directory. Once the files are present, you need to add the following statements to the Dockerfile to have them copied over to the container:

  • Copy the input data file. Input data can be split across multiple files if needed:

    COPY local_resources/data_quality/input.csv /opt/ml/processing/input/data
    
  • Copy statistics and constraints files, if needed:

    COPY local_resources/model_quality/community_constraints.json /opt/ml/processing/baseline/constraints
    COPY local_resources/model_quality/community_statistics.json /opt/ml/processing/baseline/statistics
    
  • Copy "parameters" file, if needed (only needed for Model Monitoring metrics):

    COPY local_resources/model_quality/binary_classification/custom_metric/parameters.json /opt/ml/processing/input/parameters
    

Running the container locally

Add the required parameters to the Dockerfile in the section specified. It should look something like:

##### Parameters for running locally should be put here: #####################################
ENV analysis_type=DATA_QUALITY
ENV baseline_statistics=/opt/ml/processing/baseline/statistics/community_statistics.json
ENV baseline_constraints=/opt/ml/processing/baseline/constraints/community_constraints.json
COPY local_resources/data_quality/input.csv /opt/ml/processing/input/data
COPY local_resources/data_quality/community_constraints.json /opt/ml/processing/baseline/constraints
COPY local_resources/data_quality/community_statistics.json /opt/ml/processing/baseline/statistics
##### End of Parameters for running locally ###########################################################################################

You can now execute the container by using the Shell script "run_local.sh":

./run_local.sh

You should see the output of your container in the terminal:

Executing entry point:                                                                                                                                                                                                                                                                                                        
---------------- BEGINNING OF CONTAINER EXECUTION ----------------------
Starting Monitoring Custom Metrics
Retrieving data from path: /opt/ml/processing/input/data
  Reading data from file: /opt/ml/processing/input/data
Finished retrieving data from path: /opt/ml/processing/input/data
Determining operation to run based on provided parameters ...
Determining monitor type ...
Monitor type detected based on 'analysis_type' environment variable
Operation type: OperationType.run_monitor
Monitor type: MonitorType.DATA_QUALITY
<class 'pandas.core.frame.DataFrame'>
...

The output files will be available in the "local_output" folder after the execution.

Running the package in SageMaker

In order to use this package in a SageMaker Processing Job, you will need to:

  • Containerize the code using Docker.
  • Create an Amazon ECR repository for MonitoringCustomMetrics.
  • Push the container image to your ECR repository.
  • Start a SageMaker Processing Job using the container image uploaded to your ECR repository.

(More details are still pending).
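
While the detailed walkthrough is pending, the sketch below shows one way to start such a job with the SageMaker Python SDK. The image URI, IAM role, instance type, S3 paths, and the /opt/ml/processing/output path are placeholders and assumptions; adjust the environment variables and inputs to match your analysis.

from sagemaker.processing import Processor, ProcessingInput, ProcessingOutput

# All URIs, the role ARN, and the container output path below are placeholders.
processor = Processor(
    image_uri="<account-id>.dkr.ecr.<region>.amazonaws.com/monitoring-custom-metrics:latest",
    role="<sagemaker-execution-role-arn>",
    instance_count=1,               # the package runs on a single instance
    instance_type="ml.m5.xlarge",
    env={
        "analysis_type": "DATA_QUALITY",
        # To evaluate statistics, also set baseline_statistics and baseline_constraints.
    },
)

processor.run(
    inputs=[
        ProcessingInput(
            source="s3://<bucket>/path/to/input/",
            destination="/opt/ml/processing/input/data",
        ),
        # Add ProcessingInput entries for baseline statistics/constraints and the
        # parameters file if your analysis needs them.
    ],
    outputs=[
        ProcessingOutput(
            source="/opt/ml/processing/output",   # assumed container output path
            destination="s3://<bucket>/path/to/output/",
        ),
    ],
)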

Available metrics

Data Quality

  • sum: example metric that sums up an entire column's data. Data type: Numerical.
  • email: example metric to verify that a field is not an email. Data type: String.

Model Quality

  • brier_score_loss: the Brier score measures the mean squared difference between the predicted probability and the actual outcome. Reference: https://scikit-learn.org/stable/modules/generated/sklearn.metrics.brier_score_loss.html
    • Output data type: Numerical.
    • Parameters:
      • ground_truth_attribute: [required] str. Model target attribute.
      • probability_attribute: [required] str. Model inference attribute.
      • threshold_override: [optional] float. Set constraint as baseline value + threshold_override.
  • gini: GINI is a model performance metric commonly used in Credit Science. It measures the ranking power of a model and ranges from 0 to 1: 0 means no ranking power, while 1 means perfect ranking power.
    • Output data type: Numerical.
    • Parameters:
      • ground_truth_attribute: [required] str. Model target attribute.
      • probability_attribute: [required] str. Model inference attribute.
      • threshold_override: [optional] float. Set constraint as baseline value + threshold_override.
  • pr_auc: PR AUC is the area under the precision-recall curve. Reference: https://scikit-learn.org/stable/auto_examples/model_selection/plot_precision_recall.html
    • Output data type: Numerical.
    • Parameters:
      • ground_truth_attribute: [required] str. Model target attribute.
      • probability_attribute: [required] str. Model inference attribute.
      • threshold_override: [optional] float. Set constraint as baseline value + threshold_override.
  • score_diff: score difference measures the absolute/relative difference between the predicted probability and the actual outcome.
    • Output data type: Numerical.
    • Parameters:
      • ground_truth_attribute: [required] str. Model target attribute.
      • probability_attribute: [required] str. Model inference attribute.
      • comparison_type: [optional] str. "absolute" to calculate absolute difference, "relative" to calculate relative difference. Default value is "absolute".
      • two_sided: [optional] bool. Default value is False:
        • two_sided = True sets the constraint and violation policy by the absolute value of the score difference, enabling detection of both under-prediction and over-prediction at the same time. The absolute value of the score difference is returned.
        • two_sided = False sets the constraint and violation policy by the original value of the score difference.
      • comparison_operator: [optional] str. Used when two_sided is set to False: "GreaterThanThreshold" to detect over-prediction, "LessThanThreshold" to detect under-prediction.
      • threshold_override: [optional] float. Set constraint as baseline value + threshold_override.
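
For example, a parameters file selecting two of the metrics above could be generated with a short script. This is a sketch only: the metric and parameter names come from the table, the column names are examples, and the output path should match the COPY statement you use in your Dockerfile.

import json

# Metric and parameter names come from the Model Quality table above;
# "label" and "score" are example column names.
parameters = {
    "pr_auc": {
        "ground_truth_attribute": "label",
        "probability_attribute": "score",
        "threshold_override": 0.05,
    },
    "score_diff": {
        "ground_truth_attribute": "label",
        "probability_attribute": "score",
        "comparison_type": "absolute",
        "two_sided": True,
    },
}

with open("local_resources/model_quality/parameters.json", "w") as f:
    json.dump(parameters, f, indent=2)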

How to implement additional metrics

Each metric is defined in its own class file. The file must be created in the right folder, based on the metric type:

  • data_quality
    • numerical
    • string
  • model_quality
    • binary_classification
    • multiclass_classification
    • regression

Unit tests

Metrics must also have a unit test file in the "test" folder, following the same structure.

Metric class conventions

  • A metric must inherit from an Abstract Base Class (ABC) called "ModelQualityMetric".
  • The class must include the following methods:
    • calculate_statistics.
    • suggest_constraints.
    • evaluate_constraints.
  • At the end of the file, a variable called "instance" must be exposed, which is an instance of the class itself.

Please refer to the existing metrics for additional details.
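
Putting these conventions together, a new Model Quality metric file might be laid out roughly as follows. This is a skeleton sketch only: the method signatures, return values, and the ModelQualityMetric import path are assumptions, so check the existing metric files for the real interfaces.

# Skeleton sketch; signatures and the import path below are assumptions.
# Place the file under the folder matching its type, e.g.
# model_quality/binary_classification/my_custom_metric.py
from model_quality.model_quality_metric import ModelQualityMetric  # assumed import path


class MyCustomMetric(ModelQualityMetric):
    def calculate_statistics(self, df, params):
        # Compute the raw statistic(s) for the current dataset.
        ...

    def suggest_constraints(self, df, params):
        # Derive baseline constraints from the calculated statistics.
        ...

    def evaluate_constraints(self, df, constraints, params):
        # Compare current statistics against the baseline constraints
        # and report any violations.
        ...


# The file must expose an instance of the class (see "Metric class conventions" above).
instance = MyCustomMetric()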
