Python Framework to calibrate confidence estimates of classifiers like Neural Networks
Calibration Framework
Calibration framework in Python 3 for Neural Networks.
Copyright (C) 2019 Ruhr West University of Applied Sciences, Bottrop, Germany AND Visteon Electronics Germany GmbH, Kerpen, Germany
This Source Code Form is subject to the terms of the Mozilla Public License, v. 2.0. If a copy of the MPL was not distributed with this file, You can obtain one at http://mozilla.org/MPL/2.0/.
Overview
This framework is designed to calibrate the confidence estimates of classifiers like Neural Networks. Modern Neural Networks are likely to be overconfident in their predictions. However, reliable confidence estimates of such classifiers are crucial, especially in safety-critical applications.
For example: given 100 predictions, each with a confidence of 80%, the observed accuracy should also be 80% (neither more nor less). This behaviour is achievable with several calibration methods.
The framework is structured as follows:
netcal
  .binning          # binning methods
  .scaling          # scaling methods
  .regularization   # regularization methods
  .presentation     # presentation methods
  .metrics          # metrics for measuring miscalibration
examples            # example code snippets
Installation
The installation of the calibration suite is quite easy with setuptools. You can either install this framework using pip:
pip3 install netcal
Or simply invoke the following command to install the calibration suite:
python3 setup.py install
Calibration Metrics
The most common metric to determine miscalibration is the Expected Calibration Error (ECE) [1]. This metric divides the confidence space into several bins and measures the observed accuracy in each bin. The gaps between observed accuracy and average confidence in each bin are weighted by the number of samples per bin and summed up. The Maximum Calibration Error (MCE) denotes the highest gap over all bins. The Average Calibration Error (ACE) [11] denotes the average miscalibration, where each bin is weighted equally.
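Written out for M bins B_1, ..., B_M over n samples, with acc(B_m) and conf(B_m) denoting the average accuracy and average confidence within bin B_m, the standard definitions from the literature read:

\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|
\mathrm{MCE} = \max_{m \in \{1, \dots, M\}} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|
\mathrm{ACE} = \frac{1}{M} \sum_{m=1}^{M} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|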
A further group of methods are the regularization tools, which are added to the loss during the training of a Neural Network.
Methods
The calibration methods are separated into binning and scaling methods. The binning methods divide the confidence space into several bins (like ECE) and perform calibration on each bin. The scaling methods scale the confidence estimates or logits directly to calibrated confidence estimates.
Most of the calibration methods are designed for binary classification tasks. Multi-class calibration is performed in a "one vs. all" manner by default (illustrated by the sketch below).
Some methods like “Isotonic Regression” utilize methods from the scikit-learn API [9].
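To illustrate the "one vs. all" strategy, here is a minimal sketch using scikit-learn's IsotonicRegression. This illustrates the general idea only; it is not the internal netcal implementation:

import numpy as np
from sklearn.isotonic import IsotonicRegression

def one_vs_all_calibrate(confidences, ground_truth):
    # calibrate each class against the rest with a binary method,
    # then renormalize the rows to valid probability distributions
    n_samples, n_classes = confidences.shape
    calibrated = np.zeros_like(confidences, dtype=float)
    for k in range(n_classes):
        binary_labels = (ground_truth == k).astype(float)
        iso = IsotonicRegression(out_of_bounds='clip')
        calibrated[:, k] = iso.fit_transform(confidences[:, k], binary_labels)
    row_sums = np.clip(calibrated.sum(axis=1, keepdims=True), 1e-12, None)
    return calibrated / row_sums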
Binning
Implemented binning methods are:
Histogram Binning
Isotonic Regression
Bayesian Binning into Quantiles (BBQ)
Ensemble of Near Isotonic Regression (ENIR)
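All calibration methods share the same fit/transform interface shown in the Examples section below. A minimal sketch for Histogram Binning, assuming the confidences and ground_truth arrays described there and an arbitrary choice of 20 bins:

from netcal.binning import HistogramBinning

hist = HistogramBinning(20)             # number of bins: arbitrary choice
hist.fit(confidences, ground_truth)
calibrated = hist.transform(confidences)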
Scaling
Implemented scaling methods are:
Logistic Calibration (Platt Scaling)
Temperature Scaling
Beta Calibration
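As an example of a scaling method, Temperature Scaling (used in the Examples section below) divides the network logits z by a single scalar T > 0 that is optimized on a held-out set:

\hat{p}_i = \mathrm{softmax}\left( z_i / T \right)

With T > 1 the predictive distribution is softened (less confident), with T < 1 it is sharpened.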
Regularization
Implemented regularization methods are:
Confidence Penalty [8]
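The confidence penalty subtracts a weighted entropy term of the predictive distribution from the training loss, penalizing overconfident outputs [8]. A minimal NumPy sketch of such a penalized loss (the softmax helper and the weight beta are illustrative, not part of the netcal API):

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max(axis=1, keepdims=True))   # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def penalized_loss(logits, labels, beta=0.1):
    # cross-entropy minus beta-weighted entropy of the predictions
    probs = softmax(logits)
    n = logits.shape[0]
    cross_entropy = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1).mean()
    return cross_entropy - beta * entropy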
Visualization
For a visualization of miscalibration, one can use Confidence Histograms & Reliability Diagrams. Similar to the ECE, the output space is divided into equally spaced bins; the calibration gap between bin accuracy and bin confidence is visualized as a histogram.
Examples
The calibration methods work with the predicted confidence estimates of a Neural Network. This is a basic example which uses softmax predictions of a classification task with 10 classes and the given NumPy arrays:
ground_truth  # this is a NumPy 1-D array with ground truth digits between 0-9 - shape: (n_samples,)
confidences   # this is a NumPy 2-D array with confidence estimates between 0-1 - shape: (n_samples, n_classes)
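If you want to run the snippets below without a trained network, such arrays can be simulated. The following sketch generates purely synthetic, deliberately overconfident predictions (all parameters are arbitrary):

import numpy as np

n_samples, n_classes = 1000, 10
rng = np.random.default_rng(0)

ground_truth = rng.integers(0, n_classes, size=n_samples)   # synthetic labels
logits = rng.normal(size=(n_samples, n_classes))
logits[np.arange(n_samples), ground_truth] += 2.0           # favour the true class
logits *= 3.0                                               # sharpen -> overconfidence
confidences = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)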
This is an example for Temperature Scaling, but it works for every other calibration method as well (keep in mind that the constructor parameters differ between methods):
import numpy as np
from netcal.scaling import TemperatureScaling

temperature = TemperatureScaling()
temperature.fit(confidences, ground_truth)
calibrated = temperature.transform(confidences)
The miscalibration can be determined with the ECE:
from netcal.metrics import ECE

n_bins = 10
ece = ECE(n_bins)
uncalibrated_score = ece.measure(confidences, ground_truth)
calibrated_score = ece.measure(calibrated, ground_truth)
The miscalibration can be visualized with a Reliability Diagram:
from netcal.presentation import ReliabilityDiagram

n_bins = 10
diagram = ReliabilityDiagram(n_bins)
diagram.plot(confidences, ground_truth)   # visualize miscalibration of uncalibrated
diagram.plot(calibrated, ground_truth)    # visualize miscalibration of calibrated
References