Project description

CLASSIC

Classifier Comparison and Evaluation Tools

A Python package for the analysis of classification performance.

Applications

Critical Difference Diagram

A main application is the comparison of categorized paired metric data, e.g. accuracy results of different classification methods in machine learning. For this purpose, Classic implements the critical difference diagram described in [1].

Example

Imagine that we have five different classification methods tested on 14 different datasets. Each classifier returns an accuracy result on the test set of each dataset. We collect the results in a table like this:

Classifier  Accuracy on datasets 1-14
A           0.60 0.81 0.62 0.19 0.93 0.54 0.53 0.41 0.21 0.97 0.32 0.82 0.38 0.75
B           0.33 0.68 0.43 0.23 0.90 0.43 0.32 0.20 0.22 0.86 0.21 0.82 0.41 0.73
C           0.25 0.64 0.40 0.10 0.85 0.39 0.31 0.19 0.18 0.90 0.23 0.78 0.43 0.71
D           0.64 0.84 0.60 0.26 0.95 0.60 0.36 0.37 0.19 0.95 0.44 0.84 0.41 0.84
E           0.37 0.68 0.47 0.18 0.88 0.37 0.27 0.25 0.24 0.79 0.25 0.83 0.36 0.64

We load this table into a NumPy array of shape (5, 14) and call the function classic.critical_difference_diagram. The resulting plot can be seen below.
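A minimal sketch of that call follows. The function name is taken from the text above; the import name classic and the labels keyword are assumptions to check against the package documentation.

    import numpy as np
    import classic  # import name as used in this description (assumption)

    # Accuracy results: one row per classifier (A-E), one column per dataset.
    accuracies = np.array([
        [0.60, 0.81, 0.62, 0.19, 0.93, 0.54, 0.53, 0.41, 0.21, 0.97, 0.32, 0.82, 0.38, 0.75],
        [0.33, 0.68, 0.43, 0.23, 0.90, 0.43, 0.32, 0.20, 0.22, 0.86, 0.21, 0.82, 0.41, 0.73],
        [0.25, 0.64, 0.40, 0.10, 0.85, 0.39, 0.31, 0.19, 0.18, 0.90, 0.23, 0.78, 0.43, 0.71],
        [0.64, 0.84, 0.60, 0.26, 0.95, 0.60, 0.36, 0.37, 0.19, 0.95, 0.44, 0.84, 0.41, 0.84],
        [0.37, 0.68, 0.47, 0.18, 0.88, 0.37, 0.27, 0.25, 0.24, 0.79, 0.25, 0.83, 0.36, 0.64],
    ])  # shape (5, 14)

    # The keyword for classifier names is an assumption; check the docs.
    classic.critical_difference_diagram(accuracies, labels=["A", "B", "C", "D", "E"])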

[Figure: critical difference diagram]

Markings on this number line represent the average rank of each classifier based on its accuracy over all datasets. The lowest rank corresponds to the highest accuracy. Two classifiers are connected by a horizontal line if they do not differ significantly. Significance is based on post-hoc Wilcoxon signed-rank tests for each pair of classifiers.
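The quantities behind the diagram can be illustrated with NumPy and SciPy alone. A minimal sketch, reusing the accuracies array from the snippet above; note that the package may additionally apply a multiple-testing correction, which this sketch omits:

    from scipy.stats import rankdata, wilcoxon

    # Rank classifiers per dataset; negate so the highest accuracy gets rank 1.
    ranks = rankdata(-accuracies, axis=0)   # shape (5, 14)
    avg_ranks = ranks.mean(axis=1)          # one average rank per classifier

    # Post-hoc Wilcoxon signed-rank test for each pair of classifiers.
    n = accuracies.shape[0]
    for i in range(n):
        for j in range(i + 1, n):
            p = wilcoxon(accuracies[i], accuracies[j]).pvalue
            print(f"{'ABCDE'[i]} vs {'ABCDE'[j]}: p = {p:.3f}")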

Therefore, classifier D seems to be the best choice for the overall classification task. It works best on the 14 chosen datasets, although it is not the best classifier for every single dataset on its own. But we can also see that there is no significant (alpha = 0.05) difference between the accuracy results of classifiers D and A. If D were much more computationally expensive than A, we should consider choosing A instead.

Scatter Matrix

For an in-depth comparison of the classifiers on single datasets, Classic provides a special type of scatter matrix designed to compare multiple categories of data.

Example

For a more elaborate decision in the example above, we could directly compare the three best classifiers A, B and D using the function classic.scatter_comparison, as sketched below.
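Reusing the accuracies array and import from the first snippet, the call might look like this (the argument names are assumptions):

    # Rows 0, 1 and 3 correspond to classifiers A, B and D.
    classic.scatter_comparison(accuracies[[0, 1, 3]], labels=["A", "B", "D"])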

[Figure: scatter comparison]

Points above the diagonal line represent datasets that are better classified by the method named in the upper left corner. A horizontal and a vertical line indicate the mean accuracy of the corresponding classifier; a solid line marks the higher mean. A choice is now easy for the comparisons of A versus B and of B versus D. We also see that D beats A in mean accuracy, but that A has a clear advantage on one dataset, which lies well beyond the diagonal line marking a five percent difference. Such datasets could now be analyzed further, for example by looking at the number of training and test instances; an option for setting the opacity of the points in the scatter plots accordingly is available.

Feature Score

Evaluating the importance of features in data can be very helpful in reducing the dimensionality of the feature space. While principal component analysis (PCA) transforms the original features into new ones, it can also be used to create a ranking of the original features. Classic is able to compute such a feature score for a given dataset.
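Classic's exact definition of the score is not spelled out here; one common PCA-based variant weights each feature's absolute component loadings by the explained variance of the components. A minimal sketch of that idea, for illustration only:

    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    X, _ = load_iris(return_X_y=True)
    pca = PCA().fit(StandardScaler().fit_transform(X))

    # Weight each feature's absolute loadings by the explained variance ratio,
    # then normalize so the best feature scores 1. This is an illustration;
    # Classic's feature score may be defined differently.
    score = np.abs(pca.components_).T @ pca.explained_variance_ratio_
    score /= score.max()
    print(dict(zip(load_iris().feature_names, score.round(2))))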

Example

We can analyze the importance of the four features in the well-known IRIS dataset by using the method classic.plot_feature_score.
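A sketch of that call on the Iris data as shipped with scikit-learn; the method name is from the text above, while everything about the signature is an assumption:

    from sklearn.datasets import load_iris

    iris = load_iris()
    # Only the data matrix is passed here; whether feature names can or must
    # be supplied as a further argument should be checked against the docs.
    classic.plot_feature_score(iris.data)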

The plot shows the normalized feature score. The 'most important' feature based on that score is 'petal length (cm)'; all other features have a relatively low score, e.g. the score of 'sepal length (cm)' is about 80% lower. The red markings show the accuracy results of a ridge classifier that only uses the first n features in descending score order, where n is incremented with each step on the x-axis.
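Those red markings can be reproduced in spirit with scikit-learn. A sketch under the assumption of a simple cross-validated ridge classifier and a hard-coded feature order (Classic's exact evaluation setup may differ):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import RidgeClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_iris(return_X_y=True)
    # Feature indices in descending score order; hard-coded here to put
    # 'petal length (cm)' (index 2) first, as the plot description suggests.
    order = [2, 3, 0, 1]  # assumption for illustration

    for n in range(1, len(order) + 1):
        acc = cross_val_score(RidgeClassifier(), X[:, order[:n]], y, cv=5).mean()
        print(f"first {n} feature(s): accuracy = {acc:.3f}")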

References

[1] Demšar, Janez (2006). "Statistical comparisons of classifiers over multiple data sets." Journal of Machine Learning Research 7, 1-30.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
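Alternatively, the package can be installed from PyPI in the usual way, using the distribution name of the files below:

    pip install classically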

Source Distribution

classically-1.0.0.tar.gz (11.7 kB)

Uploaded Source

Built Distribution

classically-1.0.0-py3-none-any.whl (12.9 kB)

Uploaded Python 3

File details

Details for the file classically-1.0.0.tar.gz.

File metadata

  • Download URL: classically-1.0.0.tar.gz
  • Upload date:
  • Size: 11.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.9.10 Windows/10

File hashes

Hashes for classically-1.0.0.tar.gz
Algorithm Hash digest
SHA256 20482b42e687381a2eef697a17bdb087965c2b6ce8226700ba676f3e98956338
MD5 e5c4e8d36f20e878420ca0ee32010963
BLAKE2b-256 40572ce7efa8c2e2dbec7694d44a4694488d54765a0e38fda5b83bd559cc1050



File details

Details for the file classically-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: classically-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 12.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: poetry/1.1.13 CPython/3.9.10 Windows/10

File hashes

Hashes for classically-1.0.0-py3-none-any.whl
Algorithm Hash digest
SHA256 903b83c43bc5d06db1f62484158dc5acef2ee47a760c272c66de107eaecfc74a
MD5 3232ba68d347e0c0995e54d1e8db4a66
BLAKE2b-256 8ab658154c2ae09054f3bde55125d007e803b6d6d4a0b31f06bbe5459b3002ed


