Hotelling's T-squared (T2) for process monitoring and MYT decomposition
Python implementation of Hotelling's T-squared (T2) for process monitoring and MYT decomposition.
Table of Contents
- Specific Implementation
- How is TSquared related to the t-test?
- How is TSquared related to the Mahalanobis distance?
- How is TSquared related to MCD?
- Should I use PCA with TSquared?
- Can I apply TSquared to any kind of process? What are the conditions on the parameters for using TSquared?
- Should I clean the dataset before training? Is there a procedure to clean the data?
- Which variables cause the outlier? What is MYT decomposition?
- How do deviation types impact TSquared?
- Is TSquared monitoring sufficient, or do I still need univariate monitoring?
- UCL: what does it mean in a multivariate context? How is the UCL computed?
- My data are not normally distributed. Does it help to apply a Box-Cox transformation to each variable?
- Classical multivariate T2 chart, in which Hotelling's T2 statistic is computed as the distance of a multivariate observation from the multivariate mean, scaled by the covariance matrix of the variables
- Python scikit-learn-like implementation
- Efficient with large datasets
- MYT decomposition
- Python (>= 3.6)
TSquared can be installed from PyPI:

```
pip install tsquared
```
Hotelling's T2 was originally designed to compare a sampled distribution to a reference distribution; it is known as a generalization of the t-statistic for multivariate hypothesis testing.
For monitoring, only a single multivariate observation is compared to the reference distribution.
```python
from tsquared import HotellingT2
from pingouin import multivariate_normality  # multivariate normality test (Henze-Zirkler)

clf = HotellingT2()

# Hotelling's T2 without dataset cleaning.
clf.fit(X_train)
test_scores = clf.score_samples(X_test)

# Hotelling's T2 with one-pass cleaning.
clf.cleanfit(X_train, iter=1)
respg = multivariate_normality(X_train, alpha=.05)
test_scores = clf.score_samples(X_test_norm)

# Hotelling's T2 with two-pass cleaning.
clf.cleanfit(X_train, iter=2)
respg = multivariate_normality(X_train, alpha=.05)
test_scores = clf.score_samples(X_test_norm)

# Hotelling's T2 with infinite-pass cleaning (set a high number of iterations;
# the size criterion will stop the iterations).
clf.cleanfit(X_train, iter=100)
test_scores = clf.score_samples(X_test_norm)

# Hotelling's T2 with smart cleaning (decision based on the multivariate
# normality coefficient).
clf.cleanfit(X_train)
test_scores = clf.score_samples(X_test, alpha=.05)
```
Hotelling's T2 is a generalization of the t-statistic for multivariate hypothesis testing. When a single multivariate observation is compared to a reference distribution, it can also be viewed as a generalization of the z-score. The difference lies in the nature of the entities being compared (a point versus a distribution), both in the distance computation and in the denominator of the equation.
What's the relationship with z-score then?
In this case, X is the observation (a point) in the multivariate space.
The covariance matrix of the reference multivariate distribution is made up of the covariance terms between each pair of dimensions, with the variance terms (squares of the standard deviations) on the diagonal.
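To make the z-score analogy concrete, here is the statistic in the usual textbook notation (sample mean vector $\bar{x}$ and sample covariance matrix $S$; these symbols are generic, not specific to this package):

$$T^2 = (x - \bar{x})^{\top} S^{-1} (x - \bar{x})$$

In one dimension, $S$ reduces to the variance $s^2$ and $T^2$ collapses to the squared z-score $\left( \frac{x - \bar{x}}{s} \right)^2$.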
MCD (minimum covariance determinant) is an algorithm available in the outlier-detection framework pyOD. MCD is based on the Mahalanobis squared distance (MSD, which is essentially Hotelling's T2). Based on the distribution of the MSD, training consists of finding the subgroup of points ($h < n$) that minimizes the covariance determinant. This subgroup can be thought of as the minimum set of points that must not be outlying (with $h > n/2$ to keep a sufficient number of points).
⟹ It is equivalent to the cleaning operation in TSquared.
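As an illustration of this idea, here is a minimal sketch using scikit-learn's MinCovDet rather than pyOD; the synthetic data, parameters, and 95% threshold are only for demonstration:

```python
import numpy as np
from sklearn.covariance import EmpiricalCovariance, MinCovDet

rng = np.random.default_rng(0)
X_train = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=200)

# Robust fit: the covariance is estimated on the h-point subset
# that minimizes the determinant of the covariance matrix.
mcd = MinCovDet(support_fraction=0.75, random_state=0).fit(X_train)

# Classical fit on all points, for comparison.
emp = EmpiricalCovariance().fit(X_train)

# Mahalanobis squared distances (the quantity behind Hotelling's T2).
msd_robust = mcd.mahalanobis(X_train)
msd_classic = emp.mahalanobis(X_train)

# Points far from the robust estimate are the ones a cleaning step would drop.
print("Flagged by robust MSD:", np.sum(msd_robust > np.quantile(msd_robust, 0.95)))
```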
Yes, you can!
But this should be done cautiously:
- PCA defines new coordinates for each point
- PCA is often used to reduce dimensionality by selecting the strongest "principal" components, which define the underlying relation between the variables
- The T2 score computed on all PCA components equals the T2 score computed on all original variables (see the sketch below)
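A quick check of that last point (a sketch on synthetic data using scikit-learn's PCA, which is not part of this package): because keeping all components is just an invertible change of coordinates of the centered data, the T2 score is unchanged.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3) + 0.5, size=300)

def t2_scores(data):
    """Hotelling's T2 of each row against the sample mean and covariance."""
    centered = data - data.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
    return np.einsum("ij,jk,ik->i", centered, cov_inv, centered)

# Keep ALL components: PCA then amounts to an invertible rotation of the data.
scores_all = PCA(n_components=X.shape[1]).fit_transform(X)

print(np.allclose(t2_scores(X), t2_scores(scores_all)))  # True
```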
Can we apply T2 on a reduced number of (principal) components? Let's try a 2D example. In the following picture, the relation between Var1 and Var2 is mostly linear; these variables are strongly correlated. Let's suppose that the first component of the PCA is sufficient to define the relation, the second component being the noisy part of the relation.
In this case, monitoring any future observation amounts to applying a z-score (one dimension) to that observation compared with the distribution of all past observations projected on the first component axis.
If the correlation between Var1 and Var2 broke down, it would not be seen by this univariate monitoring, because it is the second component that would be impacted. This can happen if the sensor capturing Var2 is defective.
By extension to more dimensions, we understand that "blindly" reducing the number of components before TSquared monitoring is not advised. It is certainly not something to do for sensor validation.
Instead, if PCA is used to reduce the dimensionality, it is advised to also monitor the residual group of components in a separate monitoring, as sketched below.
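A possible layout for that split (a sketch under assumptions, not the package's API: the number of retained components, the variable names, and the SPE/Q-style residual statistic are illustrative choices from standard practice):

```python
import numpy as np
from sklearn.decomposition import PCA
from tsquared import HotellingT2

# Assumed: X_train and X_test are (n_samples, n_features) arrays.
n_keep = 3  # number of retained principal components (illustrative)

pca = PCA().fit(X_train)
scores_train = pca.transform(X_train)
scores_test = pca.transform(X_test)

# T2 monitoring on the retained components.
clf = HotellingT2()
clf.fit(scores_train[:, :n_keep])
t2_test = clf.score_samples(scores_test[:, :n_keep])

# Separate monitoring of the residual components (SPE/Q-style statistic):
# the variability left out of the first n_keep components.
spe_test = np.sum(scores_test[:, n_keep:] ** 2, axis=1)
```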
The basic assumption is that all variables should be normally distributed. However, the algorithm is tolerant to some extent if the distributions are not perfectly normal.
Yes, the cleaner the better
The TSquared procedure can be applied once or twice to the training set, and outliers can be filtered out at each round.
The risk of working with an uncleaned training set is that a univariate outlier may remain a multivariate inlier, because the multivariate UCL is then too large (observation n°78).
My data are not normally distributed. Does it help to apply a Box-Cox transformation to each variable?
The experiment was done using the TSquared auto-cleaning function and a Box-Cox transformation on each variable.
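For reference, a per-variable Box-Cox transformation can be done with SciPy before fitting (a sketch, assuming strictly positive data in an array named X_train; the column-wise loop is illustrative, not part of the package):

```python
import numpy as np
from scipy import stats
from tsquared import HotellingT2

# Assumed: X_train is an (n_samples, n_features) array of strictly positive values.
X_train_bc = np.empty_like(X_train, dtype=float)
lambdas = []

for j in range(X_train.shape[1]):
    # Box-Cox requires positive data; lambda is estimated by maximum likelihood.
    X_train_bc[:, j], lam = stats.boxcox(X_train[:, j])
    lambdas.append(lam)

clf = HotellingT2()
clf.fit(X_train_bc)
```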
Decomposition of T2 for Multivariate Control Chart Interpretation, Robert L. Mason, Nola D. Tracy and John C. Young
Application of Multivariate Statistical Quality Control in Pharmaceutical Industry, Mesut Ulen and Ibrahim Demir
Identifying Variables Contributing to Outliers in Phase I, Robert L. Mason, Youn-Min Chou and John C. Young
Multivariate Control Charts for Individual Observations, Nola D. Tracy, John C. Young and Robert L. Mason