
A toolkit for minimizing the data required to apply machine learning

Project description

The EU General Data Protection Regulation (GDPR) mandates the principle of data minimization, which requires that only data necessary to fulfill a certain purpose be collected. However, it can often be difficult to determine the minimal amount of data required, especially in complex machine learning models such as neural networks.

This toolkit is a first-of-its-kind implementation that helps reduce the amount of personal data needed to perform predictions with a machine learning model, by removing or generalizing some of the input features. The type of data minimization this toolkit focuses on is reducing the number and/or granularity of features collected for analysis.

The generalization process searches for groups of similar records. Then, for each feature, the individual values within each group are replaced with a representative value that is common across the whole group. This is done using knowledge encoded within the model, so that the resulting generalization has little to no impact on the model's accuracy.
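For intuition, the replacement step can be sketched as follows; the feature names and the choice of the median as the representative value are illustrative only, not the toolkit's actual algorithm:

```python
import numpy as np

# Toy illustration only: three similar records grouped together, with
# each feature replaced by the group's median as the representative value.
group = np.array([
    [34.0, 12.0],   # [age, education-years]
    [36.0, 12.0],
    [38.0, 14.0],
])

representative = np.median(group, axis=0)              # [36.0, 12.0]
generalized = np.tile(representative, (len(group), 1))  # every record now identical
print(generalized)
```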

The minimization-toolkit is compatible with Python 3.7.

Official ai-minimization-toolkit documentation

Using the minimization-toolkit

The main class, GeneralizeToRepresentative, is a scikit-learn compatible Transformer that receives an existing estimator and labeled training data, and learns the generalizations that can be applied to any newly collected data for analysis by the original model. The fit() method learns the generalizations and the transform() method applies them to new data.

It is also possible to export the generalizations as feature ranges.

The current implementation supports only numeric features, so any categorical features must be transformed to a numeric representation before using this class.
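For instance, string-valued features can be mapped to numbers with scikit-learn's OrdinalEncoder before fitting the transformer; the column values below are made up:

```python
import numpy as np
from sklearn.preprocessing import OrdinalEncoder

# Hypothetical categorical data: [occupation, marital-status]
X_raw = np.array([
    ["clerical", "married"],
    ["technical", "single"],
    ["clerical", "single"],
])

# Each category is mapped to an integer code (in alphabetical order).
encoder = OrdinalEncoder()
X_numeric = encoder.fit_transform(X_raw)
print(X_numeric)
```

One-hot encoding works as well; what matters is that the data reaching the transformer is entirely numeric.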

Start by training your machine learning model. In this example, we will use a DecisionTreeClassifier, but any scikit-learn model can be used. We will use the iris dataset in our example.

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

dataset = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(dataset.data, dataset.target, test_size=0.2)

base_est = DecisionTreeClassifier()
base_est.fit(X_train, y_train)

Now create the GeneralizeToRepresentative transformer and train it. Supply it with the original model and the desired target accuracy. The training process may receive the original labeled training data or the model’s predictions on the data.

from minimization import GeneralizeToRepresentative

predictions = base_est.predict(X_train)
gen = GeneralizeToRepresentative(base_est, target_accuracy=0.9)
gen.fit(X_train, predictions)

Now use the transformer to transform new data, for example the test data.

transformed = gen.transform(X_test)

The transformed data has the same columns and formats as the original data, so it can be used directly to derive predictions from the original model.

new_predictions = base_est.predict(transformed)
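Since the whole point of target_accuracy is to bound the accuracy loss, it is worth verifying the impact on held-out data. The self-contained sketch below uses simple rounding as a stand-in for gen.transform() (the toolkit itself is not imported here), but the comparison pattern is the same:

```python
import numpy as np
from sklearn import datasets
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

dataset = datasets.load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    dataset.data, dataset.target, test_size=0.2, random_state=0)

base_est = DecisionTreeClassifier(random_state=0)
base_est.fit(X_train, y_train)

# Stand-in for gen.transform(): coarsen every feature to whole numbers.
transformed = np.round(X_test)

original_acc = accuracy_score(y_test, base_est.predict(X_test))
transformed_acc = accuracy_score(y_test, base_est.predict(transformed))
print(f"original: {original_acc:.3f}, generalized: {transformed_acc:.3f}")
```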

To export the resulting generalizations, retrieve the Transformer’s _generalize parameter.

generalizations = gen._generalize

The returned object has the following structure:

{
  ranges:
  {
    list of (<feature name>: [<list of values>])
  },
  untouched: [<list of feature names>]
}

For example:

{
  ranges:
  {
    age: [21.5, 39.0, 51.0, 70.5],
    education-years: [8.0, 12.0, 14.5]
  },
  untouched: ["occupation", "marital-status"]
}

Each value inside a range list represents a cutoff point. For the age feature, for example, the resulting ranges are: <21.5, 21.5-39.0, 39.0-51.0, 51.0-70.5, >70.5. The untouched list contains features that were not generalized, i.e., whose values should remain unchanged.
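Such cutoff points can be applied to raw values with numpy.digitize, which returns the index of the range each value falls into; the age values below are hypothetical:

```python
import numpy as np

# Hypothetical exported cutoff points for the "age" feature.
age_cutoffs = [21.5, 39.0, 51.0, 70.5]

ages = np.array([18.0, 30.0, 45.0, 80.0])

# Index 0 -> <21.5, 1 -> 21.5-39.0, 2 -> 39.0-51.0,
# 3 -> 51.0-70.5, 4 -> >70.5
bins = np.digitize(ages, age_cutoffs)
print(bins)  # -> [0 1 2 4]
```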



Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

ai_minimization_toolkit-0.0.1-py3-none-any.whl (13.0 kB)

Uploaded Python 3

File details

Details for the file ai_minimization_toolkit-0.0.1-py3-none-any.whl.

File metadata

  • Download URL: ai_minimization_toolkit-0.0.1-py3-none-any.whl
  • Size: 13.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.21.0 setuptools/45.2.0 requests-toolbelt/0.9.1 tqdm/4.42.1 CPython/3.7.1

File hashes

Hashes for ai_minimization_toolkit-0.0.1-py3-none-any.whl
Algorithm Hash digest
SHA256 f57882a16ae5bb0050bb8ff6cf0183ce85494bc42bd33aa5a5cfae5f1b170f27
MD5 001b9fcf8cfdd3fcf039b7632d194e35
BLAKE2b-256 9b7a33ad1ee462149be1a71748312502b723df58d9a5923b0a1d2c09e694447a

