
Fill missing values in DataFrames with Restricted Boltzmann Machines

Project description

Fill missing values in a pandas DataFrame using a Restricted Boltzmann Machine.

boltzmannclean provides a class implementing the scikit-learn transformer interface for creating and training a Restricted Boltzmann Machine, which can then be sampled from to fill in missing values in the training data or in new data of the same format. Utility functions are provided for applying the transformation to a pandas DataFrame, with the option to treat columns as either continuous numerical or categorical features.

Installation

pip install boltzmannclean

Usage

To fill in missing values in a DataFrame with a minimum of fuss, a cleaning function is provided:

import boltzmannclean

my_clean_dataframe = boltzmannclean.clean(
    dataframe=my_dataframe,
    numerical_columns=['Height', 'Weight'],
    categorical_columns=['Colour', 'Shape'],
    tune_rbm=True  # tune RBM hyperparameters for my data
)
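
clean returns a new DataFrame with the listed columns imputed. A quick sanity check, shown here as our own illustration rather than part of the package's API, is to confirm that no missing values remain in those columns:

# Our own sanity check (not part of boltzmannclean): the imputed
# columns should contain no missing values afterwards.
filled_columns = ['Height', 'Weight', 'Colour', 'Shape']
assert my_clean_dataframe[filled_columns].isna().sum().sum() == 0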

To create and use the underlying scikit-learn transformer directly:

my_rbm = boltzmannclean.RestrictedBoltzmannMachine(
    n_hidden=100, learn_rate=0.01,
    batchsize=10, dropout_fraction=0.5, max_epochs=1,
    adagrad=True
)

my_rbm.fit_transform(a_numpy_array)

The hyperparameter values shown above are the defaults. The NumPy array passed to the transformer is expected to contain only numbers in the range [0, 1], with missing values given as np.nan or None (a scaling sketch for data outside this range follows the list below). The hyperparameters are:

  • n_hidden: the size of the hidden layer

  • learn_rate: learning rate for stochastic gradient descent

  • batchsize: batchsize for stochastic gradient descent

  • dropout_fraction: fraction of hidden nodes to be dropped out on each backward pass during training

  • max_epochs: maximum number of passes over the training data

  • adagrad: whether to use the Adagrad update rules for stochastic gradient descent
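
For raw data that is not already in the range [0, 1], a common approach is to min-max scale each column before fitting and invert the scaling afterwards. The sketch below is our own illustration of that preprocessing, not part of boltzmannclean; only RestrictedBoltzmannMachine and fit_transform come from the package, and the hyperparameter values are illustrative.

import boltzmannclean
import numpy as np

raw = np.array([
    [5.1, 3.5],
    [4.9, np.nan],  # missing value to be imputed
    [4.7, 3.2],
    [4.6, 3.1],
])

# Min-max scale each column into [0, 1], ignoring missing entries.
col_min = np.nanmin(raw, axis=0)
col_max = np.nanmax(raw, axis=0)
col_range = np.where(col_max > col_min, col_max - col_min, 1.0)
scaled = (raw - col_min) / col_range

rbm = boltzmannclean.RestrictedBoltzmannMachine(
    n_hidden=100, learn_rate=0.01,
    batchsize=2, dropout_fraction=0.5, max_epochs=1,
    adagrad=True
)
filled = rbm.fit_transform(scaled)

# Map the imputed values back onto the original scale.
restored = filled * col_range + col_min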

Example

import boltzmannclean
import numpy as np
import pandas as pd
from sklearn import datasets

iris = datasets.load_iris()

df_iris = pd.DataFrame(iris.data, columns=iris.feature_names)
df_iris['target'] = pd.Series(iris.target, dtype=str)

df_iris.head()

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                5.1              3.5               1.4               0.2        0
1                4.9              3.0               1.4               0.2        0
2                4.7              3.2               1.3               0.2        0
3                4.6              3.1               1.5               0.2        0
4                5.0              3.6               1.4               0.2        0

Add some noise:

noise = [(0, 1), (2, 0), (0, 4)]  # (row, column) cells to blank out

for noisy in noise:
    df_iris.iloc[noisy] = None

df_iris.head()

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                5.1              NaN               1.4               0.2     None
1                4.9              3.0               1.4               0.2        0
2                NaN              3.2               1.3               0.2        0
3                4.6              3.1               1.5               0.2        0
4                5.0              3.6               1.4               0.2        0

Clean the DataFrame:

df_iris_cleaned = boltzmannclean.clean(
    dataframe=df_iris,
    numerical_columns=[
        'sepal length (cm)', 'sepal width (cm)',
        'petal length (cm)', 'petal width (cm)'
    ],
    categorical_columns=['target'],
    tune_rbm=True
)

df_iris_cleaned.round(1).head()

   sepal length (cm)  sepal width (cm)  petal length (cm)  petal width (cm)  target
0                5.1              3.3               1.4               0.2        0
1                4.9              3.0               1.4               0.2        0
2                6.3              3.2               1.3               0.2        0
3                4.6              3.1               1.5               0.2        0
4                5.0              3.6               1.4               0.2        0

The larger the dataset and the more correlated its columns, the better the imputed values will be.
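
As a rough check on this toy example, one can compare the imputed numerical cells against the original values that were blanked out. This comparison is our own illustration and not part of boltzmannclean.

# Our own spot check: compare the imputed cells with the held-out originals.
original = pd.DataFrame(iris.data, columns=iris.feature_names)

for row, col in [(0, 1), (2, 0)]:  # the numerical cells set to None above
    name = iris.feature_names[col]
    true_value = original.iloc[row, col]
    imputed_value = float(df_iris_cleaned.loc[row, name])
    print(f"row {row}, {name}: true={true_value:.1f}, imputed={imputed_value:.1f}")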

