sherlockml-boltzmannclean
=========================
Fill missing values in a pandas DataFrame using a Restricted Boltzmann Machine.
Provides a class implementing the scikit-learn transformer interface for creating and training a Restricted Boltzmann Machine. This can then be sampled from to fill in missing values in training data or new data of the same format. Utility functions for applying the transformations to a pandas DataFrame are provided, with the option to treat columns as either continuous numerical or categorical features.
Installation
------------
.. code-block:: bash

    pip install sherlockml-boltzmannclean
Usage
-----
To fill in missing values from a DataFrame with the minimum of fuss, a cleaning function is provided:
.. code-block:: python

    import boltzmannclean

    my_clean_dataframe = boltzmannclean.clean(
        dataframe=my_dataframe,
        numerical_columns=['Height', 'Weight'],
        categorical_columns=['Colour', 'Shape'],
        tune_rbm=True  # tune RBM hyperparameters for my data
    )
To create and use the underlying scikit-learn transformer directly:
.. code-block:: python

    my_rbm = boltzmannclean.RestrictedBoltzmannMachine(
        n_hidden=100, learn_rate=0.01,
        batchsize=10, dropout_fraction=0.5, max_epochs=1,
        adagrad=True
    )

    my_rbm.fit_transform(a_numpy_array)
The values shown above are the default RBM hyperparameters. The NumPy array passed in is expected to contain only numbers in the range [0, 1], with missing values given as np.nan or None. The hyperparameters are:
- *n_hidden*: the size of the hidden layer
- *learn_rate*: learning rate for stochastic gradient descent
- *batchsize*: batchsize for stochastic gradient descent
- *dropout_fraction*: fraction of hidden nodes to be dropped out on each backward pass during training
- *max_epochs*: maximum number of passes over the training data
- *adagrad*: whether to use the Adagrad update rules for stochastic gradient descent
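Since the transformer expects inputs in the range [0, 1], numerical features generally need scaling before being passed in as a NumPy array. A minimal sketch of one way to do this with NumPy alone — ``scale_to_unit_interval`` is an illustrative helper, not part of boltzmannclean:

.. code-block:: python

    import numpy as np

    def scale_to_unit_interval(array):
        """Column-wise min-max scale to [0, 1], leaving NaNs in place."""
        col_min = np.nanmin(array, axis=0)
        col_max = np.nanmax(array, axis=0)
        # Avoid division by zero for constant columns
        span = np.where(col_max > col_min, col_max - col_min, 1.0)
        return (array - col_min) / span

    data = np.array([[1.0, 10.0],
                     [2.0, np.nan],
                     [3.0, 30.0]])
    scaled = scale_to_unit_interval(data)
    # Known values now lie in [0, 1]; the NaN is preserved for the RBM to fill

The column minima and ranges would need to be kept so the imputed output can be mapped back to the original scale afterwards.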
Example
-------
.. code-block:: python

    import boltzmannclean
    import numpy as np
    import pandas as pd

    from sklearn import datasets

    iris = datasets.load_iris()

    df_iris = pd.DataFrame(iris.data, columns=iris.feature_names)
    df_iris['target'] = pd.Series(iris.target, dtype=str)

    df_iris.head()
.. raw:: html

    <embed>
    <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>sepal length (cm)</th> <th>sepal width (cm)</th> <th>petal length (cm)</th> <th>petal width (cm)</th> <th>target</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>5.1</td> <td>3.5</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>1</th> <td>4.9</td> <td>3.0</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>2</th> <td>4.7</td> <td>3.2</td> <td>1.3</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>3</th> <td>4.6</td> <td>3.1</td> <td>1.5</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>4</th> <td>5.0</td> <td>3.6</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> </tbody></table>
    </embed>
Add some noise:
.. code-block:: python

    noise = [(0, 1), (2, 0), (0, 4)]

    for noisy in noise:
        df_iris.iloc[noisy] = None

    df_iris.head()
.. raw:: html

    <embed>
    <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>sepal length (cm)</th> <th>sepal width (cm)</th> <th>petal length (cm)</th> <th>petal width (cm)</th> <th>target</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>5.1</td> <td>NaN</td> <td>1.4</td> <td>0.2</td> <td>None</td> </tr> <tr> <th>1</th> <td>4.9</td> <td>3.0</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>2</th> <td>NaN</td> <td>3.2</td> <td>1.3</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>3</th> <td>4.6</td> <td>3.1</td> <td>1.5</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>4</th> <td>5.0</td> <td>3.6</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> </tbody></table>
    </embed>
Clean the DataFrame:
.. code-block:: python

    df_iris_cleaned = boltzmannclean.clean(
        dataframe=df_iris,
        numerical_columns=[
            'sepal length (cm)', 'sepal width (cm)',
            'petal length (cm)', 'petal width (cm)'
        ],
        categorical_columns=['target'],
        tune_rbm=True
    )

    df_iris_cleaned.round(1).head()
.. raw:: html

    <embed>
    <table border="1" class="dataframe"> <thead> <tr style="text-align: right;"> <th></th> <th>sepal length (cm)</th> <th>sepal width (cm)</th> <th>petal length (cm)</th> <th>petal width (cm)</th> <th>target</th> </tr> </thead> <tbody> <tr> <th>0</th> <td>5.1</td> <td>3.3</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>1</th> <td>4.9</td> <td>3.0</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>2</th> <td>6.3</td> <td>3.2</td> <td>1.3</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>3</th> <td>4.6</td> <td>3.1</td> <td>1.5</td> <td>0.2</td> <td>0</td> </tr> <tr> <th>4</th> <td>5.0</td> <td>3.6</td> <td>1.4</td> <td>0.2</td> <td>0</td> </tr> </tbody></table>
    </embed>
The larger and more correlated the dataset is, the better the imputed values will be.
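When judging how well imputation worked on an example like the one above, it helps to record which cells were originally missing, so that only the imputed entries are inspected afterwards. A small pandas sketch (the DataFrame here is illustrative, not the iris data):

.. code-block:: python

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'a': [5.1, 4.9, np.nan],
                       'b': [np.nan, 3.0, 3.2]})

    # True wherever a value will be imputed
    missing_mask = df.isna()

    # (row, column) positions of the cells to compare after cleaning
    imputed_positions = list(zip(*np.where(missing_mask)))

Keeping the mask from before cleaning makes it straightforward to compare imputed values against held-out ground truth, cell by cell.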