A Python package for easy data imputation


DataFiller

DataFiller is a Python library for imputing missing values in datasets. It provides a flexible and powerful way to handle missing data in both numerical arrays and time series data.

Why DataFiller

DataFiller is a pragmatic imputation tool: it is unlikely to match the absolute performance of large deep learning approaches on complex masking patterns, but it is much simpler to fit, easier to adapt, and more flexible to plug into existing workflows. It is also significantly faster than scikit-learn's IterativeImputer, which makes it a good choice when you need strong results with tight iteration cycles.

Key Features

Key features include:

  • Model-based imputation with lightweight models
  • Mixed data support, with one-hot encoding and categorical label recovery
  • A dedicated TimeSeriesImputer with lag/lead features
  • Performance-critical sections accelerated by Numba
  • Smart selection of the training subset for each target column
  • scikit-learn compatibility

Installation

Install DataFiller using pip or conda:

pip install datafiller
conda install -c conda-forge datafiller

Basic Usage

Imputing a NumPy Array

The MultivariateImputer can be used to fill missing values (NaN) in a 2D NumPy array.

import numpy as np
from datafiller import MultivariateImputer

# Create a matrix with missing values
X = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [5.0, np.nan, 7.0, 8.0],
    [9.0, 10.0, 11.0, np.nan],
    [13.0, 14.0, 15.0, 16.0],
])

# Initialize the imputer and fill the missing values
imputer = MultivariateImputer()
X_imputed = imputer(X)

print("Original Matrix:")
print(X)
print("\nImputed Matrix:")
print(X_imputed)

Imputing a Time Series DataFrame

The TimeSeriesImputer is designed to work with pandas DataFrames that have a DatetimeIndex. It automatically creates autoregressive features (lags and leads) to improve imputation accuracy.

import pandas as pd
import numpy as np
from datafiller import TimeSeriesImputer

# Create a time series DataFrame with missing values
rng = pd.date_range('2023-01-01', periods=10, freq='D')
data = {
    'feature1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'feature2': [10, 9, np.nan, 7, 6, 5, np.nan, 3, 2, 1],
}
df = pd.DataFrame(data, index=rng)

# Initialize the imputer with lags and leads
# Use t-1 and t+1 to impute missing values
ts_imputer = TimeSeriesImputer(lags=[1, -1])
df_imputed = ts_imputer(df)

print("Original DataFrame:")
print(df)
print("\nImputed DataFrame:")
print(df_imputed)
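The lag/lead features the imputer builds can be thought of as shifted copies of each column: a positive lag looks backward (t - k) and a negative lag looks forward (t + k), matching the lags=[1, -1] convention above. A minimal sketch using pandas.shift (illustrative only, not DataFiller's actual implementation; the helper name make_lag_features is hypothetical):

```python
import numpy as np
import pandas as pd

def make_lag_features(df, lags):
    """Build shifted copies of each column as extra predictors.

    Positive lags shift values forward in time (so row t sees t - k);
    negative lags do the opposite (row t sees t + k).
    """
    parts = {}
    for col in df.columns:
        for k in lags:
            parts[f"{col}_lag{k}"] = df[col].shift(k)
    return pd.concat(parts, axis=1)

rng = pd.date_range("2023-01-01", periods=5, freq="D")
df = pd.DataFrame({"x": [1.0, 2.0, 3.0, 4.0, 5.0]}, index=rng)
feats = make_lag_features(df, lags=[1, -1])
```

Rows at the edges of the series naturally get NaN in some shifted columns, which is one reason careful selection of training rows matters.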

Imputing a Mixed DataFrame with Categorical Features

Categorical columns are one-hot encoded and used as predictors for other columns, while missing categorical values are imputed with a classifier and mapped back to labels.

from datafiller.datasets import load_titanic
from datafiller import MultivariateImputer, ExtremeLearningMachine

# The Titanic data mixes numerical and categorical columns
df = load_titanic()

# Use a custom regressor for the numerical targets
imputer = MultivariateImputer(regressor=ExtremeLearningMachine())
df_imputed = imputer(df)

How It Works

DataFiller uses a model-based imputation strategy: for each column containing missing values, it trains a model that predicts that column from the other columns. Categorical, boolean, and string columns are one-hot encoded for feature construction, so they can drive the imputation of numerical targets; missing categorical values are themselves imputed with a classifier and mapped back to the original labels. The rows and columns used for training are selected as the largest rectangular subset of the data that is free of missing values, found with the optimask algorithm. This ensures the model is always fit on complete data of the highest possible quality, leading to more accurate imputations.
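The column-by-column strategy can be sketched in a few lines. This is an illustrative simplification, not DataFiller's implementation: it uses a plain least-squares model and trains only on fully complete rows, where DataFiller uses pluggable models and optimask's largest missing-free rectangle.

```python
import numpy as np

def impute_columnwise(X):
    """Sketch of column-wise, model-based imputation: for each column
    with missing values, fit a linear model on the fully observed rows
    (other columns as predictors) and predict the missing entries."""
    X = np.asarray(X, dtype=float).copy()
    complete = ~np.isnan(X).any(axis=1)   # fully observed rows
    for j in range(X.shape[1]):
        miss = np.isnan(X[:, j])
        if not miss.any():
            continue
        other = [k for k in range(X.shape[1]) if k != j]
        # Design matrix with an intercept column
        A = np.column_stack([np.ones(complete.sum()), X[np.ix_(complete, other)]])
        coef, *_ = np.linalg.lstsq(A, X[complete, j], rcond=None)
        # Only predict rows whose predictors are themselves observed
        usable = miss & ~np.isnan(X[:, other]).any(axis=1)
        B = np.column_stack([np.ones(usable.sum()), X[np.ix_(usable, other)]])
        X[usable, j] = B @ coef
    return X

X = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [5.0, np.nan, 7.0, 8.0],
    [9.0, 10.0, 11.0, np.nan],
    [13.0, 14.0, 15.0, 16.0],
])
X_filled = impute_columnwise(X)
```

On the perfectly linear matrix from the first example this recovers the missing values exactly; real data is noisier, which is where stronger models and better training-row selection pay off.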

For more details, see the documentation.
