A Python package for easy data imputation



DataFiller

DataFiller is a Python library for imputing missing values in datasets. It provides a flexible and powerful way to handle missing data in both numerical arrays and time series data.

Why DataFiller

DataFiller is a pragmatic imputation tool. It is unlikely to match the raw accuracy of large deep learning approaches on complex masking patterns, but it is much simpler to fit, easier to adapt, and more flexible to plug into existing workflows. It is also significantly faster than scikit-learn's IterativeImputer, making it a good choice when you need strong results on tight iteration cycles.

Key Features

Key features include:

  • Model-based imputation with lightweight models
  • Mixed data support with one-hot encoding and label recovery
  • A dedicated TimeSeriesImputer with lag/lead features
  • Performance-critical sections accelerated by Numba
  • Smart feature selection for training subsets
  • scikit-learn compatibility

Installation

Install DataFiller using pip or conda:

pip install datafiller
conda install -c conda-forge datafiller

Basic Usage

Imputing a NumPy Array

The MultivariateImputer can be used to fill missing values (NaN) in a 2D NumPy array.

import numpy as np
from datafiller import MultivariateImputer

# Create a matrix with missing values
X = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [5.0, np.nan, 7.0, 8.0],
    [9.0, 10.0, 11.0, np.nan],
    [13.0, 14.0, 15.0, 16.0],
])

# Initialize the imputer and fill the missing values
imputer = MultivariateImputer()
X_imputed = imputer(X)

print("Original Matrix:")
print(X)
print("\nImputed Matrix:")
print(X_imputed)

Imputing a Time Series DataFrame

The TimeSeriesImputer is designed to work with pandas DataFrames that have a DatetimeIndex. It automatically creates autoregressive features (lags and leads) to improve imputation accuracy.

import pandas as pd
import numpy as np
from datafiller import TimeSeriesImputer

# Create a time series DataFrame with missing values
rng = pd.date_range('2023-01-01', periods=10, freq='D')
data = {
    'feature1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'feature2': [10, 9, np.nan, 7, 6, 5, np.nan, 3, 2, 1],
}
df = pd.DataFrame(data, index=rng)

# Initialize the imputer with lags and leads:
# lags=[1, -1] uses the values at t-1 and t+1 as features
ts_imputer = TimeSeriesImputer(lags=[1, -1])
df_imputed = ts_imputer(df)

print("Original DataFrame:")
print(df)
print("\nImputed DataFrame:")
print(df_imputed)
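To make the lag/lead mechanism concrete, here is a small illustration of how t-1 and t+1 features can be built with pandas shift. This is a generic sketch of the idea, not DataFiller's internals:

```python
import numpy as np
import pandas as pd

# lags=[1, -1] corresponds to one lag (t-1) and one lead (t+1)
s = pd.Series([10.0, 9.0, np.nan, 7.0, 6.0],
              index=pd.date_range('2023-01-01', periods=5, freq='D'))
features = pd.DataFrame({
    'lag_1': s.shift(1),    # value at t-1
    'lead_1': s.shift(-1),  # value at t+1
})
print(features)
```

A model can then regress the observed values of the series on these shifted columns and predict the gaps.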

Imputing a Mixed DataFrame with Categorical Features

Categorical columns are one-hot encoded and used as predictors for other columns, while missing categorical values are imputed with a classifier and mapped back to labels.

from datafiller.datasets import load_titanic
from datafiller import MultivariateImputer, ExtremeLearningMachine

df = load_titanic()
imputer = MultivariateImputer(regressor=ExtremeLearningMachine())
df_imputed = imputer(df)
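As a rough illustration of the encoding round-trip described above (a generic sketch with hypothetical values, not DataFiller's actual encoder):

```python
import pandas as pd

s = pd.Series(['red', 'blue', 'red', 'green'])

# One-hot encode the categorical column so it can serve as predictors
onehot = pd.get_dummies(s, prefix='color')

# A classifier would predict a category index; map it back to a label
categories = pd.Categorical(s).categories  # ['blue', 'green', 'red']
predicted_index = 2                        # hypothetical classifier output
label = categories[predicted_index]
print(label)
```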

How It Works

DataFiller uses a model-based imputation strategy: for each column containing missing values, it trains a model that uses the other columns as features. Categorical, boolean, and string columns are one-hot encoded for feature construction, so they can drive the imputation of numerical targets; their own missing values are imputed with a classifier and mapped back to the original labels. The rows used to train each model are selected with the optimask algorithm, which finds the largest complete (NaN-free) rectangular subset of the data. This keeps the training data as clean as possible, leading to more accurate imputations.
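The per-column strategy can be sketched as follows. This is a deliberately simplified stand-in (ordinary least squares on mean-filled features, with no optimask row selection), not DataFiller's implementation:

```python
import numpy as np
from numpy.linalg import lstsq

def impute_column_by_column(X):
    """Fill NaNs column by column with least-squares regression
    on the other columns (mean-filled on the feature side)."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)
    # Feature matrix: NaNs replaced by column means
    filled = np.where(np.isnan(X), col_means, X)
    for j in range(X.shape[1]):
        missing = np.isnan(X[:, j])
        if not missing.any():
            continue
        others = np.delete(filled, j, axis=1)
        # Fit on rows where column j is observed, with an intercept term
        A = np.column_stack([others[~missing], np.ones((~missing).sum())])
        coef, *_ = lstsq(A, X[~missing, j], rcond=None)
        B = np.column_stack([others[missing], np.ones(missing.sum())])
        X[missing, j] = B @ coef
    return X

X = np.array([[1.0, 2.0], [2.0, np.nan], [3.0, 6.0], [4.0, 8.0]])
print(impute_column_by_column(X))
```

DataFiller swaps the plain least-squares fit for its configurable regressors and classifiers, and replaces the naive mean-filling with optimask's selection of a clean training rectangle.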

For more details, see the documentation.
