
A Python package for easy data imputation


DataFiller

DataFiller is a Python library for imputing missing values in datasets. It provides a flexible and powerful way to handle missing data in both numerical arrays and time series data.

Why DataFiller

DataFiller is a pragmatic imputation tool: it is unlikely to match the absolute performance of large deep learning approaches on complex masking patterns, but it is much simpler to fit, easier to adapt, and more flexible to plug into existing workflows. It is also significantly faster than scikit-learn's IterativeImputer, making it a good choice when you need solid results with fast iteration cycles.

Key Features

Key features include:

- Model-based imputation with lightweight models
- Mixed data support with one-hot encoding and label recovery
- A dedicated TimeSeriesImputer with lag/lead features
- Performance-critical sections accelerated by Numba
- Smart feature selection for training subsets
- scikit-learn compatibility

Installation

Install DataFiller using pip or conda:

pip install datafiller
conda install -c conda-forge datafiller

Basic Usage

Imputing a NumPy Array

The MultivariateImputer can be used to fill missing values (NaN) in a 2D NumPy array.

import numpy as np
from datafiller import MultivariateImputer

# Create a matrix with missing values
X = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [5.0, np.nan, 7.0, 8.0],
    [9.0, 10.0, 11.0, np.nan],
    [13.0, 14.0, 15.0, 16.0],
])

# Initialize the imputer and fill the missing values
imputer = MultivariateImputer()
X_imputed = imputer(X)

print("Original Matrix:")
print(X)
print("\nImputed Matrix:")
print(X_imputed)

Imputing a Time Series DataFrame

The TimeSeriesImputer is designed to work with pandas DataFrames that have a DatetimeIndex. It automatically creates autoregressive features (lags and leads) to improve imputation accuracy.

import pandas as pd
import numpy as np
from datafiller import TimeSeriesImputer

# Create a time series DataFrame with missing values
rng = pd.date_range('2023-01-01', periods=10, freq='D')
data = {
    'feature1': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'feature2': [10, 9, np.nan, 7, 6, 5, np.nan, 3, 2, 1],
}
df = pd.DataFrame(data, index=rng)

# Initialize the imputer with lags and leads
# Use t-1 and t+1 to impute missing values
ts_imputer = TimeSeriesImputer(lags=[1, -1])
df_imputed = ts_imputer(df)

print("Original DataFrame:")
print(df)
print("\nImputed DataFrame:")
print(df_imputed)
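The lag/lead mechanism can be sketched with plain pandas. This is an illustrative reconstruction, not DataFiller's internals: the `lag1`/`lead1` column names are hypothetical, and DataFiller builds the equivalent predictor features for you.

```python
import numpy as np
import pandas as pd

rng = pd.date_range('2023-01-01', periods=10, freq='D')
s = pd.Series([10, 9, np.nan, 7, 6, 5, np.nan, 3, 2, 1],
              index=rng, name='feature2')

# lags=[1, -1] roughly corresponds to building these two columns:
features = pd.DataFrame({
    'lag1': s.shift(1),    # value at t-1
    'lead1': s.shift(-1),  # value at t+1 (a "lead", expressed as lag -1)
})
print(features.head(4))
```

Each row then carries its temporal neighbors as predictors, so a model can exploit the series' local structure when filling a gap.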

Imputing a Mixed DataFrame with Categorical Features

Categorical columns are one-hot encoded and used as predictors for other columns, while missing categorical values are imputed with a classifier and mapped back to labels.

from datafiller.datasets import load_titanic
from datafiller import MultivariateImputer, ExtremeLearningMachine

df = load_titanic()
imputer = MultivariateImputer(regressor=ExtremeLearningMachine())
df_imputed = imputer(df)

How It Works

DataFiller uses a model-based imputation strategy. For each column containing missing values, it trains a model using the other columns as features. Categorical, boolean, and string columns are one-hot encoded for feature construction, so they can drive the imputation of numerical targets, and are imputed with a classifier before being mapped back to the original labels. The rows used for training are carefully selected to be the largest, most complete rectangular subset of the data, which is found using the optimask algorithm. This ensures that the training data is of the highest possible quality, leading to more accurate imputations.
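As a rough illustration of this column-by-column strategy, here is a minimal sketch using scikit-learn's LinearRegression as a stand-in model. It is not DataFiller's implementation: the library uses its own lightweight models, optimask-based row selection, one-hot encoding, and Numba acceleration, none of which appear here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def impute_column(X, j):
    """Fill NaNs in column j of a 2D array using the other columns as features."""
    other = np.delete(np.arange(X.shape[1]), j)
    missing = np.isnan(X[:, j])
    # Train only on rows whose predictor columns are fully observed.
    complete = ~np.isnan(X[:, other]).any(axis=1)
    train = complete & ~missing
    model = LinearRegression().fit(X[train][:, other], X[train, j])
    X = X.copy()
    fill = missing & complete
    X[fill, j] = model.predict(X[fill][:, other])
    return X

X = np.array([[1.0, 2.0, 3.0],
              [4.0, np.nan, 6.0],
              [7.0, 8.0, 9.0],
              [10.0, 11.0, 12.0]])
X_filled = impute_column(X, 1)
```

Repeating this for every column with missing values gives the basic loop; the optimask step refines which rows and columns enter the training set.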

For more details, see the documentation.

