Generating and Imputing Tabular Data via Diffusion and Flow XGBoost Models
Project description
Tabular data is hard to acquire and is often subject to missing values. This paper proposes a novel approach to generate and impute mixed-type (continuous and categorical) tabular data using score-based diffusion and conditional flow matching. Contrary to previous work that relies on neural networks as function approximators, we instead utilize XGBoost, a popular Gradient-Boosted Tree (GBT) method. In addition to being elegant, we empirically show on various datasets that our method i) generates highly realistic synthetic data when the training dataset is either clean or tainted by missing data and ii) generates diverse plausible data imputations. Our method often outperforms deep-learning generation methods and can be trained in parallel using CPUs without the need for a GPU. To make it easily accessible, we release our code through a Python library and an R package <arXiv:2309.09968>.
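As a rough illustration of how the released Python library can be used, here is a minimal sketch. It assumes the package has been installed from PyPI; the class name `ForestDiffusionModel` and the `generate`/`impute` methods follow the package documentation, but the exact keyword arguments shown (`label_y`, `diffusion_type`, `n_jobs`, `k`) should be treated as assumptions and verified against the README for this version.

```python
import numpy as np
from sklearn.datasets import load_iris
from ForestDiffusion import ForestDiffusionModel

X, y = load_iris(return_X_y=True)

# Generation: fit an XGBoost-based flow-matching model on the training data,
# then sample synthetic rows (the label is appended as the last column).
flow_model = ForestDiffusionModel(X, label_y=y, diffusion_type='flow', n_jobs=-1)
Xy_fake = flow_model.generate(batch_size=X.shape[0])

# Imputation: with the diffusion ('vp') variant, missing entries (NaNs in X)
# can be filled in; k controls how many plausible imputations are drawn.
X_missing = X.copy()
X_missing[::10, 0] = np.nan  # artificially mask some values for this example
vp_model = ForestDiffusionModel(X_missing, label_y=y, diffusion_type='vp', n_jobs=-1)
X_imputed = vp_model.impute(k=5)
```

Because the underlying XGBoost models can be fitted independently, training parallelizes across CPU cores, which is why no GPU is required.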
Project details
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distributions
Built Distribution
File details
Details for the file ForestDiffusion-1.0.6-py3-none-any.whl.
File metadata
- Download URL: ForestDiffusion-1.0.6-py3-none-any.whl
- Upload date:
- Size: 14.0 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.1.0 CPython/3.8.2
File hashes
Algorithm | Hash digest
---|---
SHA256 | 4a7437381bdca4cdc556945a2c82ee18fe85bdc8ed7eee0a67e361ee7e21c9df
MD5 | 30e68ee2d173febfca51b4dcb5442de5
BLAKE2b-256 | 267e448bbe1eb6d74ef2cafe5a26ba7648f7d4e1ba13614cd3ebc4badde71c5a
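If desired, the SHA256 digest above can be checked against a downloaded copy of the wheel with Python's standard `hashlib` module; a minimal sketch (the local file path is an assumption):

```python
import hashlib

# Expected digest taken from the file hashes table above.
expected = "4a7437381bdca4cdc556945a2c82ee18fe85bdc8ed7eee0a67e361ee7e21c9df"

# Assumes the wheel has been downloaded to the current directory.
with open("ForestDiffusion-1.0.6-py3-none-any.whl", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

print("hash matches:", digest == expected)
```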