Generating Realistic Tabular Data using Large Language Models

Generation of Realistic Tabular data with pretrained Transformer-based language models

Our GReaT framework leverages the power of advanced pretrained Transformer language models to produce high-quality synthetic tabular data. Generate new data samples effortlessly with our user-friendly API in just a few lines of code. Please see our publication for more details.

The GReaT framework has also been adopted in practice on Google's Kaggle platform, where it has been used to generate synthetic datasets across multiple competitions.

GReaT Installation

The GReaT framework can be installed with pip and requires Python >= 3.9:

pip install be-great

GReaT Quickstart

In the example below, we show how the GReaT approach is used to generate synthetic tabular data for the California Housing dataset.

from be_great import GReaT
from sklearn.datasets import fetch_california_housing

# Load the California Housing dataset as a pd.DataFrame
data = fetch_california_housing(as_frame=True).frame

# Fine-tune a pretrained language model on the tabular data
model = GReaT(llm='tabularisai/Qwen3-0.3B-distil', batch_size=32, epochs=5,
              fp16=True, dataloader_num_workers=4)
model.fit(data)

# Generate 100 synthetic rows
synthetic_data = model.sample(n_samples=100)
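
Since sample returns a regular pd.DataFrame with the same columns as the training data, the output can be inspected directly:

# Inspect the generated samples
print(synthetic_data.shape)   # (100, 9): 8 features plus the MedHouseVal target
print(synthetic_data.head())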

Imputing a sample

GReaT also features an interface to impute, i.e., fill in, missing values in arbitrary combinations. This requires a trained model, for instance one obtained using the code snippet above, and a pd.DataFrame where missing values are set to NaN. A minimal example is provided below:

# test_data: pd.DataFrame with samples from the data distribution
# model: GReaT model trained on the distribution that should be imputed

import numpy as np

# Randomly drop roughly half of the values in test_data
for clm in test_data.columns:
    test_data[clm] = test_data[clm].apply(
        lambda x: x if np.random.rand() > 0.5 else np.nan
    )

imputed_data = model.impute(test_data, max_length=200)
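
To confirm the result (assuming impute fills every missing cell), check that no NaNs remain:

# Assumption: impute fills in all missing cells
assert not imputed_data.isna().any().any()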

Saving and Loading

GReaT provides methods for saving a model checkpoint (in addition to the checkpoints stored by the Hugging Face transformers Trainer) and for loading that checkpoint again.

model = GReaT(llm='tabularisai/Qwen3-0.3B-distil', batch_size=32, epochs=5, fp16=True)
model.fit(data)
model.save("my_directory")  # saves a "model.pt" and a "config.json" file
model = GReaT.load_from_dir("my_directory")  # loads the model again

# supports remote file systems via fsspec
model.save("s3://my_bucket")
model = GReaT.load_from_dir("s3://my_bucket")

Optimizing GReaT for Challenging Datasets

When working with small datasets or datasets with many features, GReaT offers specialized parameters to improve generation quality:

# For small datasets or datasets with many features
model = GReaT(
    llm='tabularisai/Qwen3-0.3B-distil',  
    float_precision=3,  # Limit floating-point precision to 3 decimal places
    batch_size=8,       # Use smaller batch size for small datasets
    epochs=100,         # Train for more epochs with small data
    fp16=True           # Enable half-precision training for faster computation and lower memory usage
)
model.fit(data)

# Use guided sampling for higher quality generation with complex feature sets
synthetic_data = model.sample(
    n_samples=100,
    guided_sampling=True,     # Enable feature-by-feature guided generation
    random_feature_order=True,  # Randomize feature order to avoid bias
    temperature=0.7           # Control diversity of generated values, use lower temperature for challenging data
)

The guided_sampling=True parameter enables a feature-by-feature generation approach, which can produce more reliable results for datasets with many features or complex relationships. While potentially slower than the default sampling method, it can help overcome generation challenges with difficult datasets.

The float_precision parameter limits decimal places in numerical values, which can help the model focus on significant patterns rather than memorizing exact values. This is particularly helpful for small datasets where overfitting is a concern.
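
For intuition, the effect on numeric columns is comparable to rounding them before they are encoded as text (a rough illustration of the idea, not the library's internal code):

# Rough pandas analogue of float_precision=3 (illustration only)
rounded = data.round(3)
print(data["MedInc"].iloc[0])     # 8.3252
print(rounded["MedInc"].iloc[0])  # 8.325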

Conditional Synthetic Data Generation

GReaT supports constrained sampling with logical operators: generate synthetic tabular data that satisfies conditions such as age >= 30 or city != 'New York'. Constraints are enforced during token generation, so every output row satisfies them and no samples have to be discarded.

from be_great import GReaT
from ucimlrepo import fetch_ucirepo

# Load the UCI Adult (Census Income) dataset
adult = fetch_ucirepo(id=2)
df = adult.data.features[["age", "workclass", "education", "sex", "hours-per-week"]].copy()
df["income"] = adult.data.targets["income"]
df = df[~df.isin(["?"]).any(axis=1)].dropna()

model = GReaT(llm='distilgpt2', epochs=50, batch_size=32, float_precision=0)
model.fit(df)

# Generate synthetic data with constraints
synthetic_data = model.sample(
    n_samples=100,
    conditions={
        "age": ">= 40",
        "hours-per-week": "<= 40",
        "sex": "!= 'Male'",
    },
)

Supported operators for numeric columns: >=, <=, >, <, ==, !=. For categorical columns: ==, != (quote values with single quotes, e.g. "== 'Female'"). Multiple conditions can be combined in a single call. Guided sampling is enabled automatically when conditions are provided.
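
A quick sanity check on the example above (assuming numeric columns come back with numeric dtypes) confirms that every generated row honors the constraints:

# Every generated row should satisfy the requested conditions
# (assumes sample returns numeric dtypes for numeric columns)
assert (synthetic_data["age"] >= 40).all()
assert (synthetic_data["hours-per-week"] <= 40).all()
assert (synthetic_data["sex"] != "Male").all()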

Efficient Fine-Tuning with LoRA

GReaT supports LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning. This drastically reduces memory usage and training time, making it possible to fine-tune larger models on consumer hardware.

LoRA support requires the peft package:

pip install peft

# LoRA with auto-detected target modules (works across model architectures)
model = GReaT(
    llm='meta-llama/Llama-3.1-8B-Instruct',
    batch_size=32,
    epochs=5,
    efficient_finetuning="lora",
    fp16=True,
)
model.fit(data)
synthetic_data = model.sample(n_samples=100)

You can also customize the LoRA hyperparameters:

model = GReaT(
    llm='tabularisai/Qwen3-0.3B-distil',
    batch_size=32,
    epochs=5,
    efficient_finetuning="lora",
    lora_config={
        "r": 8,
        "lora_alpha": 16,
        "lora_dropout": 0.1,
        "target_modules": ["q_proj", "v_proj"],  # optional, auto-detected if omitted
    },
    fp16=True,
)
model.fit(data)

GReaT Metrics

GReaT ships with a built-in evaluation suite to measure the quality, utility, and privacy of your synthetic data. All metrics follow the same interface:

from be_great.metrics import ColumnShapes, DiscriminatorMetric, MLEfficiency, DistanceToClosestRecord
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# real_data: original pd.DataFrame
# synthetic_data: generated pd.DataFrame from model.sample()

ColumnShapes().compute(real_data, synthetic_data)
DiscriminatorMetric().compute(real_data, synthetic_data)
MLEfficiency(model=RandomForestClassifier, metric=accuracy_score,
             model_params={"n_estimators": 100}).compute(
    real_data, synthetic_data, label_col="target"
)
DistanceToClosestRecord().compute(real_data, synthetic_data)

Statistical Metrics

  • ColumnShapes: Per-column distribution similarity (KS test for numerical, TVD for categorical)
  • ColumnPairTrends: Preservation of pairwise correlations (Pearson and Cramér's V)
  • BasicStatistics: Comparison of mean, std, and median per column
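
The remaining statistical metrics follow the shared interface; a minimal sketch, assuming they take no required constructor arguments:

from be_great.metrics import ColumnPairTrends, BasicStatistics

# Assumption: same compute(real, synthetic) interface as above
ColumnPairTrends().compute(real_data, synthetic_data)
BasicStatistics().compute(real_data, synthetic_data)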

Fidelity & Utility Metrics

  • DiscriminatorMetric: Trains a classifier to distinguish real from synthetic; a score near 0.5 is best
  • MLEfficiency: Trains on synthetic data, tests on real data; measures downstream task utility
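
The discriminator score reads like a classifier accuracy (assuming compute returns it as a float, per the table above): the closer to 0.5, the harder real and synthetic are to tell apart:

score = DiscriminatorMetric().compute(real_data, synthetic_data)
# near 0.5: real and synthetic are indistinguishable (high fidelity)
# near 1.0: the classifier separates them easily (low fidelity)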

Privacy Metrics

  • DistanceToClosestRecord: Distance from each synthetic record to its nearest real neighbor
  • kAnonymization: Minimum equivalence class size (higher = better privacy)
  • lDiversity: Diversity of sensitive attribute values within groups
  • IdentifiabilityScore: Risk of linking a synthetic record back to a specific real individual
  • DeltaPresence: Fraction of real records that have a near-exact synthetic match
  • MembershipInference: Simulated attack testing whether an adversary can detect training set members
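
The privacy metrics use the same call pattern; a minimal sketch, assuming the shared compute(real, synthetic) signature holds for them too (some may accept additional arguments, e.g. sensitive columns, in practice):

from be_great.metrics import kAnonymization, MembershipInference

# Assumption: shared compute(real, synthetic) interface; extra arguments
# (e.g. quasi-identifier or sensitive columns) may apply in practice
kAnonymization().compute(real_data, synthetic_data)
MembershipInference().compute(real_data, synthetic_data)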

GReaT Citation

If you use GReaT, please link to or cite our work:

@inproceedings{borisov2023language,
  title={Language Models are Realistic Tabular Data Generators},
  author={Vadim Borisov and Kathrin Sessler and Tobias Leemann and Martin Pawelczyk and Gjergji Kasneci},
  booktitle={The Eleventh International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=cEygmQNOeI}
}

Custom Synthetic Data

Need synthetic data for your business? We can help! Contact us at info@tabularis.ai for custom data generation services.
