
TabICL: A Tabular Foundation Model for In-Context Learning on Large Data


TabICL is a tabular foundation model, similar to TabPFN. It currently supports classification tasks only.

Architecture

TabICL processes tabular data through three sequential stages:

  1. Column-wise Embedding: Creates distribution-aware embeddings for each feature
  2. Row-wise Interaction: Captures interactions between features within each row
  3. Dataset-wise In-Context Learning: Learns patterns from labeled examples to make predictions
[Figure: The architecture of TabICL]
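The three stages can be pictured with a schematic, deliberately simplified sketch. The module names, dimensions, and pooling below are illustrative placeholders, not TabICL's actual implementation:

import torch
import torch.nn as nn

class ColumnEmbedder(nn.Module):
    """Stage 1 (schematic): embed each feature value, column by column."""
    def __init__(self, d=16):
        super().__init__()
        self.proj = nn.Linear(1, d)

    def forward(self, X):                    # X: (n_rows, n_features)
        return self.proj(X.unsqueeze(-1))    # -> (n_rows, n_features, d)

class RowEncoder(nn.Module):
    """Stage 2 (schematic): attention across features within each row."""
    def __init__(self, d=16):
        super().__init__()
        self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

    def forward(self, E):                    # E: (n_rows, n_features, d)
        H, _ = self.attn(E, E, E)            # features attend to each other
        return H.mean(dim=1)                 # -> (n_rows, d) row embeddings

X = torch.randn(8, 5)                        # toy table: 8 rows, 5 features
rows = RowEncoder()(ColumnEmbedder()(X))     # stage 3 (ICL) would then run a
                                             # transformer over these row
                                             # embeddings plus the train labels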

Installation

pip install tabicl

Usage

Basic Usage

from tabicl import TabICLClassifier

clf = TabICLClassifier()
clf.fit(X_train, y_train)  # this is cheap
clf.predict(X_test)  # in-context learning happens here

On first use, the code above automatically downloads the pre-trained checkpoint (~100 MB) from the Hugging Face Hub and selects a GPU if one is available.
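For a complete, runnable example, the snippet below fills in the data loading and evaluation around the same API, assuming scikit-learn is installed for the example dataset and metrics:

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from tabicl import TabICLClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabICLClassifier()
clf.fit(X_train, y_train)           # cheap: stores the labeled context
y_pred = clf.predict(X_test)        # the full forward pass runs here
print(f"accuracy: {accuracy_score(y_test, y_pred):.3f}")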

Advanced Configuration

TabICL offers a set of parameters to customize its behavior. The following example shows all available parameters with their default values and brief descriptions:

from tabicl import TabICLClassifier

clf = TabICLClassifier(
  n_estimators=32,                  # number of ensemble members
  norm_methods=["none", "power"],   # normalization methods to try
  feat_shuffle_method="latin",      # feature permutation strategy
  class_shift=True,                 # whether to apply cyclic shifts to class labels
  outlier_threshold=4.0,            # z-score threshold for outlier detection and clipping
  softmax_temperature=0.9,          # controls prediction confidence
  average_logits=True,              # whether ensemble averaging is done on logits or probabilities
  use_hierarchical=True,            # enable hierarchical classification for datasets with many classes
  batch_size=8,                     # number of ensemble members processed together (lower to reduce memory usage)
  use_amp=True,                     # use automatic mixed precision for faster inference
  model_path=None,                  # where the model checkpoint is stored
  allow_auto_download=True,         # whether automatic download to the specified path is allowed
  device='cpu',                     # specify device for inference
  random_state=42,                  # random seed for reproducibility
  verbose=False                     # print detailed information during inference
)

Memory-Efficient Inference

TabICL includes memory management to handle large datasets (a configuration sketch follows this list):

  • Memory Profiling: Built-in memory estimators for different components of the model
  • Batch Size Estimation: Dynamically determines optimal batch sizes based on available GPU memory
  • CPU Offloading: Automatically offloads intermediate results to CPU when beneficial
  • OOM Recovery: Recovers gracefully from out-of-memory errors by reducing batch size
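Most of this happens automatically, but the documented constructor parameters also let you trade speed for memory by hand. A minimal sketch, where the values are illustrative rather than recommendations:

from tabicl import TabICLClassifier

clf = TabICLClassifier(
    batch_size=4,      # fewer ensemble members in flight -> lower peak memory
    use_amp=True,      # half-precision inference on supported GPUs
    device="cuda",     # or "cpu" if no GPU is available
)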

Preprocessing

If the input X to TabICL is a pandas DataFrame, TabICL will automatically:

  • Detect and ordinal encode categorical columns (including string, object, category, and boolean types)
  • Create a separate category for missing values in categorical features
  • Perform mean imputation for missing numerical values (encoded as NaN)

If the input X is a numpy array, TabICL assumes that ordinal encoding and missing value imputation have already been performed.
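A minimal sketch of that manual preprocessing with scikit-learn; the toy data and column indices are illustrative, and any equivalent encoding and imputation would work:

import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OrdinalEncoder

# Toy data: column 0 is categorical, columns 1-2 are numeric.
X_train = np.array([["a", 1.0, np.nan],
                    ["b", 2.0, 3.0],
                    [np.nan, np.nan, 4.0]], dtype=object)

pre = ColumnTransformer([
    ("cat", OrdinalEncoder(handle_unknown="use_encoded_value",
                           unknown_value=-1, encoded_missing_value=-2), [0]),
    ("num", SimpleImputer(strategy="mean"), [1, 2]),
])

X_train_enc = pre.fit_transform(X_train)  # all-numeric array, ready for TabICL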

For both input types, TabICL applies additional preprocessing:

  • Outlier detection and clipping (z-score based)
  • Feature scaling and normalization
  • Feature shuffling for ensemble diversity

Key Features and Considerations

  • Number of samples:
    • TabICL is pretrained on datasets with up to 60K samples.
    • TabICL can handle datasets beyond 100K samples thanks to memory-efficient inference.
    • TabPFN (v2) is on average better than TabICL on small datasets with <10K samples, while TabICL is better on larger datasets.
    • Classical methods may catch up with TabICL at around 40K samples, but they are much slower because they require extensive hyperparameter tuning.
[Figure: Ranking vs. number of samples]
  • Number of features:
    • TabICL is pretrained on datasets with up to 100 features.
    • In theory, TabICL can accommodate any number of features.
  • Number of classes:
    • TabICL is pretrained on datasets with up to 10 classes, so it natively supports a maximum of 10 classes.
    • However, TabICL can handle any number of classes thanks to its built-in hierarchical classification (see the sketch after this list).
  • Inference speed:
    • Like TabPFN, fit() does minimal work while predict() runs the full model.
    • At the same n_estimators, TabICL is usually 1x-5x faster than TabPFN.
    • TabICL benefits more from larger n_estimators, hence the default of 32.
    • Automatic mixed precision (AMP) provides further speed improvements on compatible GPUs.
  • No tuning required: TabICL produces good predictions without hyperparameter tuning, unlike classical methods that require extensive tuning for optimal performance.
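As an illustration of the class-count point above, the sketch below builds a synthetic problem with more than 10 classes; use_hierarchical=True (the default) is what allows TabICL to go beyond its native limit. The data is random, so the predictions themselves are meaningless:

import numpy as np
from tabicl import TabICLClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = rng.integers(0, 15, size=2000)     # 15 classes, above the native 10

clf = TabICLClassifier(use_hierarchical=True)   # the default
clf.fit(X, y)
y_pred = clf.predict(X[:5])            # hierarchical routing happens internally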

Performance

TabICL has achieved excellent results on the TALENT benchmark.

[Figure: Performance on the TALENT benchmark]

Code Availability

This repository currently contains only the inference code for TabICL; the pretraining code may be released in the future.

Citation

If you use TabICL for research purposes, please cite our paper:

@article{qu2025tabicl,
  title={TabICL: A Tabular Foundation Model for In-Context Learning on Large Data},
  author={Qu, Jingang and Holzm{\"u}ller, David and Varoquaux, Ga{\"e}l and Morvan, Marine Le},
  journal={arXiv preprint arXiv:2502.05564},
  year={2025}
}




Download files

Download the file for your platform.

Source Distribution

tabicl-0.0.2.tar.gz (979.0 kB)

Built Distribution

tabicl-0.0.2-py3-none-any.whl (49.0 kB)

File details

Details for the file tabicl-0.0.2.tar.gz.

File metadata

  • Download URL: tabicl-0.0.2.tar.gz
  • Upload date:
  • Size: 979.0 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.21

File hashes

Hashes for tabicl-0.0.2.tar.gz

Algorithm    Hash digest
SHA256       90265e0afa9cb4884fa2951239e627ae6d04c26d209e57152e02d058e3f33d9b
MD5          65750e37c8d15694059489f5e768cd62
BLAKE2b-256  4a45b27ef91c7eeb47d5e49d06b18a16580192ddb4693178b553b68491acdafd

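To check a downloaded file against the published digests, the standard library is enough. For example, for the SHA256 digest above:

import hashlib

expected = "90265e0afa9cb4884fa2951239e627ae6d04c26d209e57152e02d058e3f33d9b"

with open("tabicl-0.0.2.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == expected, "hash mismatch: the download may be corrupted"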

File details

Details for the file tabicl-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: tabicl-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 49.0 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/6.1.0 CPython/3.9.21

File hashes

Hashes for tabicl-0.0.2-py3-none-any.whl

Algorithm    Hash digest
SHA256       390f13a788cebb682e4d6bd1504d233e3fdedcd52eb75b99fedc7086bf338759
MD5          115bbeae5b2c2e3dd9c91781112c8399
BLAKE2b-256  831953b9ebf6500b22056af1c3e175b8e84cb2fec27b96a5e4e786fd8eedd7b0

