HDLSS-focused tabular learning toolkit with distribution-aware preprocessing, portfolio feature selection, and game-theoretic method aggregation.


Tabnetics

A Python toolkit for high-dimensional, low-sample-size (HDLSS) tabular classification. Tabnetics grew out of the review paper "Machine learning on small size samples: A synthetic knowledge synthesis", which provided the library's initial theoretical background for small-sample learning. The library combines distribution-aware preprocessing, portfolio-based feature selection, and game-theoretic method aggregation into a single pipeline designed for settings where p >> n.

Its MNPO layer formulates method-portfolio selection as a multiplayer game solved via KL-regularized mirror descent (Freund & Schapire, 1999). The multiplayer Nash framing is inspired by Wu et al.'s "Multiplayer Nash Preference Optimization" but adapted substantially: Tabnetics' players are fixed feature-selection methods (not evolving LLM policies), its payoffs come from heterogeneous small-sample oracles (not a shared preference model), and the data regime is HDLSS rather than large-scale.

pip install tabnetics

Licensed under Apache 2.0.

When to use Tabnetics

Tabnetics is built for tabular classification problems where the number of features greatly exceeds the number of samples:

  • Transcriptomics — microarray and RNA-seq gene expression
  • Proteomics and metabolomics — mass-spec feature matrices
  • Other HDLSS settings — any structured tabular problem with p >> n

In these regimes the dominant failure modes are not a matter of model choice: they are unstable preprocessing, brittle feature selection, information leakage, and inflated validation estimates. Tabnetics addresses all four.

What Tabnetics adds is not just another selector: it turns the many unstable HDLSS choices into a multiplayer portfolio game. Feature-selection methods and classifier candidates are treated as competing players, oracle scores become the payoff structure, and the resulting MNPO equilibrium is used to select a robust portfolio under small-sample constraints.
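
The equilibrium computation itself is standard. As a rough illustration (a minimal numpy sketch under simplified assumptions, not the library's implementation): given a pairwise preference matrix P, where P[i, j] measures how strongly method i is preferred to method j, KL-regularized mirror descent over the probability simplex reduces to exponentiated-gradient updates, and the averaged iterates approximate a Nash equilibrium of the induced zero-sum game (Freund & Schapire, 1999). All names below are illustrative.

import numpy as np

def mirror_descent_portfolio(P, eta=0.5, n_iters=500):
    # P[i, j]: preference of method i over method j, in [0, 1].
    # KL-regularized mirror descent on the simplex is the classic
    # multiplicative-weights update; averaging the iterates gives
    # an approximate Nash equilibrium of the zero-sum game with
    # payoff matrix P - P.T (Freund & Schapire, 1999).
    n = P.shape[0]
    w = np.full(n, 1.0 / n)            # uniform prior over methods
    avg = np.zeros(n)
    A = P - P.T                        # antisymmetric payoff matrix
    for _ in range(n_iters):
        w = w * np.exp(eta * (A @ w))  # exponentiated-gradient step
        w /= w.sum()                   # renormalize onto the simplex
        avg += w
    return avg / n_iters               # soft portfolio over methods

The returned weights can be read as a soft portfolio over the competing methods; thresholding or top-k truncation then yields a hard portfolio.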

Usage guide → · Methods & references → · Benchmark results →

Citation

If you use Tabnetics in research, cite the repository for the specific version you used. The library is still under active development, and a companion paper will be published after the current testing and validation cycle is complete.

Repository URL: https://github.com/klokedm/tabnetics-public

@software{kokol_tabnetics_2026,
  author = {Kokol, Marko},
  title = {Tabnetics},
  year = {2026},
  url = {https://github.com/klokedm/tabnetics-public}
}

Benchmark results

Tabnetics has been evaluated on 35 HDLSS benchmark datasets (50–2,600 samples, 500–100,000 features, 2–14 classes) drawn from OpenML, GEO, Scikit-feature, and UCSC Xena/TCGA. Across 2,800+ runs with 9 random seeds per dataset, the pipeline achieves a mean balanced accuracy of 0.80, with 12 of 35 datasets above 0.90 and 3 reaching perfect classification. The MNPO portfolio consistently outperforms single-method baselines, and distribution-aware preprocessing contributes a small but consistent positive effect. Detailed per-dataset results, statistical comparisons, and dataset provenance are available in RESULTS.md. A peer-reviewed article with full methodology and ablation studies is in preparation.

Key ideas

  1. Distribution-aware preprocessing. Each feature is fitted to a parametric family (from 20+ candidates) using goodness-of-fit testing, bootstrap calibration, and L-moment prescreening. CDF-based transforms replace ad-hoc normalization (a sketch follows this list).

  2. Portfolio feature selection. Thirty feature-selection methods — stability selectors, copula knockoffs, tree-based importance, mutual-information filters, IPSS, HSIC-Lasso, and more — are run together. A game-theoretic oracle (MNPO — Nash Multi-Portfolio Optimization) aggregates their outputs into a single robust HDLSS feature portfolio. MNPO builds pairwise preference matrices from multiple oracles (performance, stability, complexity, etc.) and solves for a Nash equilibrium via KL-regularized mirror descent. The multiplayer game framing draws conceptual inspiration from Wu et al.'s Multiplayer Nash Preference Optimization, though the HDLSS adaptation is a distinct contribution with different players, oracles, and data regime (see BACKGROUND.md for details).

  3. Regime-aware classification. An MNPO-based classifier oracle picks from regime-appropriate pools (LR, SVM, LDA, PLS-DA, NSC for extreme HDLSS; plus RF, XGBoost, CatBoost, TabPFN for moderate regimes).

  4. Strict validation. All learned preprocessing and selection is train-only. HuggingFace-hosted datasets are the authoritative source. Synthetic fallback is not allowed for evidence-bearing runs.
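
To make idea 1 concrete, here is a minimal sketch of per-feature distribution fitting followed by a CDF-based transform, written with plain scipy.stats. It is illustrative only: it uses a small candidate list and a single Kolmogorov-Smirnov test, and omits the bootstrap calibration and L-moment prescreening the library applies on top.

import numpy as np
from scipy import stats

CANDIDATES = [stats.norm, stats.lognorm, stats.gamma, stats.expon]

def cdf_transform(x):
    # Fit each candidate family by maximum likelihood and keep the
    # one with the smallest Kolmogorov-Smirnov statistic.
    best_fam, best_params, best_ks = None, None, np.inf
    for fam in CANDIDATES:
        params = fam.fit(x)
        ks = stats.kstest(x, fam.name, args=params).statistic
        if ks < best_ks:
            best_fam, best_params, best_ks = fam, params, ks
    # Probability integral transform through the fitted CDF, then
    # map to normal scores instead of ad-hoc normalization.
    u = np.clip(best_fam.cdf(x, *best_params), 1e-6, 1 - 1e-6)
    return stats.norm.ppf(u)

In a leakage-safe setting (idea 4), the family and its parameters would be fitted on training folds only and then applied unchanged to held-out data.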

Quick start

from tabnetics.pipeline import DistributionFeatureSelectionPipeline, DFFSConfig

# X is an (n_samples, n_features) array; y holds the class labels
config = DFFSConfig(random_seed=42)
pipeline = DistributionFeatureSelectionPipeline(config)

result = pipeline.run(X, y, dataset_name="my_dataset", seed=42)

print(f"Accuracy: {result.accuracy:.3f}")
print(f"Selected features: {result.selected_features}")

Package structure

  • tabnetics.core — MNPO game-theoretic primitives, sklearn compatibility layer, runtime configuration
  • tabnetics.distribution — univariate distribution fitting (20+ families), bootstrap GOF, CDF-based transforms
  • tabnetics.feature_selection — 30 selection methods, MNPO portfolio aggregation, copula knockoffs, stability selectors
  • tabnetics.classification — regime-aware classifier pools, MNPO classifier oracle, PLS-DA, conformal helpers
  • tabnetics.pipeline — end-to-end DF+FS+classification pipeline with leakage prevention
  • tabnetics.datasets — dataset registry, HuggingFace/OpenML loaders, meta-feature extraction
  • tabnetics.domains — domain adapters (bioinformatics prefilters, face-domain projection)
  • tabnetics.multiomics — multi-block PLS-DA (DIABLO-style) and MINT batch-correction integration
  • tabnetics.benchmarks — benchmark runner, method-set profiles, SOTA comparison, gaming detection
  • tabnetics.validation — validation campaign planner, shard execution, promotion gates

Feature selection methods

The FeatureSelector supports 30 methods out of the box, including:

  • Stability selectors — Lasso stability, subspace stability, decorrelated stability, cluster stability, TIGRESS
  • Wrapper methods — RFECV (SVM, RF, LR), Boruta
  • Filter methods — ANOVA F-test, mutual information, mRMR, JMI, CMIM, FCBF, Wilcoxon AUC
  • Tree-based — GBDT importance, TreeSHAP, random forest
  • Knockoff methods — copula knockoff (D-vine, FDR-controlled via e-values), derandomized knockoffs
  • Embedded — OA-Elastic Net, Joint AUC+L1, HSIC-Lasso
  • Other — IPSS, k-TSP, OVA/ECOC wrappers, Rashomon importance

Methods are aggregated via MNPO with configurable oracle presets (minimal, perf_only, perf_complexity, full, etc.).
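
As a hypothetical illustration of how a preset might blend oracles (the preset and oracle names come from this README; the weights and helper functions below are invented for the example, not the library's API):

import numpy as np

def preference_matrix(scores):
    # Bradley-Terry-style pairwise preferences from raw oracle
    # scores: P[i, j] = sigmoid(score_i - score_j).
    s = np.asarray(scores, dtype=float)
    return 1.0 / (1.0 + np.exp(s[None, :] - s[:, None]))

# Hypothetical preset weights over the oracles named above.
PRESETS = {
    "perf_only":       {"performance": 1.0},
    "perf_complexity": {"performance": 0.7, "complexity": 0.3},
    "full":            {"performance": 0.5, "stability": 0.3,
                        "complexity": 0.2},
}

def aggregate_payoff(oracle_scores, preset):
    # Blend the per-oracle preference matrices with the preset's
    # weights; the result would feed an equilibrium solver like the
    # mirror-descent sketch shown earlier.
    return sum(w * preference_matrix(oracle_scores[name])
               for name, w in PRESETS[preset].items())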

See BACKGROUND.md for the full list of implemented papers, USING.md for detailed usage, and RESULTS.md for benchmark results.

Installation

Core dependencies (numpy, pandas, scipy, scikit-learn):

pip install tabnetics

With optional feature-selection extras (boruta, copula support, conformal prediction):

pip install tabnetics[feature-selection-optional]

With full benchmark support (FLAML, LightGBM, XGBoost, TabPFN, etc.):

pip install tabnetics[benchmarks]

Requirements

  • Python >= 3.11
  • numpy, pandas, scipy, scikit-learn (core)
  • See pyproject.toml for optional dependency groups

Development

git clone https://github.com/klokedm/tabnetics-public.git
cd tabnetics-public
pip install -e ".[dev]"
pytest

License

Apache 2.0 — see LICENSE.
