# Tabnetics

HDLSS-focused tabular learning toolkit with distribution-aware preprocessing, portfolio feature selection, and game-theoretic method aggregation.
A Python toolkit for high-dimensional, low-sample-size (HDLSS) tabular classification. Tabnetics grew out of the review paper *Machine learning on small size samples: A synthetic knowledge synthesis*, which provided the library's initial theoretical background for small-sample learning. The library combines distribution-aware preprocessing, portfolio-based feature selection, and game-theoretic method aggregation into a single pipeline designed for settings where p >> n.
```bash
pip install tabnetics
```
Licensed under Apache 2.0.
## When to use Tabnetics
Tabnetics is built for tabular classification problems where the number of features greatly exceeds the number of samples:
- Transcriptomics — microarray and RNA-seq gene expression
- Proteomics and metabolomics — mass-spec feature matrices
- Other HDLSS settings — any structured tabular problem with p >> n
In these regimes the dominant failure modes are not model selection — they are unstable preprocessing, brittle feature selection, information leakage, and inflated validation estimates. Tabnetics addresses all four.
Usage guide (USING.md) · Methods & references (BACKGROUND.md) · Benchmark results (RESULTS.md)
## Citation
If you use Tabnetics in research, cite the repository for the specific version you used. The library is still under active development, and a companion paper will be published after the current testing and validation cycle is complete.
Repository URL: https://github.com/klokedm/tabnetics-public
```bibtex
@software{kokol_tabnetics_2026,
  author = {Kokol, Marko},
  title  = {Tabnetics},
  year   = {2026},
  url    = {https://github.com/klokedm/tabnetics-public}
}
```
## Benchmark results
Tabnetics has been evaluated on 35 HDLSS benchmark datasets (50–2,600 samples, 500–100,000 features, 2–14 classes) drawn from OpenML, GEO, Scikit-feature, and UCSC Xena/TCGA. Across 2,800+ runs with 9 random seeds per dataset, the pipeline achieves a mean balanced accuracy of 0.80, with 12 of 35 datasets above 0.90 and 3 reaching perfect classification. The MNPO portfolio consistently outperforms single-method baselines, and distribution-aware preprocessing contributes a small but consistent positive effect. Detailed per-dataset results, statistical comparisons, and dataset provenance are available in RESULTS.md. A peer-reviewed article with full methodology and ablation studies is in preparation.
## Key ideas
- **Distribution-aware preprocessing.** Each feature is fitted to a parametric family (from 20+ candidates) using goodness-of-fit testing, bootstrap calibration, and L-moment prescreening. CDF-based transforms replace ad-hoc normalization (a minimal sketch follows this list).
- **Portfolio feature selection.** Thirty feature-selection methods — stability selectors, copula knockoffs, tree-based importance, mutual-information filters, IPSS, HSIC-Lasso, and more — are run together. A game-theoretic oracle, MNPO (Nash Multi-Portfolio Optimization), aggregates their outputs into a single robust feature set.
- **Regime-aware classification.** An MNPO-based classifier oracle picks from regime-appropriate pools (LR, SVM, LDA, PLS-DA, NSC for extreme HDLSS; plus RF, XGBoost, CatBoost, TabPFN for moderate regimes).
- **Strict validation.** All learned preprocessing and selection is fitted on training data only (see the leakage-safe sketch below). HuggingFace-hosted datasets are the authoritative source. Synthetic fallback is not allowed for evidence-bearing runs.
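The CDF transform is easy to picture with a minimal sketch. This is illustrative only: the actual `tabnetics.distribution` machinery adds bootstrap GOF calibration and L-moment prescreening, and screens 20+ families rather than the four assumed here.

```python
import numpy as np
from scipy import stats

# A small stand-in for the 20+ candidate families Tabnetics screens.
CANDIDATES = [stats.norm, stats.lognorm, stats.gamma, stats.expon]

def cdf_transform(column: np.ndarray) -> np.ndarray:
    """Map one feature through the CDF of its best-fitting family,
    chosen by Kolmogorov-Smirnov distance."""
    fits = [(dist, dist.fit(column)) for dist in CANDIDATES]
    dist, params = min(
        fits, key=lambda f: stats.kstest(column, f[0].name, args=f[1]).statistic
    )
    return dist.cdf(column, *params)

rng = np.random.default_rng(0)
x = rng.lognormal(sigma=1.0, size=200)  # skewed toy feature
u = cdf_transform(x)                    # approximately uniform on [0, 1]
```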
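The train-only rule can likewise be illustrated with plain scikit-learn (a sketch of the discipline, not the Tabnetics pipeline itself): putting selection inside a `Pipeline` guarantees it is re-fitted on each training fold, so no test-fold statistics leak into feature selection.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Toy p >> n data: 60 samples, 2,000 features.
X, y = make_classification(n_samples=60, n_features=2000, n_informative=10,
                           random_state=0)

# Selection lives INSIDE the pipeline, so each CV fold re-fits it on
# training data only; selecting features before CV would leak.
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=50)),
    ("clf", LogisticRegression(max_iter=1000)),
])
print(cross_val_score(pipe, X, y, cv=5).mean())
```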
## Quick start
```python
from tabnetics.pipeline import DistributionFeatureSelectionPipeline, DFFSConfig

# X: (n_samples, n_features) feature matrix; y: class labels
config = DFFSConfig(random_seed=42)
pipeline = DistributionFeatureSelectionPipeline(config)
result = pipeline.run(X, y, dataset_name="my_dataset", seed=42)

print(f"Accuracy: {result.accuracy:.3f}")
print(f"Selected features: {result.selected_features}")
```
## Package structure
| Subpackage | Purpose |
|---|---|
| `tabnetics.core` | MNPO game-theoretic primitives, sklearn compatibility layer, runtime configuration |
| `tabnetics.distribution` | Univariate distribution fitting (20+ families), bootstrap GOF, CDF-based transforms |
| `tabnetics.feature_selection` | 30 selection methods, MNPO portfolio aggregation, copula knockoffs, stability selectors |
| `tabnetics.classification` | Regime-aware classifier pools, MNPO classifier oracle, PLS-DA, conformal helpers |
| `tabnetics.pipeline` | End-to-end DF+FS+classification pipeline with leakage prevention |
| `tabnetics.datasets` | Dataset registry, HuggingFace/OpenML loaders, meta-feature extraction |
| `tabnetics.domains` | Domain adapters (bioinformatics prefilters, face-domain projection) |
| `tabnetics.multiomics` | Multi-block PLS-DA (DIABLO-style) and MINT batch-correction integration |
| `tabnetics.benchmarks` | Benchmark runner, method-set profiles, SOTA comparison, gaming detection |
| `tabnetics.validation` | Validation campaign planner, shard execution, promotion gates |
## Feature selection methods
The `FeatureSelector` supports 30 methods out of the box, including:
| Category | Methods |
|---|---|
| Stability selectors | Lasso stability, subspace stability, decorrelated stability, cluster stability, TIGRESS |
| Wrapper methods | RFECV (SVM, RF, LR), Boruta |
| Filter methods | ANOVA F-test, mutual information, mRMR, JMI, CMIM, FCBF, Wilcoxon AUC |
| Tree-based | GBDT importance, TreeSHAP, random forest |
| Knockoff methods | Copula knockoff (D-vine, FDR-controlled via e-values), derandomized knockoffs |
| Embedded | OA-Elastic Net, Joint AUC+L1, HSIC-Lasso |
| Other | IPSS, k-TSP, OVA/ECOC wrappers, Rashomon importance |
Methods are aggregated via MNPO with configurable oracle presets (`minimal`, `perf_only`, `perf_complexity`, `full`, etc.); a simplified aggregation sketch follows.
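For intuition only, here is a much simpler Borda-style rank aggregation over three off-the-shelf selectors. This is a stand-in to show what aggregating a portfolio means, not the MNPO algorithm, and everything below is illustrative.

```python
# Borda-style rank aggregation across three off-the-shelf selectors.
# An intuition pump only: MNPO solves a game between methods instead
# of averaging ranks.
import numpy as np
from scipy.stats import rankdata
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif

X, y = make_classification(n_samples=80, n_features=500, n_informative=8,
                           random_state=0)  # toy p >> n data

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
portfolio = [
    f_classif(X, y)[0],                         # ANOVA F-test filter
    mutual_info_classif(X, y, random_state=0),  # mutual-information filter
    rf.feature_importances_,                    # tree-based importance
]

# Rank features within each method (best = rank 1), average the ranks,
# and keep the 20 features with the best mean rank.
mean_rank = np.mean([rankdata(-s) for s in portfolio], axis=0)
selected = np.argsort(mean_rank)[:20]
print(sorted(selected.tolist()))
```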
See BACKGROUND.md for the full list of implemented papers, USING.md for detailed usage, and RESULTS.md for benchmark results.
## Installation
Core dependencies (numpy, pandas, scipy, scikit-learn):

```bash
pip install tabnetics
```

With optional feature-selection extras (boruta, copula support, conformal prediction):

```bash
pip install tabnetics[feature-selection-optional]
```

With full benchmark support (FLAML, LightGBM, XGBoost, TabPFN, etc.):

```bash
pip install tabnetics[benchmarks]
```
### Requirements
- Python >= 3.11
- numpy, pandas, scipy, scikit-learn (core)
- See `pyproject.toml` for optional dependency groups
## Development
```bash
git clone https://github.com/klokedm/tabnetics-public.git
cd tabnetics-public
pip install -e ".[dev]"
pytest
```
## License
Apache 2.0 — see LICENSE.