histotuner
Supported token-extraction backends
histotuner can append multiple model-specific token tables into the same
SpatialData Zarr while keeping shared geometry layers model-agnostic.
Currently supported token extractors:
- `hf-hub:bioptimus/H-optimus-1`
- `hf-hub:MahmoodLab/UNI2-h`
- `hf-hub:paige-ai/Virchow2`
- `hf-hub:Wangyh/mSTAR`
- `hf-hub:prov-gigapath/prov-gigapath`
- `owkin/phikon-v2`
- `MahmoodLab/conchv1_5`
- `WenchuanZhang/Patho-CLIP-L`
- `majiabo/GPFM`
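The append-only layout described above can be illustrated with a minimal sketch. The key names and embedding dimensions below are hypothetical, not histotuner's actual schema; the point is that geometry is stored once while each backend contributes its own token table:

```python
import numpy as np

# Hypothetical illustration of the layout: one shared geometry layer,
# one token table per model (keys and dims are made up for this sketch).
n_cells, n_tokens, dim = 4, 14 * 14, 8

store = {
    "shapes/cell_polygons": np.zeros((n_cells, 2)),  # model-agnostic geometry
    "tables/tokens__UNI2-h": np.zeros((n_cells, n_tokens, dim)),
    "tables/tokens__H-optimus-1": np.zeros((n_cells, n_tokens, dim)),
}

# Adding another backend later only appends a new table;
# the shared geometry layer is untouched.
store["tables/tokens__Virchow2"] = np.zeros((n_cells, n_tokens, dim))
```

Because every table shares the same 14x14 (= 196) token axis, downstream code can iterate over the `tables/` entries without model-specific branching.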
Token-grid semantics
All currently supported models export a unified 14x14 token grid so token
tables can be compared directly across models.
- `phikon-v2` exports a native 14x14 patch-token grid.
- `hf-hub:bioptimus/H-optimus-1`, `hf-hub:Wangyh/mSTAR`, and `hf-hub:prov-gigapath/prov-gigapath` export native 14x14 grids.
- `hf-hub:MahmoodLab/UNI2-h` and `hf-hub:paige-ai/Virchow2` have native 16x16 patch-token grids after special tokens are stripped, and histotuner adaptively average-pools them to 14x14.
- `conchv1_5` is special:
  - the native vision encoder runs at 448x448 with a patch size of 16
  - that produces a native 28x28 patch-token grid
  - histotuner average-pools each non-overlapping 2x2 token neighborhood to export a compatibility 14x14 token grid
- `Patho-CLIP-L` is also special:
  - the native CLIP-L/14 vision encoder produces a 24x24 patch-token grid at 336x336 input resolution
  - histotuner adaptively average-pools that native 24x24 grid to export a compatibility 14x14 token grid
- `GPFM` is also special:
  - the native DINOv2 ViT-L/14 encoder produces a 16x16 patch-token grid at 224x224 input resolution
  - histotuner adaptively average-pools that native 16x16 grid to export a compatibility 14x14 token grid
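The `conchv1_5` case is the simplest pooling, since 28 divides evenly into 14. A minimal sketch of non-overlapping 2x2 average pooling (not histotuner's actual code) shows how a 28x28 token grid collapses to 14x14:

```python
import numpy as np

def pool_2x2(tokens: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 neighborhood of a (g, g, d) grid."""
    g, _, d = tokens.shape                 # e.g. (28, 28, d)
    # Row-major reshape groups each 2x2 block, then we average over it.
    return tokens.reshape(g // 2, 2, g // 2, 2, d).mean(axis=(1, 3))

tokens = np.arange(28 * 28, dtype=float).reshape(28, 28, 1)
pooled = pool_2x2(tokens)                  # shape (14, 14, 1)
```

Each output token is the exact mean of four adjacent input tokens, so no spatial information is blended across neighborhood boundaries.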
That pooling choice is deliberate so downstream single-cell workflows can
consume every supported model through the same 14x14 token layout. For the
pooled models, this is a compatibility semantic rather than the model's native
tokenization:
- `UNI2-h` and `Virchow2`: pooled from native 16x16
- `conchv1_5`: pooled from native 28x28
- `Patho-CLIP-L`: pooled from native 24x24
- `GPFM`: pooled from native 16x16
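For the 16x16 and 24x24 grids, 14 does not divide the input evenly, which is why adaptive average pooling is needed. The sketch below (an assumption about what "adaptively average-pools" means here, following the common bin-edge convention used by e.g. `torch.nn.AdaptiveAvgPool2d`) shows the idea in plain numpy:

```python
import math
import numpy as np

def adaptive_avg_pool(tokens: np.ndarray, out: int = 14) -> np.ndarray:
    """Pool an (n, n, d) token grid to (out, out, d) with adaptive bins.

    Bin i covers rows [floor(i*n/out), ceil((i+1)*n/out)); adjacent bins
    may overlap by one row/column when n is not a multiple of out.
    """
    n = tokens.shape[0]
    # Pool rows first: each output row is the mean of its input-row bin.
    rows = np.stack([
        tokens[math.floor(i * n / out):math.ceil((i + 1) * n / out)].mean(axis=0)
        for i in range(out)
    ])                                          # (out, n, d)
    # Then pool columns the same way.
    return np.stack([
        rows[:, math.floor(j * n / out):math.ceil((j + 1) * n / out)].mean(axis=1)
        for j in range(out)
    ], axis=1)                                  # (out, out, d)
```

The same function handles both the 16x16 and 24x24 native grids, which keeps the compatibility layer uniform across backends.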
Not yet supported for token extraction
- none from the current requested set
Download files
File details
Details for the file histotuner-0.2.6.tar.gz.
File metadata
- Download URL: histotuner-0.2.6.tar.gz
- Upload date:
- Size: 128.1 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `abcb0999e0811b49b564facf54da7393ef46b1138dbdf3a0c2aa4aab14fdfa7a` |
| MD5 | `9cc30053e279da6649edc372b73fabde` |
| BLAKE2b-256 | `d535cbf5e978ae832eb1628367c7317138732649872322d2222e36d3d4e592ad` |
File details
Details for the file histotuner-0.2.6-py3-none-any.whl.
File metadata
- Download URL: histotuner-0.2.6-py3-none-any.whl
- Upload date:
- Size: 138.5 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: uv/0.8.13
File hashes
| Algorithm | Hash digest |
|---|---|
| SHA256 | `b285ee0b641f4e8ab68fb75ac0b1e60d76f13aa3c593d2bcfeb8d3552fda0eae` |
| MD5 | `cea9e6d2235517b27a0c8f8f000c73e1` |
| BLAKE2b-256 | `7783f6e35907ec2a10c7a861e4b205b2df47c4b91a9be32b95231310241a4787` |