
SPECTRE: cross-modal self-supervised pretraining for CT representation extraction


📢 [2026-02-21] SPECTRE has been accepted for presentation at CVPR 2026 (Denver, Colorado, USA)!

📢 [2026-01-20] Semantic segmentation code and configurations using the nnUNet framework are now released!

SPECTRE 👻👻👻


[Figure: SPECTRE architecture and pretraining strategies]

SPECTRE (Self-Supervised & Cross-Modal Pretraining for CT Representation Extraction) is a Transformer-based foundation model for 3D Computed Tomography (CT) scans, trained using self-supervised learning (SSL) and cross-modal vision–language alignment (VLA). It provides rich and generalizable representations from medical imaging data, which can be fine-tuned for downstream tasks such as segmentation, classification, and anomaly detection.

SPECTRE has been trained on a large cohort of open-source CT scans of the human abdomen and thorax, as well as paired radiology reports and Electronic Health Record data, enabling it to capture representations that generalize across datasets and clinical settings.

This repository provides pretrained SPECTRE models together with tools for fine-tuning and evaluation.
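As a toy illustration of the downstream use mentioned above, a simple recipe is a linear probe trained on frozen, precomputed SPECTRE features. Everything below (the feature dimension, the number of classes, and the random stand-in features) is an assumption for illustration, not the package's API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: the true SPECTRE feature dimension may differ.
feat_dim, num_classes = 1024, 2

# Linear probe on top of frozen, precomputed scan-level features.
probe = nn.Linear(feat_dim, num_classes)
optimizer = torch.optim.AdamW(probe.parameters(), lr=1e-3)

features = torch.randn(4, feat_dim)   # stand-in for extracted features
labels = torch.tensor([0, 1, 0, 1])   # stand-in binary labels

logits = probe(features)              # (4, num_classes)
loss = F.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
```

For full fine-tuning, the same pattern applies with the backbone unfrozen and included in the optimizer's parameter list.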

🧠 Pretrained Models

The pretrained SPECTRE model can easily be imported as follows:

from spectre import SpectreImageFeatureExtractor, MODEL_CONFIGS
import torch

config = MODEL_CONFIGS['spectre-large-pretrained']
model = SpectreImageFeatureExtractor.from_config(config)
model.eval()

# Dummy input: (batch, crops, channels, height, width, depth)
# For a (3 x 3 x 4) grid of (128 x 128 x 64) CT patches -> Total scan size (384 x 384 x 256)
x = torch.randn(1, 36, 1, 128, 128, 64)
with torch.no_grad():
    features = model(x, grid_size=(3, 3, 4))
print("Features shape:", features.shape)
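The dummy input above stands in for a scan that has been cut into a non-overlapping grid of crops. A minimal sketch of that cropping step, assuming a single-channel volume whose spatial size divides evenly by the crop size (the helper name `volume_to_crops` is ours, not part of the package):

```python
import torch

def volume_to_crops(volume, crop_size=(128, 128, 64)):
    """Split a (C, H, W, D) volume into non-overlapping crops of crop_size."""
    c, h, w, d = volume.shape
    ch, cw, cd = crop_size
    gh, gw, gd = h // ch, w // cw, d // cd  # grid size per axis
    crops = (
        volume
        .unfold(1, ch, ch)                      # (C, gh, W, D, ch)
        .unfold(2, cw, cw)                      # (C, gh, gw, D, ch, cw)
        .unfold(3, cd, cd)                      # (C, gh, gw, gd, ch, cw, cd)
        .permute(1, 2, 3, 0, 4, 5, 6)           # (gh, gw, gd, C, ch, cw, cd)
        .reshape(gh * gw * gd, c, ch, cw, cd)   # (crops, C, ch, cw, cd)
    )
    return crops, (gh, gw, gd)

# A (384 x 384 x 256) scan yields the (3 x 3 x 4) grid of 36 crops used above.
volume = torch.randn(1, 384, 384, 256)
crops, grid = volume_to_crops(volume)
x = crops.unsqueeze(0)  # (1, 36, 1, 128, 128, 64)
```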

Alternatively, you can download the weights of the separate components through HuggingFace using the following links:

Architecture                Input Modality      Pretraining Objective   Model Weights
SPECTRE-ViT-Local           CT crops            SSL                     Link
SPECTRE-ViT-Local           CT crops            SSL + VLA               Link
SPECTRE-ViT-Global          Embedded CT crops   VLA                     Link
Qwen3-Embedding-0.6B LoRA   Text (radiology)    VLA                     Link
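Once a checkpoint has been downloaded, it can typically be loaded as a standard PyTorch state dict. The filename and the key below are placeholders, and the dummy save exists only so the snippet is runnable; the real checkpoint format on HuggingFace may differ:

```python
import os
import tempfile
import torch

# Placeholder path; substitute the file downloaded from HuggingFace.
ckpt_path = os.path.join(tempfile.gettempdir(), "spectre_vit_local_ssl.pt")

# For illustration only: write a dummy state dict so the snippet runs end to end.
torch.save({"patch_embed.weight": torch.zeros(8, 1, 4, 4, 4)}, ckpt_path)

state_dict = torch.load(ckpt_path, map_location="cpu")
# strict=False tolerates missing/extra keys, e.g. when a task head is absent:
# model.load_state_dict(state_dict, strict=False)
```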

🩻 Segmentation (nnUNet)

If you're looking for an nnUNet-based segmentation pipeline that uses SPECTRE as the backbone, see: https://github.com/cviviers/nnUNet

📂 Repository Contents

This repository is organized as follows:

  • 🚀 src/spectre/ – Contains the core package, including:

    • Pretraining methods
    • Model architectures
    • Data handling and transformations
  • 🛠️ src/spectre/configs/ – Stores configuration files for different training settings.

  • 🔬 experiments/ – Includes Python scripts for running various pretraining and downstream experiments.

  • 🐳 Dockerfile – Defines the environment for running a local version of SPECTRE inside a container.

⚙️ Setting Up the Environment

To get up and running with SPECTRE, simply install our package using pip:

pip install spectre-fm

or install the latest updates directly from GitHub:

pip install git+https://github.com/cclaess/SPECTRE.git

🐳 Building and Using Docker

To facilitate deployment and reproducibility, SPECTRE can be run inside Docker. This lets you set up a fully functional environment from your local copy of spectre without manually installing any dependencies.

Building the Docker Image

First, ensure you have Docker installed. Then, clone and navigate to the repository to build the image:

git clone https://github.com/cclaess/SPECTRE
cd SPECTRE
docker build -t spectre-fm .

Running Experiments Inside Docker

Once the image is built, you can start a container and execute scripts inside it. For example, to run a DINO pretraining experiment:

docker run --gpus all --rm -v "$(pwd):/mnt" spectre-fm \
    python3 experiments/pretraining/pretrain_dino.py \
    --config_file spectre/configs/dino_default.yaml \
    --output_dir /mnt/outputs/pretraining/dino/

  • --gpus all enables GPU acceleration if available.
  • --rm removes the container after execution.
  • -v "$(pwd):/mnt" mounts the current directory inside the container at /mnt.

⚖️ License

  • Code: MIT — see LICENSE (permissive; commercial use permitted).
  • Pretrained model weights: CC-BY-NC-SA — non-commercial share-alike. The weights and any derivative models that include these weights are NOT cleared for commercial use. See LICENSE_MODELS for details and the precise license text.

Note: the pretrained weights are subject to the original dataset licenses. Users intending to use SPECTRE in commercial settings should verify dataset and model licensing and obtain any required permissions.

📜 Citation

If you use SPECTRE in your research or wish to cite it, please use the following BibTeX entry for our preprint:

@misc{claessens_scaling_2025,
  title = {Scaling {Self}-{Supervised} and {Cross}-{Modal} {Pretraining} for {Volumetric} {CT} {Transformers}},
  author = {Claessens, Cris and Viviers, Christiaan and D'Amicantonio, Giacomo and Bondarev, Egor and van der Sommen, Fons},
  year = {2025},
  url = {http://arxiv.org/abs/2511.17209},
  doi = {10.48550/arXiv.2511.17209},
}

🤝 Acknowledgements

This project builds upon prior work in self-supervised learning, medical imaging, and transformer-based representation learning. We especially acknowledge MONAI for their awesome framework and the timm & lightly Python libraries for providing 2D PyTorch models (timm) and object-oriented self-supervised learning methods (lightly), from which we adapted parts of the code for 3D.

