Project description
DANTE — DAfNe TrainEr
PyTorch-based model trainer for the Dafne segmentation framework. Trains 2D and 3D U-Net-style models on medical images and serializes them into the .model format used by dafne-dl.
Installation
pip install dante-trainer
Requires Python >= 3.9. A CUDA-capable GPU is strongly recommended for training.
Entry points
| Command | Description |
|---|---|
| `dante` | Launch the PyQt5 GUI trainer |
| `dante_train` | Command-line training interface |
Input data format
Training data must be .npz files, each containing:
- `data`: the image volume (NumPy array)
- `mask_<label>`: one binary mask per anatomical structure (e.g. `mask_muscle`, `mask_femur`)
- `resolution`: voxel spacing array
The data folder is scanned recursively. All .npz files found are split into train and validation sets automatically.
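For reference, here is a minimal sketch of writing one such file with NumPy. Only the key names (`data`, `mask_<label>`, `resolution`) are prescribed by the format; the shapes, dtypes, and axis order below are illustrative assumptions:

```python
import numpy as np

# Illustrative shapes and dtypes; only the key names are mandated by the format.
volume = np.random.rand(64, 256, 256).astype(np.float32)  # image volume
muscle_mask = (volume > 0.5).astype(np.uint8)             # one binary mask per structure
femur_mask = (volume > 0.8).astype(np.uint8)
spacing = np.array([3.0, 1.0, 1.0])                       # voxel spacing per axis

np.savez_compressed(
    "case_001.npz",
    data=volume,
    mask_muscle=muscle_mask,
    mask_femur=femur_mask,
    resolution=spacing,
)
```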
Output
All files produced by a training run are saved inside a dedicated folder named after the model, created automatically under the output directory. For example, given --output /models/mymodel.model, the following structure is created:
/models/mymodel/
mymodel.model # final serialized model (DynamicTorchModel format)
mymodel_best_model.pth # best checkpoint by validation Dice (removed after packaging)
mymodel.csv # per-epoch metrics log
logs/
train/ # TensorBoard training logs
val/ # TensorBoard validation logs
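While a run is in progress, the logs can be followed with TensorBoard (the path here matches the example layout above):

tensorboard --logdir /models/mymodel/logs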
The .model file embeds:
- model weights
- network architecture metadata (model name, spatial dims, patch size, spacing, etc.)
- training metadata
- EWC snapshot (Fisher Information Matrix + parameter snapshot, used for continual learning)
- a dependency hint pointing to `dafne-monai-inference` for inference-time use
CLI usage
Training from scratch
dante_train --data <data_dir> --output <output_path> [options]
| Argument | Short | Default | Description |
|---|---|---|---|
| `--data` | `-d` | required | Path to the folder containing training data |
| `--output` | `-o` | required | Output path for the `.model` file |
| `--epochs` | | 50 | Number of training epochs |
| `--batch-size` | | 2 | Batch size |
| `--lr` | | 0.001 | Learning rate |
| `--3d` | | off | Train a 3D model (default: 2D) |
| `--dynunet` | | off | Use Dynamic U-Net with auto-computed parameters |
| `--levels` | | 5 | Number of U-Net encoder/decoder levels |
| `--kernel-size` | | 3 | Convolution kernel size |
| `--conv-layers` | | 2 | Number of convolutional layers per level |
| `--early-stopping` | | off | Stop training when validation loss stops improving |
| `--mixed-precision` | | off | Enable AMP (automatic mixed precision) |
| `--scheduler` | | off | Enable learning rate scheduler |
Example:
dante_train -d /data/training_set -o /models/my_model.model --epochs 100 --lr 0.0005 --early-stopping
Fine-tuning an existing model
Pass --pretrained with the path to an existing .model file, and set --mode to finetune, lora, or continual.
dante_train --data <data_dir> --output <output_path> --pretrained <model_path> --mode finetune [options]
| Argument | Default | Description |
|---|---|---|
| `--pretrained` | none | Path to a pretrained `.model` file |
| `--mode` | `scratch` | Training mode: `scratch`, `finetune`, `lora`, or `continual` |
| `--freeze-degree` | 0.5 | Fraction of layers to freeze (used with `--mode finetune`) |
| `--gradual-unfreeze` | off | Gradually unfreeze frozen layers during training |
| `--lora-rank` | 8 | LoRA rank (used with `--mode lora`) |
| `--lora-alpha` | 16 | LoRA alpha scaling factor (used with `--mode lora`) |
| `--lambda-reg` | 1.0 | EWC regularization weight (used with `--mode continual`) |
Example — fine-tuning with 70% of layers frozen:
dante_train -d /data/new_data -o /models/finetuned.model --pretrained /models/base.model \
--mode finetune --freeze-degree 0.7 --gradual-unfreeze --epochs 30
Example — LoRA adaptation:
dante_train -d /data/new_data -o /models/lora.model --pretrained /models/base.model \
--mode lora --lora-rank 8 --lora-alpha 16 --epochs 30
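Conceptually, LoRA keeps each pretrained weight matrix W frozen and learns a low-rank update, computing W·x + (alpha/rank)·B·A·x with small trainable matrices A and B. The sketch below illustrates the idea for a linear layer; it is not dante's internal implementation, and the class and attribute names are hypothetical:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (hypothetical sketch)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # freeze the pretrained weights
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank          # --lora-alpha / --lora-rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update; only lora_a/lora_b get gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)
```

Only `lora_a` and `lora_b` are updated by the optimizer, which is why `--lora-rank` and `--lora-alpha` are the knobs that matter: the rank bounds the capacity of the update, and alpha scales its contribution.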
Training modes
- From scratch (`--mode scratch`): network architecture and preprocessing are derived automatically from dataset statistics (median spacing, median shape, label count).
- Fine-tuning (`--mode finetune`): loads an existing `.model` file and resumes training, preserving the original architecture. Supports partial freezing and gradual unfreezing.
- LoRA (`--mode lora`): injects low-rank adapter layers into the frozen base model (see the sketch above). Only the adapter weights are trained. Useful for adaptation with very little data.
- Continual learning (`--mode continual`): fine-tunes on a new task while penalizing changes to weights that were important for the previous task, using Elastic Weight Consolidation (EWC; see the sketch after this list). Requires `--pretrained` pointing to a `.model` file produced by a prior training run.
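For the continual mode, EWC adds a quadratic penalty to the task loss: loss = task_loss + (λ/2) · Σᵢ Fᵢ (θᵢ − θᵢ*)², where θᵢ* is the parameter snapshot and Fᵢ the Fisher information embedded in the pretrained `.model` file. A minimal sketch of the penalty term, assuming `fisher` and `old_params` are dicts keyed by parameter name (the names and dict layout are assumptions, not dante's actual internals):

```python
import torch

def ewc_penalty(model, fisher, old_params, lambda_reg=1.0):
    """EWC regularizer: lambda/2 * sum_i F_i * (theta_i - theta_i*)^2 (sketch)."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (param - old_params[name]) ** 2).sum()
    return 0.5 * lambda_reg * penalty

# In the training loop (sketch):
# total_loss = task_loss + ewc_penalty(model, fisher, old_params, lambda_reg)
```

The `--lambda-reg` flag corresponds to λ here: higher values keep the network closer to its previous-task behaviour, at the cost of plasticity on the new data.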
Project details
Download files
Source Distribution: `dante_trainer-1.0.3b1.tar.gz`
Built Distribution: `dante_trainer-1.0.3b1-py3-none-any.whl`
File details
Details for the file dante_trainer-1.0.3b1.tar.gz.
File metadata
- Download URL: dante_trainer-1.0.3b1.tar.gz
- Upload date:
- Size: 61.6 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.13
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `607b32fd8972f135292f1221258098c664e0607b74d54dddd41587bf3c7f79e1` |
| MD5 | `10b57f32f851534ef82739cc139a3b50` |
| BLAKE2b-256 | `7d25e90225cdbc564bbf420ec051b8d59e32a69813fc05caa7b7eac04dc29aa0` |
File details
Details for the file dante_trainer-1.0.3b1-py3-none-any.whl.
File metadata
- Download URL: dante_trainer-1.0.3b1-py3-none-any.whl
- Upload date:
- Size: 69.3 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/6.2.0 CPython/3.12.13
File hashes

| Algorithm | Hash digest |
|---|---|
| SHA256 | `9f12e552f459f626198bffa62af4fe8f26ac07fec53e6f4b4e557a561c78be3b` |
| MD5 | `546303c765f4409f3ebc44019fee9fc0` |
| BLAKE2b-256 | `893aef52c04949fdf437919ea9c89135e4769eff28c169b6e1fc3f28c629681e` |