Multitask learning framework for medical data

MedicalMultitaskModeling

The project enables training foundational medical imaging models using multi-task learning.

The software is provided on an "AS IS" basis, i.e. it comes without any warranty, express or implied, including (without limitation) any warranty of merchantability or fitness for a particular purpose.

Please note that this software is licensed under the LICENSE FOR SCIENTIFIC NON-COMMERCIAL RESEARCH PURPOSES, see license.md.

Installation

To install the project and its dependencies, run the following command:

pip install medicalmultitaskmodeling
# Including the extra dependency groups "interactive" and "testing", recommended for development
# (quoted so the shell does not split or glob the extras):
pip install "medicalmultitaskmodeling[interactive,testing]"
# The latest main branch from https://github.com/FraunhoferMEVIS/MedicalMultitaskModeling
pip install git+https://github.com/FraunhoferMEVIS/MedicalMultitaskModeling.git
# A specific commit
pip install git+https://github.com/FraunhoferMEVIS/MedicalMultitaskModeling.git@<commit-hash>

# Verify system dependencies (run in a Python interpreter)
import cv2; import torch; assert torch.cuda.is_available()
# Verify MMM
from mmm.interactive import *

You can check the pyproject.toml file to see all available extras.

Usage

# See our tutorial notebooks in the Quickstart Guide for more details.
from mmm.labelstudio_ext.NativeBlocks import NativeBlocks, MMM_MODELS, DEFAULT_MODEL
model = NativeBlocks(MMM_MODELS[DEFAULT_MODEL], device_identifier="cuda:0")

import torch; import torch.nn as nn
with torch.inference_mode():
    feature_pyramid: list[torch.Tensor] = model["encoder"](torch.rand(1, 3, 224, 224).to(model.device))
    hidden_vector = nn.Flatten(1)(model["squeezer"](feature_pyramid)[1])
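
The hidden vector can serve as a generic image embedding. Below is a minimal sketch, assuming the model loaded above (torch.rand stands in for real preprocessed images), that compares two images via the cosine similarity of their embeddings:

# Sketch: reuse the encoder and squeezer blocks shown above as an embedding function
import torch
import torch.nn as nn
import torch.nn.functional as F

def embed(image: torch.Tensor) -> torch.Tensor:
    with torch.inference_mode():
        pyramid = model["encoder"](image.to(model.device))
        return nn.Flatten(1)(model["squeezer"](pyramid)[1])

a, b = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
print(F.cosine_similarity(embed(a), embed(b)).item())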

Quickstart Guide

To begin training multi-task models, you can use our quickstart.ipynb getting-started notebook. We recommend using the directory layout created by our template, as follows:

  1. Install the 'copier' package using pipx:
# We use pipx to install copier in an isolated environment. We use copier to scaffold the code for an experiment. At the time of writing, we used copier version 9.2.0.
pipx install copier
  2. Use the template from a local 'medicalmultitaskmodeling' checkout to create a scaffold for your experiment:
# To create a new experiment next to your checkout of medicalmultitaskmodeling
copier copy ../medicalmultitaskmodeling/copier_template/ .

Using VSCode development container

  1. Open the development container using VSCode via the command @command:remote-containers.rebuildAndReopenInContainer. This requires the extension ms-vscode-remote.remote-containers.

  2. Inside the development container, run the VSCode task Prepare environment (@command:workbench.action.tasks.runTask), which will reload the window once it is done.

  3. Run the quickstart.ipynb notebook to start your training and learn about this project.

Using virtualenv

If you prefer to use a virtual environment instead of a container, follow these steps:

  1. Create a new virtual environment in your template directory: virtualenv venv
  2. Activate the virtual environment using source venv/bin/activate. On Windows, use ./venv/Scripts/activate.
  3. Install the 'medicalmultitaskmodeling' package and its dependencies in the virtual environment:
pip install medicalmultitaskmodeling[interactive]
# Or with a local checkout, and using JupyterLab:
pip install /your/local/path/medicalmultitaskmodeling[interactive] jupyterlab
  4. Run the quickstart.ipynb notebook. We recommend opening the folder in VSCode. Alternatively, you can run LOCAL_DEV_ENV=True jupyter lab and visit the link starting with http://localhost:8888/.

System dependencies

We strongly recommend using MMM with our public Docker images. If that is not possible, set up GPU support, verify it with nvidia-smi, and run:

sudo apt install python3-opencv -y

Development

  1. Initialize a Poetry environment: poetry init
  2. Add the package as a git submodule: git submodule add <repository-url>
  3. Add the package in editable mode, including the interactive and testing extras: poetry add ./medicalmultitaskmodeling/ --editable -E interactive -E testing
  4. For practical examples on how to get started with development, refer to one of our projects, such as UMedPT.

Docker images

# Verify your GPU Docker setup using the hello-world image:
docker run --rm --gpus=all hello-world
# Only system requirements:
MMMVERSION=$(poetry version -s) && docker pull hub.cc-asp.fraunhofer.de/medicalmultitaskmodeling/mmm-base:$MMMVERSION
# Verify with
MMMVERSION=$(poetry version -s) && docker run --rm -it --gpus=all hub.cc-asp.fraunhofer.de/medicalmultitaskmodeling/mmm-base:$MMMVERSION nvidia-smi
# With dependencies pre-installed:
MMMVERSION=$(poetry version -s) && docker pull hub.cc-asp.fraunhofer.de/medicalmultitaskmodeling/mmm-stack:$MMMVERSION

Start local infrastructure and inference API with Docker Compose

# Profiles:
# - inference runs MMM inference container
# - storage runs network drive based on S3 and JuiceFS
# - annotation runs Labelstudio annotation GUI
# - empaia runs infrastructure for gigapixel imaging
MMMVERSION=$(poetry version -s) docker compose --profile inference --profile storage --profile annotation --profile empaia up --build --remove-orphans -d

Citation

If you use this project, please cite our work:

@misc{schäfer2023overcoming,
      title={Overcoming Data Scarcity in Biomedical Imaging with a Foundational Multi-Task Model}, 
      author={Raphael Schäfer and Till Nicke and Henning Höfener and Annkristin Lange and Dorit Merhof and Friedrich Feuerhake and Volkmar Schulz and Johannes Lotz and Fabian Kiessling},
      year={2023},
      eprint={2311.09847},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Repository Structure

For more detailed information, please refer to the docstrings within each directory.

  • torch_ext: Contains Torch utilities that, while not specific to multi-task learning, can simplify its implementation. This includes our caching utilities.
  • task_sampling: Provides utilities for enumerating tasks in a way that integrates with PyTorch; see the sketch after this list.
  • inference_api: Starting point for our inference and few-shot-training FastAPI.
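
The following is an illustrative sketch, not the library's actual API: round-robin enumeration of per-task DataLoaders is the basic pattern that task_sampling generalizes.

# Hypothetical sketch: alternate between tasks by cycling their DataLoaders
from itertools import cycle
import torch
from torch.utils.data import DataLoader, TensorDataset

loaders = {
    "segmentation": DataLoader(TensorDataset(torch.rand(8, 3, 32, 32)), batch_size=4),
    "classification": DataLoader(TensorDataset(torch.rand(8, 3, 32, 32)), batch_size=4),
}
iterators = {name: iter(loader) for name, loader in loaders.items()}
task_order = cycle(loaders)  # segmentation, classification, segmentation, ...
for _ in range(4):
    task_name = next(task_order)
    (batch,) = next(iterators[task_name])  # TensorDataset yields 1-tuples
    print(task_name, batch.shape)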

data_loading

This directory contains tools for loading medical data and annotations, supporting formats such as NIfTI, DICOM, and GeoJSON. It also contains the annotation-type-specific dataset wrappers, such as SemSegDataset, which are responsible for data verification and visualization.
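
As a rough sketch of the wrapper idea (the class below is hypothetical; only torch is assumed), such a dataset can verify that every image/mask pair is consistent before it reaches training:

# Hypothetical sketch of shape verification in a semantic segmentation wrapper
import torch
from torch.utils.data import Dataset

class VerifiedSemSegDataset(Dataset):
    def __init__(self, pairs: list[tuple[torch.Tensor, torch.Tensor]]):
        self.pairs = pairs

    def __len__(self) -> int:
        return len(self.pairs)

    def __getitem__(self, idx: int):
        image, mask = self.pairs[idx]
        # Verify that the mask covers the image's spatial dimensions
        assert image.shape[-2:] == mask.shape[-2:], "image/mask size mismatch"
        return image, mask

ds = VerifiedSemSegDataset([(torch.rand(3, 64, 64), torch.zeros(64, 64, dtype=torch.long))])
image, mask = ds[0]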

interactive

This directory has been restructured to allow for easy importing in interactive environments like Jupyter. For instance, you can import several modules with a single line:

from mmm.interactive import blocks, configs as cfs, data, tasks, training, pipes

logging

Here you'll find utilities that integrate with our logging and visualization tools.

mtl_modules

This directory houses multi-task learning types, such as PyramidEncoder, and specific tasks.
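
As a conceptual sketch (a toy module, not the actual PyramidEncoder), a pyramid encoder returns feature maps at successively halved resolutions, which is the shape of the feature_pyramid used in the usage example above:

# Toy pyramid encoder: each stage halves the spatial resolution
import torch
import torch.nn as nn

class ToyPyramidEncoder(nn.Module):
    def __init__(self, channels=(3, 16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList(
            nn.Sequential(nn.Conv2d(cin, cout, 3, stride=2, padding=1), nn.ReLU())
            for cin, cout in zip(channels, channels[1:])
        )

    def forward(self, x: torch.Tensor) -> list[torch.Tensor]:
        pyramid = []
        for stage in self.stages:
            x = stage(x)
            pyramid.append(x)
        return pyramid

levels = ToyPyramidEncoder()(torch.rand(1, 3, 224, 224))
print([tuple(t.shape) for t in levels])  # 112x112, 56x56, and 28x28 feature maps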

neural

This directory contains PyTorch modules that are not based on our multi-task learning types.

optimization

This is the home of MTLOptimizer. It integrates several PyTorch optimizers with our training strategy and employs the ZeroRedundancyOptimizer strategy for distributed training.
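
ZeroRedundancyOptimizer itself is standard PyTorch (torch.distributed.optim) and shards optimizer state across ranks. The sketch below shows only this underlying mechanism, run as a single-process world for illustration; it is not the MTLOptimizer API, and real training would be launched via torchrun:

# Minimal single-process demonstration of ZeroRedundancyOptimizer
import os
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

net = torch.nn.Linear(8, 2)
opt = ZeroRedundancyOptimizer(net.parameters(), optimizer_class=torch.optim.AdamW, lr=1e-3)
net(torch.rand(4, 8)).sum().backward()
opt.step()
dist.destroy_process_group()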

resources

This directory contains static files, like HTML templates for logging.

trainer

The Loop class, used by the MtlTrainer class to execute multi-task learning, is located here.
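
As an illustrative sketch in plain PyTorch (not the actual Loop/MtlTrainer API), a multi-task loop draws a batch for one task per step, passes it through a shared encoder and that task's head, and backpropagates the task loss:

# Hypothetical multi-task step: shared encoder, per-task heads
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
heads = nn.ModuleDict({"clf": nn.Linear(64, 5), "reg": nn.Linear(64, 1)})
opt = torch.optim.AdamW(list(encoder.parameters()) + list(heads.parameters()))

for task in ["clf", "reg", "clf", "reg"]:
    features = encoder(torch.rand(4, 3, 32, 32))
    if task == "clf":
        loss = F.cross_entropy(heads[task](features), torch.randint(0, 5, (4,)))
    else:
        loss = F.mse_loss(heads[task](features), torch.rand(4, 1))
    opt.zero_grad()
    loss.backward()
    opt.step()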
