
A Python package for deep learning based image-to-image transformation

Project description

MMV Im2Im Transformation


A generic python package for deep learning based image-to-image transformation in biomedical applications

The main branch will be further developed to incorporate the latest state-of-the-art techniques and methods. To reproduce the results of our manuscript, please refer to the branch paper_version. (We are actively working on the documentation and tutorials. Submit a feature request if there is anything you need.)


Overview

The overall package is designed as a generic image-to-image transformation framework, which can be directly used for semantic segmentation, instance segmentation, image restoration, image generation, labelfree prediction, staining transformation, etc. The implementation takes advantage of state-of-the-art ML engineering techniques so that users can focus on research without worrying about engineering details. In our pre-print (arxiv link), we demonstrated the effectiveness of MMV_Im2Im on more than ten different biomedical problems/datasets.

  • For computational biomedical researchers (e.g., AI algorithm development or bioimage analysis workflow development), we hope this package can serve as the starting point for their specific problems, since the image-to-image "boilerplates" can be easily extended for further development or adapted to users' specific problems.
  • For experimental biomedical researchers, we hope this work provides a comprehensive view of the image-to-image transformation concept through diversified examples and use cases, so that deep learning based image-to-image transformation can be integrated into the assay development process and enable new biomedical studies that can hardly be done with traditional experimental methods alone.

Installation

Before starting, we recommend creating a new conda environment or a virtual environment with Python 3.9+.

Please note that the proper setup of hardware is beyond the scope of this package. This package was tested with GPU/CPU on Linux/Windows and CPU on MacOS. [Special note for MacOS users: installing directly with pip on MacOS may require additional setup of Xcode.]

Install MONAI

To reproduce our results, you need to install MONAI from a specific commit. To do this:

git clone https://github.com/Project-MONAI/MONAI.git
cd ./MONAI
git checkout 37b58fcec48f3ec1f84d7cabe9c7ad08a93882c0
pip install .
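
To confirm which MONAI build ends up installed in your environment, a quick check from Python is (monai.__version__ reports the installed version string; the exact value depends on the commit checked out above):

import monai

# print the installed MONAI version string to confirm the expected build is active
print(monai.__version__)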

This step will be removed from the main branch in the future to simplify the installation of our tool.

Install MMV_Im2Im for basic usage:

(For users only using this package, not planning to change any code or make any extension):

  • Option 1 (core functionality only): pip install mmv_im2im
  • Option 2 (advanced functionality, core + logger): pip install mmv_im2im[advance]
  • Option 3 (to reproduce the paper): pip install mmv_im2im[paper]
  • Option 4 (install everything): pip install mmv_im2im[all]

For MacOS users, additional quotation marks are needed when using installation tags in zsh. For example, pip install mmv_im2im[paper] should be pip install mmv_im2im'[paper]' on MacOS.

Install MMV_Im2Im for customization or extension:

git clone https://github.com/MMV-Lab/mmv_im2im.git
cd mmv_im2im
pip install -e .[all]

Note: The -e option is the so-called "editable" mode, which allows code changes to take effect immediately. The installation tags (advance, paper, all) can be selected based on your needs.

(Optional) Install using Docker

It is also possible to use our package via Docker. The installation tutorial is here. For MacOS users in particular, please refer to this tutorial.

(Optional) Use MMV_Im2Im with Google Colab

We provide a web-based demo if cloud computing is preferred. You can open a 2D labelfree DEMO in Google Colab. The same demo can be adapted for different applications.

Quick start

You can try out a simple example by following the quick start guide.

Basically, you can specify your training configuration in a YAML file and run training with run_im2im --config /path/to/train_config.yaml. Then, you can specify the inference configuration in another YAML file and run inference with run_im2im --config /path/to/inference_config.yaml. You can also run inference as a function with the provided API, which is useful if you want to run inference within another Python script or workflow. Here is an example:

from pathlib import Path
from aicsimageio import AICSImage
from aicsimageio.writers import OmeTiffWriter
from mmv_im2im.configs.config_base import ProgramConfig, parse_adaptor, configuration_validation
from mmv_im2im import ProjectTester

# load the inference configuration
cfg = parse_adaptor(config_class=ProgramConfig, config="./paper_configs/semantic_seg_2d_inference.yaml")
cfg = configuration_validation(cfg)

# define the executor for inference
executor = ProjectTester(cfg)
executor.setup_model()
executor.setup_data_processing()

# get the data, run inference, and save the result
fn = Path("./data/img_00_IM.tiff")
img = AICSImage(fn).get_image_data("YX", Z=0, C=0, T=0)
# or using delayed loading if the data is large
# img = AICSImage(fn).get_image_dask_data("YX", Z=0, C=0, T=0)
seg = executor.process_one_image(img)
OmeTiffWriter.save(seg, "output.tiff", dim_order="YX")
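
If you need to apply the same model to a whole folder of images, the executor prepared above can be reused in a loop. Below is a minimal sketch; the folder path and file naming pattern are placeholders for illustration:

from pathlib import Path
from aicsimageio import AICSImage
from aicsimageio.writers import OmeTiffWriter

# reuse the executor set up above for every matching image in the folder
for fn in sorted(Path("./data").glob("*_IM.tiff")):
    img = AICSImage(fn).get_image_data("YX", Z=0, C=0, T=0)
    seg = executor.process_one_image(img)
    # write the prediction next to the input, e.g. img_00_IM_seg.tiff
    OmeTiffWriter.save(seg, fn.parent / (fn.stem + "_seg.tiff"), dim_order="YX")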

Tutorials, examples, demonstrations and documentation

The overall package aims to achieve both simplicity and flexibility with the modularized image-to-image boilerplates. To help different users to best use this package, we provide documentation from four different aspects:

Contribute models to BioImage Model Zoo

We highly appreciate the BioImage Model Zoo's initiative to provide a comprehensive collection of pre-trained models for a wide range of applications. To make MMV_Im2Im trained models available as well, the first step involves extracting the state_dict from the PyTorch Lightning checkpoint. This can be done via:

import torch

# path to the PyTorch Lightning checkpoint produced during training
ckpt_path = "./lightning_logs/version_0/checkpoints/last.ckpt"
checkpoint = torch.load(ckpt_path, map_location=torch.device('cpu'))
# keep only the model weights
state_dict = checkpoint['state_dict']
torch.save(state_dict, "./state_dict.pt")
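
Depending on how the LightningModule wraps the network, the keys in this state_dict may carry a prefix (for example "net."); whether and which prefix appears depends on the model definition, so treat the snippet below as a sketch for inspecting and, if necessary, stripping such a prefix before loading the weights into the bare network:

import torch

state_dict = torch.load("./state_dict.pt", map_location="cpu")
# inspect a few key names to see whether a prefix is present
print(list(state_dict.keys())[:5])

# hypothetical example: strip a "net." prefix so the weights match the bare network
# (str.removeprefix requires Python 3.9+, which matches the recommended environment)
clean_state_dict = {k.removeprefix("net."): v for k, v in state_dict.items()}
torch.save(clean_state_dict, "./state_dict_clean.pt")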

All further steps to provide models can be found in the official documentation.

Development

See CONTRIBUTING.md for information related to developing the code.

MIT license

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

mmv_im2im-0.5.2.tar.gz (61.2 kB, Source)

Built Distribution

mmv_im2im-0.5.2-py2.py3-none-any.whl (76.6 kB, Python 2 / Python 3)

File details

Details for the file mmv_im2im-0.5.2.tar.gz.

File metadata

  • Download URL: mmv_im2im-0.5.2.tar.gz
  • Upload date:
  • Size: 61.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for mmv_im2im-0.5.2.tar.gz

  • SHA256: 572c4cb2ed05dd97ce5984a0cca5d7db97e0470e6ba5c73614562512d8778474
  • MD5: 28dea21db0ea3badae8df0ce70d75bd1
  • BLAKE2b-256: 5c606e44430291341dec93aad48b4eb2163c807ce17ff1b97f64ce78bc43630f

See more details on using hashes here.

File details

Details for the file mmv_im2im-0.5.2-py2.py3-none-any.whl.

File metadata

  • Download URL: mmv_im2im-0.5.2-py2.py3-none-any.whl
  • Upload date:
  • Size: 76.6 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.9.19

File hashes

Hashes for mmv_im2im-0.5.2-py2.py3-none-any.whl

  • SHA256: 1626b60988b40faf98f51de720c8beee320d9af4b15e224ecb6dda1f58d15021
  • MD5: 7f8fce2b817ed7ee29cd065a9c935423
  • BLAKE2b-256: 1c1fda7c6706d79fcbfd906ddba306d42759badfa6d95ade995badece5634f20

See more details on using hashes here.
