A Microsoft Health Futures package for working with multimodal health data.
HI-ML Multimodal Toolbox
This toolbox provides models for multimodal health data. The code is available on GitHub and Hugging Face 🤗.
Getting started
The best way to get started is by running the phrase grounding notebook and the examples. All the dependencies will be installed upon execution, so Python 3.9 and Jupyter are the only requirements.
The notebook can also be run on Binder, without the need to download any code or install any libraries.
Installation
The latest version can be installed using pip:
pip install --upgrade hi-ml-multimodal
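After installing, a quick sanity check can be run from Python. This is a minimal sketch, assuming the distribution installs an importable package named health_multimodal (worth verifying against the API documentation):

from importlib.metadata import version

import health_multimodal  # noqa: F401 -- raises ImportError if the install is broken

print(version("hi-ml-multimodal"))  # prints the installed version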
Development
For development, it is recommended to clone the repository and set up the environment using conda:
git clone https://github.com/microsoft/hi-ml.git
cd hi-ml/hi-ml-multimodal  # the package lives in a subfolder of the hi-ml repository
make env
This will create a conda environment named multimodal and install all the dependencies needed to run and test the package.
You can visit the API documentation for a deeper understanding of our tools.
Examples
For zero-shot classification of images using text prompts, please refer to the example script, which utilises a small subset of the Open-Indiana CXR dataset for pneumonia detection in chest X-ray images; a sketch of the workflow follows below. Please note that the examples and models are not intended for deployed use cases (commercial or otherwise), which are currently out of scope.
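The sketch below illustrates what such a zero-shot pipeline looks like with the toolbox's inference engines. The module, enum, and method names (get_bert_inference, get_image_inference, ImageTextInferenceEngine, get_similarity_score_from_raw_data) reflect our reading of the 0.2 API and the phrase grounding notebook; treat them as assumptions to check against the API documentation, and note that the image path is hypothetical:

from pathlib import Path

from health_multimodal.image import get_image_inference
from health_multimodal.image.utils import ImageModelType
from health_multimodal.text import get_bert_inference
from health_multimodal.text.utils import BertEncoderType
from health_multimodal.vlp import ImageTextInferenceEngine

# Build the BioViL text and image inference engines.
text_inference = get_bert_inference(BertEncoderType.CXR_BERT)
image_inference = get_image_inference(ImageModelType.BIOVIL)

engine = ImageTextInferenceEngine(
    image_inference_engine=image_inference,
    text_inference_engine=text_inference,
)

# Rank candidate text prompts by similarity with the image;
# the highest-scoring prompt is the zero-shot prediction.
image_path = Path("chest_xray.jpg")  # hypothetical input image
for prompt in ("Findings suggesting pneumonia", "No evidence of pneumonia"):
    score = engine.get_similarity_score_from_raw_data(image_path, prompt)
    print(f"{prompt}: {score:.3f}")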
Hugging Face 🤗
While the GitHub repository provides examples and pipelines to use our models, the weights and model cards are hosted on Hugging Face 🤗.
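For instance, the text encoder can be loaded directly from the Hub with transformers; a minimal sketch following the BiomedVLP-CXR-BERT-specialized model card (trust_remote_code is needed because the model class is defined in the model repository rather than in the transformers library):

from transformers import AutoModel, AutoTokenizer

# CXR-BERT-specialized checkpoint hosted on the Hugging Face Hub.
model_name = "microsoft/BiomedVLP-CXR-BERT-specialized"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True)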
Credit
If you use our code or models in your research, please cite our recent ECCV and CVPR papers:
Boecking, B., Usuyama, N. et al. (2022). Making the Most of Text Semantics to Improve Biomedical Vision–Language Processing. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13696. Springer, Cham. https://doi.org/10.1007/978-3-031-20059-5_1
Bannur, S., Hyland, S., et al. (2023). Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing. In: CVPR 2023.
BibTeX
@InProceedings{10.1007/978-3-031-20059-5_1,
  author="Boecking, Benedikt and Usuyama, Naoto and Bannur, Shruthi and Castro, Daniel C. and Schwaighofer, Anton and Hyland, Stephanie and Wetscherek, Maria and Naumann, Tristan and Nori, Aditya and Alvarez-Valle, Javier and Poon, Hoifung and Oktay, Ozan",
  editor="Avidan, Shai and Brostow, Gabriel and Ciss{\'e}, Moustapha and Farinella, Giovanni Maria and Hassner, Tal",
  title="Making the Most of Text Semantics to Improve Biomedical Vision--Language Processing",
  booktitle="Computer Vision -- ECCV 2022",
  year="2022",
  publisher="Springer Nature Switzerland",
  address="Cham",
  pages="1--21",
  isbn="978-3-031-20059-5"
}

@inproceedings{bannur2023learning,
  title={Learning to Exploit Temporal Structure for Biomedical Vision{\textendash}Language Processing},
  author={Shruthi Bannur and Stephanie Hyland and Qianchu Liu and Fernando P\'{e}rez-Garc\'{i}a and Maximilian Ilse and Daniel C. Castro and Benedikt Boecking and Harshita Sharma and Kenza Bouzid and Anja Thieme and Anton Schwaighofer and Maria Wetscherek and Matthew P. Lungren and Aditya Nori and Javier Alvarez-Valle and Ozan Oktay},
  booktitle={Conference on Computer Vision and Pattern Recognition 2023},
  year={2023},
  url={https://openreview.net/forum?id=5jScn5xsbo}
}