Contrastive Learning from Medical Images and Text.
Project description
MedCLIP
Zifeng Wang, Zhenbang Wu, Dinesh Agarwal, and Jimeng Sun. 2022. MedCLIP: Contrastive Learning from Unpaired Medical Images and Texts. EMNLP 2022.
Download MedCLIP
Before installing MedCLIP, install a PyTorch build that works on your machine (ideally with GPU support) from https://pytorch.org/get-started/locally/.
Then install MedCLIP with
pip install git+https://github.com/RyanWangZf/MedCLIP.git
# or
pip install medclip
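Before loading any MedCLIP weights, a quick sanity check (plain PyTorch, not MedCLIP-specific) confirms that the installed torch build can actually see a GPU:

import torch

# Report the installed torch version and whether a CUDA device is visible.
print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))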
Three lines to get pretrained MedCLIP models
from medclip import MedCLIPModel, MedCLIPVisionModelViT, MedCLIPVisionModel
# load MedCLIP-ResNet50
model = MedCLIPModel(vision_cls=MedCLIPVisionModel)
model.from_pretrained()
# load MedCLIP-ViT
model = MedCLIPModel(vision_cls=MedCLIPVisionModelViT)
model.from_pretrained()
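After loading, a standard PyTorch follow-up (plain nn.Module calls, not part of the MedCLIP API) can switch the model to inference mode and confirm the weights are in place:

model.eval()  # disable dropout / batch-norm updates for inference
n_params = sum(p.numel() for p in model.parameters())
print(f"loaded {n_params / 1e6:.1f}M parameters")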
As simple as using CLIP
from medclip import MedCLIPModel, MedCLIPVisionModelViT
from medclip import MedCLIPProcessor
from PIL import Image
# prepare for the demo image and texts
processor = MedCLIPProcessor()
image = Image.open('./example_data/view1_frontal.jpg')
inputs = processor(
    text=["lungs remain severely hyperinflated with upper lobe emphysema",
          "opacity left costophrenic angle is new since prior exam ___ represent some loculated fluid cavitation unlikely"],
    images=image,
    return_tensors="pt",
    padding=True
)
# pass to MedCLIP model
model = MedCLIPModel(vision_cls=MedCLIPVisionModelViT)
model.from_pretrained()
model.cuda()
outputs = model(**inputs)
print(outputs.keys())
# dict_keys(['img_embeds', 'text_embeds', 'logits', 'loss_value', 'logits_per_text'])
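The returned logits can be turned into relative image-text match scores. A minimal sketch, assuming outputs['logits'] holds image-to-text similarities of shape (num_images, num_texts), analogous to CLIP's logits_per_image:

import torch

with torch.no_grad():
    probs = outputs['logits'].softmax(dim=-1)  # one row per image, one column per candidate text
print(probs)  # a higher value means a better match between the image and that report sentence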
MedCLIP for Prompt-based Classification
from medclip import MedCLIPModel, MedCLIPVisionModelViT
from medclip import MedCLIPProcessor
from medclip import PromptClassifier
processor = MedCLIPProcessor()
model = MedCLIPModel(vision_cls=MedCLIPVisionModelViT)
model.from_pretrained()
clf = PromptClassifier(model, ensemble=True)
clf.cuda()
# prepare input image
from PIL import Image
image = Image.open('./example_data/view1_frontal.jpg')
inputs = processor(images=image, return_tensors="pt")
# prepare input prompt texts
from medclip.prompts import generate_chexpert_class_prompts, process_class_prompts
cls_prompts = process_class_prompts(generate_chexpert_class_prompts(n=10))
inputs['prompt_inputs'] = cls_prompts
# make classification
output = clf(**inputs)
print(output)
# {'logits': tensor([[0.5154, 0.4119, 0.2831, 0.2441, 0.4588]], device='cuda:0',
# grad_fn=<StackBackward0>), 'class_names': ['Atelectasis', 'Cardiomegaly', 'Consolidation', 'Edema', 'Pleural Effusion']}
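To reduce the prompt-classification output to a single label, a short follow-up sketch, assuming the columns of output['logits'] are ordered to match output['class_names']:

pred_idx = output['logits'].argmax(dim=-1).item()  # index of the highest-scoring class
print("predicted class:", output['class_names'][pred_idx])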
Download files
Download the file for your platform.
Source Distribution
MedCLIP-0.0.3.tar.gz (24.9 kB)
Built Distribution
MedCLIP-0.0.3-py3-none-any.whl (27.9 kB)
File details
Details for the file MedCLIP-0.0.3.tar.gz.
File metadata
- Download URL: MedCLIP-0.0.3.tar.gz
- Upload date:
- Size: 24.9 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | 667fc4fcdc56bc72a65c736c0da82ff55cab303e8d13e0ad4e5134d9e056c414
MD5 | 3df7d9fbe126e44beb05c3495499beb5
BLAKE2b-256 | 69bef76049aee28f13bbf91f13016db68f29b49e5748ed099a4a98d4f5fdab32
File details
Details for the file MedCLIP-0.0.3-py3-none-any.whl.
File metadata
- Download URL: MedCLIP-0.0.3-py3-none-any.whl
- Upload date:
- Size: 27.9 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.8.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | e9e6c7ba64880c5397b727054a9115eb24dba852530211774cdc7fa930c170a8
MD5 | e554792a13acad024e16dc5a0a0b114e
BLAKE2b-256 | b735e0da1441c5782348b39a4be642068baaf99ff1f381e2b7f0722206300019
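To verify that a downloaded distribution matches the digests listed above, a small sketch using Python's hashlib (the local file name is an assumption about where the wheel was saved):

import hashlib

def sha256_of(path, chunk_size=1 << 20):
    # Stream the file in chunks so large archives need not fit in memory.
    digest = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

expected = "e9e6c7ba64880c5397b727054a9115eb24dba852530211774cdc7fa930c170a8"
print(sha256_of("MedCLIP-0.0.3-py3-none-any.whl") == expected)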