Project description


title: LISA On Cuda
emoji: 📊
colorFrom: yellow
colorTo: red
sdk: docker
pinned: false

Run Jupyter on the remote server with port forwarding to localhost

  1. check out the repo and install a venv with Jupyter
  2. set up port forwarding to localhost with the private key: ssh -i ~/.ssh/id_ecdsa_saturncloud trincuz@ssh.community.saturnenterprise.io -L 8889:localhost:8889 -N -f
  3. start the jupyter-lab server
  4. connect to the page on localhost
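
A quick way to confirm the tunnel is actually up before opening the browser is to probe the forwarded port. This is a minimal Python sketch (port 8889 matches the ssh command above; adjust it if you forward a different port):

import socket

def tunnel_is_up(host="localhost", port=8889, timeout=2.0):
    # Returns True if something is listening on the forwarded port.
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print("jupyter tunnel reachable:", tunnel_is_up())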

Commands to work on SaturnCloud after cloning the repo and running git lfs install

cd ~/workspace/lisa-on-cuda/
rm -rf lisa_venv
python3 -m venv lisa_venv
ln -s lisa_venv/ venv
source venv/bin/activate
pip --version
which python
python -m pip install pip wheel --upgrade
python -m pip install pytest pytest-cov jupyterlab
python -m pip install -r requirements.txt
nohup jupyter-lab &
tail -F nohup.out

JupyterLab Howto

To run the test.ipynb notebook, you should have already completed the setup above and meet the hardware requirements below.

Hardware requirements

  • an NVIDIA GPU with 10 or 12 GB of memory (a T4 should suffice)
  • at least 16 GB of system RAM
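
A quick way to check whether the current machine meets these requirements, as a minimal sketch (it assumes a CUDA build of torch, which requirements.txt provides, and the RAM check is Linux-only):

import os
import torch

# GPU name and VRAM (requires an NVIDIA GPU and a CUDA-enabled torch build)
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA-capable GPU detected")

# Total system RAM (Linux-only sysconf keys)
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1024**3
print(f"System RAM: {ram_gb:.1f} GB")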


LISA: Reasoning Segmentation via Large Language Model

LISA: Large Language Instructed Segmentation Assistant

Input: "Who was the president of the US in this image? Please output segmentation mask and explain the reason."
Output: "Sure, the segmentation result is [SEG]. The President of the United States in the image is President Obama."

Input: "Who was the president of the US in this image? Please output segmentation mask and explain why."
Output: "Sure, [SEG]. In the image, the President of the United States is President Trump."

Input: "What can make the woman stand higher? Please output segmentation mask and explain why."
Output: "Sure, [SEG]. The woman is standing higher by using a ladder..."

Input: "Can you segment the camera lens that is more suitable for photographing nearby objects in this image?"
Output: "Sure, [SEG]."

Input: "Can you segment the unusual part in this image and explain why."
Output: "Sure, it is [SEG]. In the image, the unusual part is the dog wearing a reindeer antler headband..."

Input: "Where to wash hands in this image? Please output segmentation mask."
Output: "Sure, [SEG]."

Input: "Can you segment the founder of Alibaba in this image and explain why?"
Output: "Sure, it is [SEG]. In the image, the man wearing a red tie and sitting on one of the chairs is Jack Ma, the co-founder of Alibaba Group..."

Input: "Please segment Lisa in this figure."
Output: "Sure, [SEG]."

News

LISA: Reasoning Segmentation via Large Language Model [Paper]
Xin Lai, Zhuotao Tian, Yukang Chen, Yanwei Li, Yuhui Yuan, Shu Liu, Jiaya Jia

Abstract

In this work, we propose a new segmentation task --- reasoning segmentation. The task is designed to output a segmentation mask given a complex and implicit query text. We establish a benchmark comprising over one thousand image-instruction pairs, incorporating intricate reasoning and world knowledge for evaluation purposes. Finally, we present LISA: Large-language Instructed Segmentation Assistant, which inherits the language generation capabilities of the multi-modal Large Language Model (LLM) while also possessing the ability to produce segmentation masks. For more details, please refer to the paper.

Highlights

LISA unlocks new segmentation capabilities of multi-modal LLMs and can handle cases involving:

  1. complex reasoning;
  2. world knowledge;
  3. explanatory answers;
  4. multi-turn conversation.

LISA also demonstrates robust zero-shot capability when trained exclusively on reasoning-free datasets. In addition, fine-tuning the model with merely 239 reasoning segmentation image-instruction pairs results in further performance enhancement.

Experimental results

Installation

pip install -r requirements.txt
pip install flash-attn --no-build-isolation
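
A short import check can confirm that both installs picked up CUDA correctly (a minimal sketch, assuming a CUDA build of torch is already installed):

import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
try:
    import flash_attn  # module provided by the flash-attn package
    print("flash-attn", flash_attn.__version__)
except ImportError as exc:
    print("flash-attn not importable:", exc)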

Training

Training Data Preparation

The training data consists of 4 types:

  1. Semantic segmentation datasets: ADE20K, COCO-Stuff, Mapillary, PACO-LVIS, PASCAL-Part, COCO Images

    Note: For COCO-Stuff, we use the annotation file stuffthingmaps_trainval2017.zip. We only use the PACO-LVIS part in PACO. COCO Images should be put into the dataset/coco/ directory.

  2. Referring segmentation datasets: refCOCO, refCOCO+, refCOCOg, refCLEF (saiapr_tc-12)

    Note: the original links to the refCOCO-series data are down, and we have updated them with new ones. If the download speed is very slow or unstable, we also provide a OneDrive link. You must still follow the rules that the original datasets require.

  3. Visual Question Answering dataset: LLaVA-Instruct-150k

  4. Reasoning segmentation dataset: ReasonSeg

Download them from the above links, and organize them as follows.

├── dataset
│   ├── ade20k
│   │   ├── annotations
│   │   └── images
│   ├── coco
│   │   └── train2017
│   │       ├── 000000000009.jpg
│   │       └── ...
│   ├── cocostuff
│   │   └── train2017
│   │       ├── 000000000009.png
│   │       └── ...
│   ├── llava_dataset
│   │   └── llava_instruct_150k.json
│   ├── mapillary
│   │   ├── config_v2.0.json
│   │   ├── testing
│   │   ├── training
│   │   └── validation
│   ├── reason_seg
│   │   └── ReasonSeg
│   │       ├── train
│   │       ├── val
│   │       └── explanatory
│   ├── refer_seg
│   │   ├── images
│   │   │   ├── saiapr_tc-12
│   │   │   └── mscoco
│   │   │       └── images
│   │   │           └── train2014
│   │   ├── refclef
│   │   ├── refcoco
│   │   ├── refcoco+
│   │   └── refcocog
│   └── vlpart
│       ├── paco
│       │   └── annotations
│       └── pascal_part
│           ├── train.json
│           └── VOCdevkit
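
Before launching training, it can save time to verify this layout is actually in place. Below is a minimal sketch that only checks that the main directories from the tree above exist under ./dataset (it does not validate file contents):

from pathlib import Path

# Directories expected under ./dataset (taken from the tree above)
EXPECTED = [
    "ade20k/annotations", "ade20k/images",
    "coco/train2017", "cocostuff/train2017",
    "llava_dataset", "mapillary",
    "reason_seg/ReasonSeg",
    "refer_seg/images/saiapr_tc-12", "refer_seg/images/mscoco/images/train2014",
    "vlpart/paco/annotations", "vlpart/pascal_part/VOCdevkit",
]

def check_layout(root="./dataset"):
    root = Path(root)
    for rel in EXPECTED:
        status = "ok     " if (root / rel).is_dir() else "MISSING"
        print(status, root / rel)

check_layout()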

Pre-trained weights

LLaVA

To train LISA-7B or 13B, you need to follow the instructions to merge the LLaVA delta weights. Typically, we use the final weights LLaVA-Lightning-7B-v1-1 and LLaVA-13B-v1-1 merged from liuhaotian/LLaVA-Lightning-7B-delta-v1-1 and liuhaotian/LLaVA-13b-delta-v1-1, respectively. For Llama2, we can directly use the LLaVA full weights liuhaotian/llava-llama-2-13b-chat-lightning-preview.

SAM ViT-H weights

Download SAM ViT-H pre-trained weights from the link.

Training

deepspeed --master_port=24999 train_ds.py \
  --version="PATH_TO_LLaVA" \
  --dataset_dir='./dataset' \
  --vision_pretrained="PATH_TO_SAM" \
  --dataset="sem_seg||refer_seg||vqa||reason_seg" \
  --sample_rates="9,3,3,1" \
  --exp_name="lisa-7b"
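
The --sample_rates weights line up with the dataset types listed in --dataset, so "9,3,3,1" draws training batches roughly in the proportion 9:3:3:1 across sem_seg, refer_seg, vqa, and reason_seg. The tiny sketch below just normalizes those weights into probabilities (an illustration, not code from train_ds.py):

# Illustration only: how the sample_rates weights map to sampling probabilities.
datasets = "sem_seg||refer_seg||vqa||reason_seg".split("||")
weights = [float(w) for w in "9,3,3,1".split(",")]
for name, w in zip(datasets, weights):
    print(f"{name:10s} {w / sum(weights):.2%}")  # ~56%, ~19%, ~19%, ~6%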

When training is finished, get the full model weights with:

cd ./runs/lisa-7b/ckpt_model && python zero_to_fp32.py . ../pytorch_model.bin

Merge LoRA Weight

Merge the LoRA weights of pytorch_model.bin and save the resulting model to your desired path in the Hugging Face format:

CUDA_VISIBLE_DEVICES="" python merge_lora_weights_and_save_hf_model.py \
  --version="PATH_TO_LLaVA" \
  --weight="PATH_TO_pytorch_model.bin" \
  --save_path="PATH_TO_SAVED_MODEL"

For example:

CUDA_VISIBLE_DEVICES="" python3 merge_lora_weights_and_save_hf_model.py \
  --version="./LLaVA/LLaVA-Lightning-7B-v1-1" \
  --weight="lisa-7b/pytorch_model.bin" \
  --save_path="./LISA-7B"

Validation

deepspeed --master_port=24999 train_ds.py \
  --version="PATH_TO_LISA_HF_Model_Directory" \
  --dataset_dir='./dataset' \
  --vision_pretrained="PATH_TO_SAM" \
  --exp_name="lisa-7b" \
  --eval_only

Note: the v1 model is trained using both the train and val sets, so please use the v0 model to reproduce the validation results. (To use the v0 models, first check out the legacy version of the repo with git checkout 0e26916.)

Inference

To chat with LISA-13B-llama2-v1 or LISA-13B-llama2-v1-explanatory, run the commands below. Note that chat.py currently does not support the v0 models (i.e., LISA-13B-llama2-v0 and LISA-13B-llama2-v0-explanatory); to use them, first check out the legacy version of the repo with git checkout 0e26916.

CUDA_VISIBLE_DEVICES=0 python chat.py --version='xinlai/LISA-13B-llama2-v1'
CUDA_VISIBLE_DEVICES=0 python chat.py --version='xinlai/LISA-13B-llama2-v1-explanatory'

To use bf16 or fp16 data type for inference:

CUDA_VISIBLE_DEVICES=0 python chat.py --version='xinlai/LISA-13B-llama2-v1' --precision='bf16'

To use 8-bit or 4-bit quantization for inference (this enables running the 13B model on a single 24G or 12G GPU at some cost in generation quality):

CUDA_VISIBLE_DEVICES=0 python chat.py --version='xinlai/LISA-13B-llama2-v1' --precision='fp16' --load_in_8bit
CUDA_VISIBLE_DEVICES=0 python chat.py --version='xinlai/LISA-13B-llama2-v1' --precision='fp16' --load_in_4bit

Hint: for the 13B model, 16-bit inference consumes 30G of VRAM on a single GPU, 8-bit inference consumes 16G, and 4-bit inference consumes 9G.
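
Those figures are consistent with a back-of-the-envelope estimate for the 13B language model weights alone; the vision encoder, activations, and KV cache account for the remaining few GB. A quick sketch of the arithmetic:

# Rough VRAM needed just to hold 13B parameters at each precision.
params = 13e9
for label, bytes_per_param in [("16-bit", 2), ("8-bit", 1), ("4-bit", 0.5)]:
    gib = params * bytes_per_param / 1024**3
    print(f"{label}: ~{gib:.0f} GB of weights")  # ~24, ~12, ~6 GB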

After that, input the text prompt and then the image path. For example:

- Please input your prompt: Where can the driver see the car speed in this image? Please output segmentation mask.
- Please input the image path: imgs/example1.jpg

- Please input your prompt: Can you segment the food that tastes spicy and hot?
- Please input the image path: imgs/example2.jpg

The results should look like this:

Deployment

CUDA_VISIBLE_DEVICES=0 python app.py --version='xinlai/LISA-13B-llama2-v1' --load_in_4bit
CUDA_VISIBLE_DEVICES=0 python app.py --version='xinlai/LISA-13B-llama2-v1-explanatory' --load_in_4bit

By default, we use 4-bit quantization. Feel free to delete the --load_in_4bit argument for 16-bit inference or replace it with the --load_in_8bit argument for 8-bit inference.

Dataset

In ReasonSeg, we have collected 1218 images (239 train, 200 val, and 779 test). The training and validation sets can be downloaded from this link.

Each image is provided with an annotation JSON file:

image_1.jpg, image_1.json
image_2.jpg, image_2.json
...
image_n.jpg, image_n.json

Important keys contained in JSON files:

- "text": text instructions.
- "is_sentence": whether the text instructions are long sentences.
- "shapes": target polygons.

The elements of "shapes" fall into two categories: "target" and "ignore". The former is required for evaluation, while the latter marks ambiguous regions and is therefore disregarded during evaluation.

We provide a script that demonstrates how to process the annotations:

python3 utils/data_processing.py

Besides, we leveraged GPT-3.5 to rephrase the instructions, so images in the training set may have more than one instruction (but fewer than six) in the "text" field. During training, you may randomly select one of them as the text query to obtain a better model.
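
For a quick look at the annotation files outside that script, here is a minimal sketch of reading one of them; the per-shape field names ("label", "points") are assumptions here, so treat utils/data_processing.py as the authoritative reference:

import json
import random

def load_annotation(json_path):
    # Read one ReasonSeg annotation file (e.g. image_1.json).
    with open(json_path) as f:
        ann = json.load(f)

    # "text" may hold several rephrased instructions; pick one at random
    # as the text query, as suggested above.
    texts = ann["text"]
    query = random.choice(texts) if isinstance(texts, list) else texts

    # Assumed per-shape schema: a "label" of "target" or "ignore" plus a
    # "points" polygon; only "target" shapes are used for evaluation.
    targets = [s for s in ann["shapes"] if s.get("label") == "target"]
    ignored = [s for s in ann["shapes"] if s.get("label") == "ignore"]
    return query, targets, ignored

query, targets, ignored = load_annotation("image_1.json")
print(query, len(targets), len(ignored))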

Citation

If you find this project useful in your research, please consider citing:

@article{lai2023lisa,
  title={LISA: Reasoning Segmentation via Large Language Model},
  author={Lai, Xin and Tian, Zhuotao and Chen, Yukang and Li, Yanwei and Yuan, Yuhui and Liu, Shu and Jia, Jiaya},
  journal={arXiv preprint arXiv:2308.00692},
  year={2023}
}
@article{yang2023improved,
  title={An Improved Baseline for Reasoning Segmentation with Large Language Model},
  author={Yang, Senqiao and Qu, Tianyuan and Lai, Xin and Tian, Zhuotao and Peng, Bohao and Liu, Shu and Jia, Jiaya},
  journal={arXiv preprint arXiv:2312.17240},
  year={2023}
}

Acknowledgement
