
Exporting Segment Anything models to ONNX format


SAM Exporter - Now with Segment Anything 2!

Exporting Segment Anything, MobileSAM, and Segment Anything 2 into ONNX format for easy deployment.


Supported models:

  • Segment Anything 2 (Tiny, Small, Base, Large) - Note: Experimental. Only image input is supported for now.
  • Segment Anything (SAM ViT-B, SAM ViT-L, SAM ViT-H)
  • MobileSAM

Installation

Requirements:

  • Python 3.10+

From PyPI:

pip install torch==2.4.0 torchvision --index-url https://download.pytorch.org/whl/cpu
pip install samexporter

From source:

pip install torch==2.4.0 torchvision --index-url https://download.pytorch.org/whl/cpu
git clone https://github.com/vietanhdev/samexporter
cd samexporter
pip install -e .
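
If the installation succeeded, the package should import cleanly from Python. A minimal check (assumes torch was installed as above):

# Minimal post-install check: importing the package should not raise.
import samexporter

print("samexporter installed at:", samexporter.__file__)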

Convert Segment Anything and MobileSAM to ONNX

Download the original checkpoints and place them in the original_models folder:

original_models
   + sam_vit_b_01ec64.pth
   + sam_vit_h_4b8939.pth
   + sam_vit_l_0b3195.pth
   + mobile_sam.pt
   ...
  • Convert the SAM-H encoder to ONNX format:
python -m samexporter.export_encoder --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.encoder.onnx \
    --model-type vit_h \
    --quantize-out output_models/sam_vit_h_4b8939.encoder.quant.onnx \
    --use-preprocess
  • Convert the SAM-H decoder to ONNX format:
python -m samexporter.export_decoder --checkpoint original_models/sam_vit_h_4b8939.pth \
    --output output_models/sam_vit_h_4b8939.decoder.onnx \
    --model-type vit_h \
    --quantize-out output_models/sam_vit_h_4b8939.decoder.quant.onnx \
    --return-single-mask

Remove --return-single-mask if you want to return multiple masks.
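
Before wiring the exported files into an application, you can sanity-check them with onnxruntime. A minimal sketch, assuming the output paths from the export commands above (install onnxruntime separately if it is not already present):

# Print each exported model's input and output tensors.
# Paths follow the export commands above; adjust if you used other names.
import onnxruntime as ort

for path in [
    "output_models/sam_vit_h_4b8939.encoder.onnx",
    "output_models/sam_vit_h_4b8939.decoder.onnx",
]:
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path)
    for t in session.get_inputs():
        print("  input :", t.name, t.shape, t.type)
    for t in session.get_outputs():
        print("  output:", t.name, t.shape, t.type)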

  • Inference using the exported ONNX model:
python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/truck.jpg \
    --prompt images/truck_prompt.json \
    --output output_images/truck.png \
    --show

truck
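
The --prompt argument points to a JSON file describing the point/box prompts for the image. The exact schema is defined by the sample files under images/ in the repository; the snippet below only illustrates the idea with a hypothetical point prompt (the field names and coordinates are assumptions, not the verified format):

# Write a hypothetical prompt file. The keys ("type", "data", "label") and the
# coordinates are illustrative assumptions; check images/*.json for the real schema.
import json

prompt = [
    {"type": "point", "data": [575, 750], "label": 1},  # a foreground click at (x, y)
]

with open("images/my_prompt.json", "w") as f:
    json.dump(prompt, f, indent=2)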

python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/plants.png \
    --prompt images/plants_prompt1.json \
    --output output_images/plants_01.png \
    --show

plants_01

python -m samexporter.inference \
    --encoder_model output_models/sam_vit_h_4b8939.encoder.onnx \
    --decoder_model output_models/sam_vit_h_4b8939.decoder.onnx \
    --image images/plants.png \
    --prompt images/plants_prompt2.json \
    --output output_images/plants_02.png \
    --show

plants_02
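
Because the inference entry point is a plain CLI, repeated runs can be scripted. A small sketch that replays the two plants examples above via subprocess (same flags and file names as shown):

# Run the exported SAM ONNX models over several prompt files by calling the CLI.
import subprocess

prompts = ["images/plants_prompt1.json", "images/plants_prompt2.json"]
for idx, prompt in enumerate(prompts, start=1):
    subprocess.run(
        [
            "python", "-m", "samexporter.inference",
            "--encoder_model", "output_models/sam_vit_h_4b8939.encoder.onnx",
            "--decoder_model", "output_models/sam_vit_h_4b8939.decoder.onnx",
            "--image", "images/plants.png",
            "--prompt", prompt,
            "--output", f"output_images/plants_{idx:02d}.png",
        ],
        check=True,
    )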

Short options:

  • Convert all Segment Anything models to ONNX format:
bash convert_all_meta_sam.sh
  • Convert MobileSAM to ONNX format:
bash convert_mobile_sam.sh

Convert Segment Anything 2 to ONNX

cd original_models
bash download_sam2.sh

The models will be downloaded to the original_models folder:

original_models
    + sam2_hiera_tiny.pt
    + sam2_hiera_small.pt
    + sam2_hiera_base_plus.pt
    + sam2_hiera_large.pt
   ...
  • Install dependencies:
pip install git+https://github.com/facebookresearch/segment-anything-2.git
  • Convert all Segment Anything 2 models to ONNX format:
bash convert_all_meta_sam2.sh
  • Inference using the exported ONNX model (only image input is supported for now):
python -m samexporter.inference \
    --encoder_model output_models/sam2_hiera_tiny.encoder.onnx \
    --decoder_model output_models/sam2_hiera_tiny.decoder.onnx \
    --image images/plants.png \
    --prompt images/truck_prompt_2.json \
    --output output_images/plants_prompt_2_sam2.png \
    --sam_variant sam2 \
    --show

truck_sam2
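
The exported SAM 2 encoder and decoder are standard ONNX files, so they can also be loaded directly with onnxruntime. A minimal load check, assuming the output names used in the inference command above:

# Verify that the exported SAM 2 encoder/decoder load, and list available providers.
import onnxruntime as ort

print("available providers:", ort.get_available_providers())

for path in [
    "output_models/sam2_hiera_tiny.encoder.onnx",
    "output_models/sam2_hiera_tiny.decoder.onnx",
]:
    session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])
    print(path, "loads with", len(session.get_inputs()), "inputs and",
          len(session.get_outputs()), "outputs")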

Tips

  • Use "quantized" models for faster inference and smaller model size. However, the accuracy may be lower than the original models.
  • SAM-B is the most lightweight model, but it has the lowest accuracy. SAM-H is the most accurate model, but it has the largest model size. SAM-M is a good trade-off between accuracy and model size.
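
As a quick illustration of the size difference, you can compare the full-precision and quantized encoders produced by the --quantize-out option (a minimal sketch; run the SAM-H export commands above first):

# Compare on-disk sizes of the full-precision and quantized SAM-H encoders.
import os

for path in [
    "output_models/sam_vit_h_4b8939.encoder.onnx",
    "output_models/sam_vit_h_4b8939.encoder.quant.onnx",
]:
    size_mb = os.path.getsize(path) / (1024 * 1024)
    print(f"{path}: {size_mb:.1f} MB")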

AnyLabeling

This package was originally developed for the auto-labeling feature of the AnyLabeling project. However, you can use it for other purposes.


License

This project is licensed under the MIT License - see the LICENSE file for details.

