A napari plugin for interactive SAM 3 segmentation with text, point, box, and exemplar prompts, refinement, and 3D/video-like propagation.
napari-sam3-assistant
napari-sam3-assistant is a napari widget for running Meta SAM 3 segmentation workflows from napari image, points, shapes, labels, and text inputs.
The plugin focuses on task-based segmentation workflows:
- 2D segmentation with text, box, point, and mask-style prompts
- 3D stack / video-like propagation from prompts on a selected slice or frame
- exemplar segmentation from Shapes ROI boxes
- text-based concept segmentation
- refinement with positive and negative point prompts
SAM 3 is not bundled with this plugin. Install the SAM 3 backend and download the SAM 3 model files separately from Meta's Hugging Face repository.
Status
This project is under active development. The current widget supports local SAM 3 model loading, napari prompt collection, background execution, and writing results back to napari layers.
Requirements
- Python >=3.12
- napari >=0.5
- SAM 3 Python package importable as sam3
- PyTorch and torchvision installed for your platform
- A local SAM 3 checkpoint directory containing:
    - config.json
    - processor_config.json
    - one weight file such as sam3.pt or model.safetensors

GPU use requires a PyTorch / torchvision / SAM 3 stack that is compatible with your GPU architecture. If CUDA kernels are not available for the device, select CPU in the widget.
Setup
Setup has three parts:
- Install the SAM 3 backend.
- Download the SAM 3 model files from Hugging Face and configure the model path.
- Install this napari plugin.
1. Install SAM 3
Create and activate an environment:
conda create -n napari-sam3 python=3.12
conda activate napari-sam3
Install PyTorch and torchvision for your platform. Use the official PyTorch selector for the current command:
pip install torch torchvision
Install SAM 3:
git clone https://github.com/facebookresearch/sam3.git
cd sam3
pip install -e .
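To confirm that the backend imports correctly before moving on, a quick check such as the following can help (a minimal sanity check, not part of the plugin):

python - <<'PY'
import torch, torchvision, sam3
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("sam3:", sam3)
PY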
2. Download SAM 3 Weights
This plugin does not ship with SAM 3 weights or model configuration files. Download them from the official Hugging Face repository:
https://huggingface.co/facebook/sam3
The repository is gated.
- Sign in to your Hugging Face account.
- Open the facebook/sam3 model page.
- Request or accept access to the repository.
- Once access is approved, open the Files and versions tab.
- Download the required model and configuration files directly from the website.
Expected model directory:
~/models/sam3/
    config.json
    processor_config.json
    sam3.pt
model.safetensors is also supported as a weight file. Depending on the Hugging Face layout, the directory may also contain tokenizer files such as tokenizer.json, vocab.json, and merges.txt.
Keep all downloaded model files together in one directory. In the plugin widget, click Browse, select that directory, then click Validate.
Any local folder is acceptable, for example:
~/Projects/napari/sam3/model
The upstream SAM 3 source repository does not ship a model/ folder by default; here, sam3/model is a user-created local directory holding the downloaded SAM 3 weights and configuration files. The only requirement is that all required SAM 3 files stay together in one readable directory and that you select that directory in the widget.
The widget remembers the selected model directory.
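Before relying on the widget's Validate button, you can sanity-check the directory contents with a short script like the one below (a sketch only; adjust model_dir to your own path):

from pathlib import Path

# Adjust to the directory holding the downloaded Hugging Face files.
model_dir = Path.home() / "models" / "sam3"

required = ["config.json", "processor_config.json"]
weights = ["sam3.pt", "model.safetensors"]

missing = [name for name in required if not (model_dir / name).is_file()]
has_weight = any((model_dir / name).is_file() for name in weights)

print("missing config files:", missing or "none")
print("weight file found:", has_weight)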
3. Install napari-sam3-assistant
Install this plugin:
git clone https://github.com/wulinteousa2-hash/napari-sam3-assistant
cd napari-sam3-assistant
pip install -e .
Start napari:
napari
Open the widget from:
Plugins > SAM3 Assistant
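The widget can also be docked from a script. The plugin name string below is assumed to match the package name; if it differs, open the widget from the Plugins menu instead:

import napari

viewer = napari.Viewer()
# Dock the SAM3 Assistant widget (plugin name assumed to equal the package name).
viewer.window.add_plugin_dock_widget("napari-sam3-assistant")
napari.run()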
Basic Workflow
- Open an image in napari.
- Open Plugins > SAM3 Assistant.
- Select the image in Napari Layers > Image.
- Select a task.
- Create a prompt layer if the task needs one.
- Click Run Preview.
- Inspect SAM3 preview labels, SAM3 preview masks, or SAM3 preview boxes.
- Click Save Result as Labels to keep the result.
Use Clear Preview to remove generated preview layers without deleting prompts or saved labels.
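If you prefer to script the first step, an image can be added from the napari console (the >_ button in the viewer), where viewer is predefined; the sample image below assumes scikit-image is installed, but any 2D image works:

from skimage import data  # small bundled sample image

viewer.add_image(data.coins(), name="coins")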
Tasks
Text Segmentation
Use text to segment all matching instances of a concept.
Workflow:
- Set Task to Text segmentation.
- Leave Prompt tool as Text only.
- Enter a short phrase, for example: cell, nucleus, myelin, or myelin sheath.
- Click Run Preview.
No prompt layer is needed for text segmentation. Create Prompt Layer is not required.
Text prompts usually work better as short noun phrases than instructions. Prefer myelin sheath over segment all the myelin rings.
If the result says objects=0, SAM 3 ran but did not return any masks above the threshold.
2D Segmentation With Boxes
Use boxes to identify the target object or concept.
Workflow:
- Set Task to 2D segmentation.
- Set Prompt tool to Box.
- Click Create Prompt Layer.
- Draw one or more rectangles in the SAM3 boxes Shapes layer.
- Click Run Preview.
The output appears in preview layers.
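Boxes can also be added to the prompt layer from the napari console. SAM3 boxes is the layer name created by the widget, and the coordinates below are illustrative only:

import numpy as np

boxes = viewer.layers["SAM3 boxes"]  # Shapes layer created by Create Prompt Layer
# One rectangle given by its four corners in (row, col) order; values are examples.
rect = np.array([[50, 60], [50, 160], [150, 160], [150, 60]])
boxes.add(rect, shape_type="rectangle")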
Exemplar Segmentation
Use example ROIs to segment similar objects.
Workflow:
- Set Task to Exemplar segmentation.
- Set Prompt tool to Box.
- Click Create Prompt Layer.
- Draw boxes around one or more example objects.
- Click Run Preview.
The local SAM 3 image API exposes visual exemplars through geometric box prompts. The plugin stores ROI metadata, but inference currently passes exemplar ROIs as SAM 3 visual box prompts.
Refinement With Positive and Negative Points
Use points to correct a result.
Workflow:
- Set Task to Refinement.
- Set Prompt tool to Points (+/-).
- Click Create Prompt Layer.
- Choose Positive and add points on regions to include.
- Choose Negative and add points on regions to exclude.
- Click Run Preview.
This is useful after a text, box, or exemplar preview is close but not correct.
Labels Mask Prompt
Use a napari Labels layer as a mask-style prompt.
Workflow:
- Set a task that supports mask prompts.
- Set Prompt tool to Labels mask.
- Click Create Prompt Layer.
- Paint non-zero pixels in SAM3 mask prompt.
- Click Run Preview.
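The mask can also be seeded from the napari console instead of painting by hand. SAM3 mask prompt is the Labels layer created by the widget, and the region below is illustrative only:

import numpy as np

mask_layer = viewer.layers["SAM3 mask prompt"]  # Labels layer created by Create Prompt Layer
seed = np.array(mask_layer.data, copy=True)
seed[100:140, 80:130] = 1  # mark a rough region with a non-zero label; values are examples
mask_layer.data = seed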
3D Stack / Video Propagation
Treat a stack as video-like data and propagate a prompt through frames or slices.
Workflow:
- Open a stack in napari.
- Set Task to 3D/video propagation.
- Select the target frame or slice in napari.
- Create a prompt layer and add prompts on that frame.
- Choose the propagation direction: both, forward, or backward.
- Click Run Preview or Propagate Stack/Video.
Preview output is written to:
SAM3 propagated preview labels
Saved output is written to:
SAM3 saved propagated labels
The current SAM 3 video predictor backend is CUDA-only. CPU mode is supported for 2D/image workflows, not 3D/video propagation.
Channel Axis
The Channel axis setting tells the plugin which data axis holds the color/channel dimension.
Default: -1
Use -1 for grayscale images and ordinary RGB/RGBA images. The plugin auto-detects trailing RGB/RGBA axes of size 3 or 4.
Examples:
(H, W) -> -1
(H, W, 3) -> -1
(H, W, 4) -> -1
(Z, H, W) -> -1
(C, H, W) -> 0
(Z, C, H, W) -> 1
(T, C, H, W) -> 1
(Z, H, W, C) -> 3
Leave it at -1 unless your image has an explicit multi-channel microscopy dimension.
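As a concrete illustration, the value simply names the axis position in the array shape; the arrays below are hypothetical stand-ins:

import numpy as np

stack = np.zeros((10, 2, 512, 512), dtype=np.float32)  # hypothetical (Z, C, H, W) stack
print(stack.shape)  # (10, 2, 512, 512) -> set Channel axis to 1

rgb = np.zeros((512, 512, 3), dtype=np.uint8)  # ordinary RGB image
print(rgb.shape)  # (512, 512, 3) -> leave Channel axis at -1 (auto-detected)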
Output Layers
Preview layers:
SAM3 preview labels
SAM3 preview masks
SAM3 preview boxes
SAM3 propagated preview labels
Saved layers:
SAM3 saved labels
SAM3 saved propagated labels
Buttons:
- Validate: check the selected SAM 3 model directory.
- Load Image Model: load the 2D/image model.
- Load 3D/Video Model: load the video propagation model.
- Run Preview: run the selected task.
- Clear Preview: remove generated preview layers only.
- Save Result as Labels: copy preview labels into saved labels.
- Cancel: stop a running worker.
- Unload: unload the SAM3 model from memory.
ARM64, CUDA, and DGX Spark
For ARM64 systems such as NVIDIA DGX Spark / GB10:
- Use Python 3.12 or newer.
- Keep the NVIDIA driver and CUDA stack current.
- Install a PyTorch/torchvision build that supports your GPU architecture.
- Use CPU mode for reliable 2D execution if CUDA kernels are unavailable.
- Use explicit CUDA only when testing a compatible GPU build.
Check PyTorch GPU support:
python - <<'PY'
import torch
print("torch:", torch.__version__)
print("torch cuda runtime:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
    print("capability:", torch.cuda.get_device_capability(0))
    print("arch list:", torch.cuda.get_arch_list())
PY
GB10 reports compute capability 12.1 (sm_121). If your PyTorch build does not include compatible kernels, you may see:
CUDA error: no kernel image is available for execution on the device
nvrtc: error: invalid value for --gpu-architecture
The plugin does not compile PyTorch, torchvision, or SAM 3 CUDA extensions.
Troubleshooting
No mask appears and status says objects=0
SAM 3 returned no detections above the threshold. Try:
- a shorter text prompt
- a more common concept phrase
- box or exemplar prompts
- CPU mode if the CUDA path is unstable
CUDA kernel image error
Error:
CUDA error: no kernel image is available for execution on the device
The GPU is visible, but at least one required CUDA kernel was not built for the device architecture. Use CPU, or install compatible PyTorch/torchvision/SAM 3 builds.
Invalid GPU architecture
Error:
nvrtc: error: invalid value for --gpu-architecture
The installed PyTorch CUDA runtime cannot compile for the detected GPU. Use CPU or install a build that supports the GPU.
BFloat16 conversion errors
The plugin converts SAM 3 bfloat16 outputs to float32 before writing NumPy-backed napari layers. If you still see dtype errors, restart napari after changing the device mode and run again.
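The same conversion pattern applies if you post-process raw SAM 3 tensors yourself, since NumPy has no bfloat16 dtype; the tensor below is a stand-in for a model output:

import torch

mask = torch.zeros(4, 4, dtype=torch.bfloat16)  # stand-in for a SAM 3 output tensor
mask_np = mask.to(torch.float32).cpu().numpy()  # cast before converting to NumPy
print(mask_np.dtype)  # float32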
Text prompt creates no layer
That is expected. Text segmentation does not need a prompt layer. Enter text and click Run Preview.
Development
Install in editable mode:
pip install -e .
Run tests:
PYTHONPATH=src pytest -q
The test suite covers coordinate mapping, prompt collection, adapter utility behavior, and static widget UI checks. It does not download SAM 3 weights.
References
- SAM 3 repository: https://github.com/facebookresearch/sam3
- SAM 3 model files: https://huggingface.co/facebook/sam3
- PyTorch installation selector: https://pytorch.org/get-started/locally/
License
MIT. See the project license file.