
zimg

Python bindings for Atlas’ image I/O + processing library (built with nanobind).

The main entry point is ZImg, a multidimensional image container designed for fast CPU array interop, region-of-interest (ROI) access, and common microscopy / scientific image formats.

Highlights

  • Multidimensional images: C, Z, Y, X, and T (time is represented as a list of arrays in Python).
  • Read image metadata without loading full pixel data (ZImg.readImgInfos, ZImg.readImgInfo).
  • Read full images or ROIs via ZImgRegion (end coordinate is exclusive).
  • Fast CPU array interop via ZImg.data / ZImg.to_arrays() (NumPy / Torch / TensorFlow / JAX / Array API / memoryview).
  • Zero-copy array wrapping when possible (CPU C-contiguous + layout="CZYX"), with copy_if_needed to enforce or relax this.
  • Sub-block / tile access (ZImg.readSubBlockLists, ZImg.readSubBlock) for formats that support it.
  • Streaming writers via Python-implemented providers (ZImg.writeImg + ZImgSliceProvider / ZImgBlockProvider).
  • Save images (ZImg.save) with optional ZImgWriteParameters (compression, etc.).
  • neuTube tracing / skeletonization workers exposed directly to Python: ZNeutubeSkeletonize, ZNeutubeAutoTrace, ZNeutubeBlockedAutoTrace, ZSwcSubtract, and TraceConfig.
  • Embedded neuTube JSON presets available directly in Python via zimg.neutube_json.

Installation

  • Requires Python >= 3.12.
  • Prebuilt wheels target the CPython Stable ABI (abi3), so a wheel built with a newer regular CPython still runs on any Python at or above the package's minimum supported version.
  • Requires NumPy (installed automatically by pip install zimg).
  • If a prebuilt wheel is available for your platform: pip install zimg.
  • If pip falls back to building from source, see “Building from source” below.

Quickstart

import zimg

img = zimg.ZImg("example.ome.tif")
arr0 = img.to_arrays("numpy")[0]  # t = 0
print(arr0.shape)  # (C, Z, Y, X)
print(arr0.dtype)

img.save("out.tif")

neuTube processing

The package exposes neuTube processing workers directly. They can be configured from Python with setter methods, or loaded/saved as task files via loadTask(...) / saveTask(...).

Available classes:

  • ZNeutubeSkeletonize: binary image to SWC skeletonization.
  • ZNeutubeAutoTrace: whole-volume auto tracing on a selected channel / timepoint.
  • ZNeutubeBlockedAutoTrace: blocked auto tracing for large datasets.
  • ZSwcSubtract: subtract one or more SWCs from an input SWC.
  • TraceConfig: algorithm-override struct for tracing score / behavior knobs.

ZImgSource input model

These tracing workers accept ZImgSource, not just a plain filename. This matches Atlas' native image-source model and supports:

  • single files
  • file lists
  • scene selection
  • ROI selection
  • explicit format hints

For simple single-file use, setInputImagePath(...) is still available.

import zimg

source = zimg.ZImgSource("signal.ome.tif")
source.scene = 0
source.region = zimg.ZImgRegion((0, 0, 0, 0, 0), (-1, -1, -1, 1, 1))

Embedded config presets

Current neuTube tracing / skeletonization presets are embedded directly into the Python package as zimg.neutube_json.

  • Parsed JSON presets are exposed as Python dict values such as zimg.neutube_json.SKELETONIZE, zimg.neutube_json.FLYEM_SKELETONIZE, zimg.neutube_json.TRACE_CONFIG, and zimg.neutube_json.TRACE_CONFIG_BIOCYTIN.
  • JSON text is also available through get_preset_text(...).
  • write_preset(...) can materialize a preset to disk when a file is needed.

The tracing/skeletonization workers accept either:

  • a config file path, via setSkeletonizeConfigPath(...) or setTraceConfigPath(...)
  • an inline Python dict, via setSkeletonizeConfig(...) or setTraceConfig(...)
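Because the parsed presets are plain dicts, a common pattern is to copy one and tweak a field before passing it to setTraceConfig(...). A minimal sketch with a stand-in dict (the "trace" / "minimalScore" key names are illustrative assumptions; the real keys come from the embedded JSON):

```python
import copy

# Stand-in for zimg.neutube_json.TRACE_CONFIG; key names are assumed here.
TRACE_CONFIG = {"trace": {"minimalScore": 0.3}}

cfg = copy.deepcopy(TRACE_CONFIG)    # deep copy so the shared preset stays intact
cfg["trace"]["minimalScore"] = 0.35  # then pass cfg to setTraceConfig(...)
```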

Skeletonize a binary image to SWC

import zimg

proc = zimg.ZNeutubeSkeletonize()
proc.setInputImageSource(zimg.ZImgSource("binary_mask.tif"))
proc.setSkeletonizeConfig(zimg.neutube_json.SKELETONIZE)
proc.setOutputSwcPath("binary_mask.swc")
proc.run()

print(proc.hasResult(), proc.outputSwcPath())

Auto trace a signal volume

import zimg

proc = zimg.ZNeutubeAutoTrace()
proc.setInputImageSource(zimg.ZImgSource("signal.ome.tif"))
proc.setSelectedChannelTime(0, 0)
proc.setTraceConfig(zimg.neutube_json.TRACE_CONFIG)
proc.setOutputSwcPath("autotrace.swc")

# Optional: override selected tracing parameters only when you already know
# which fields you want to change for your dataset.
#
# overrides = zimg.TraceConfig()
# overrides.minAutoScore = ...
# overrides.seedMethod = ...
# proc.setAlgoConfigOverrides(overrides)

proc.run()

Blocked auto trace for large datasets

ZNeutubeBlockedAutoTrace is intended for larger datasets where tracing should run block-by-block instead of materializing the whole volume at once. When possible, it derives metadata such as dataset shape and z-scale directly from the provided ZImgSource.

It also exposes advanced tiling controls such as block core size and halo, but those values are workload-specific. If you do not already have tuned settings for a dataset, start with the worker defaults instead of guessing block sizes in Python.

import zimg

proc = zimg.ZNeutubeBlockedAutoTrace()
proc.setInputImageSource(zimg.ZImgSource("large_signal.ome.zarr"))
proc.setSelectedChannelTime(0, 0)
proc.setSignalDownsampleRatio([2, 2, 1])
proc.setTraceConfig(zimg.neutube_json.TRACE_CONFIG)
proc.setOutputSwcPath("blocked_autotrace.swc")
proc.setOutputSessionDir("blocked_autotrace_session")
proc.run()
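As a rough sanity check on the [2, 2, 1] ratio above: downsampling shrinks the effective traced extents per axis. The full-resolution shape and the x/y/z axis order below are illustrative assumptions, not values read from the dataset:

```python
# Hypothetical full-resolution extents and the ratio from the example above.
shape = (4096, 4096, 512)  # x, y, z (axis order assumed for illustration)
ratio = (2, 2, 1)          # as passed to setSignalDownsampleRatio

downsampled = tuple(s // r for s, r in zip(shape, ratio))
# downsampled == (2048, 2048, 512)
```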

Subtract SWCs

import zimg

proc = zimg.ZSwcSubtract()
proc.setInputSwcFilename("full_tree.swc")
proc.setSubtractSwcFilenames(["artifact_1.swc", "artifact_2.swc"])
proc.setOutputSwcFilename("cleaned_tree.swc")
proc.run()

Reading a region (ROI)

ZImgRegion is defined by (x, y, z, c, t) start/end coordinates, where the end coordinate is exclusive. Use -1 in any end component to mean “to the end” of that dimension.

import zimg

region = zimg.ZImgRegion((0, 0, 0, 0, 0), (256, 256, 64, -1, 1))
img = zimg.ZImg("big.ome.tif", region=region)
arr0 = img.to_arrays("numpy")[0]
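The exclusive-end and -1 conventions can be sketched in plain Python; resolve_extent below is a hypothetical helper for illustration, not part of zimg:

```python
def resolve_extent(start, end, size):
    """Resolve one (start, end) pair: end is exclusive, -1 means "to the end"."""
    return start, size if end == -1 else end

# The region from the example above, against hypothetical dataset extents.
starts = (0, 0, 0, 0, 0)         # x, y, z, c, t
ends = (256, 256, 64, -1, 1)
shape = (1024, 1024, 128, 3, 1)  # made-up full extents

resolved = [resolve_extent(s, e, n) for s, e, n in zip(starts, ends, shape)]
# The c axis resolves to (0, 3), i.e. all three channels.
```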

Creating from arrays

The canonical layout is CZYX. Pass layout= if your arrays use a different dimension order. For CPU arrays, ZImg will wrap zero-copy when possible:

  • If layout="CZYX" and the input is CPU C-contiguous: typically zero-copy.
  • Otherwise: it will copy unless copy_if_needed=False (then it raises).

import numpy as np
import zimg

arr = np.zeros((1, 1, 64, 64), dtype=np.uint16)  # C, Z, Y, X
img = zimg.ZImg(arr, layout="CZYX", copy_if_needed=False)  # enforce zero-copy
img.save("zeros.tif")
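If an array is in a different dimension order, you can also rearrange it to CZYX up front instead of passing layout=. A NumPy sketch assuming a ZYXC-ordered input:

```python
import numpy as np

zyxc = np.zeros((4, 32, 32, 2), dtype=np.uint16)       # Z, Y, X, C
czyx = np.ascontiguousarray(np.moveaxis(zyxc, -1, 0))  # C, Z, Y, X, C-contiguous
# czyx.shape == (2, 4, 32, 32); now eligible for zero-copy wrapping
```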

ZImg.to_arrays(framework=...) returns CPU arrays in the requested framework. With framework="auto", it mirrors the input framework if the image was created zero-copy from arrays; otherwise it returns NumPy arrays.

Sub-block / tiled IO

Some formats store images as sub-blocks / tiles (e.g. pyramid levels, chunked layouts). zimg exposes these via:

  • ZImg.readSubBlockLists(...) → per-scene list of NumPy int64 arrays describing sub-blocks. Each sub-block record contains (t, x, y, z, width, height, depth, xRatio, yRatio, zRatio).
  • ZImg.readSubBlock(...) → read an individual sub-block by (scene, blockIndex).
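The 10-field record layout can be mapped onto named fields with a small helper (decode_subblock is a hypothetical convenience, and the record values below are made up):

```python
import numpy as np

FIELDS = ("t", "x", "y", "z", "width", "height", "depth",
          "xRatio", "yRatio", "zRatio")

def decode_subblock(record):
    """Map one 10-element int64 sub-block record onto named fields."""
    return dict(zip(FIELDS, (int(v) for v in record)))

record = np.array([0, 0, 0, 0, 512, 512, 16, 1, 1, 1], dtype=np.int64)  # made up
info = decode_subblock(record)
# info["width"] == 512, info["xRatio"] == 1
```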

Streaming writes (slice/block providers)

ZImg.writeImg(...) can write from a provider instead of requiring a full in-memory ZImg. Implement the provider interface in Python:

  • ZImgSliceProvider: implement imgInfo(), slice(z, t), allSlices(t), wholeImg().
  • ZImgBlockProvider: implement imgInfo(), numBlocks(), blockCoord(i), block(i), wholeImg().

This is useful for very large datasets or pipelines that generate data incrementally.
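A duck-typed sketch of the slice-provider interface, generating each z-slice on demand. Whether writeImg requires subclassing a zimg base class, and the exact imgInfo() return type, depend on the bindings; the dict below is a stand-in:

```python
import numpy as np

class RampSliceProvider:
    """Sketch of the slice-provider interface: each z-slice is generated on
    demand instead of materializing the whole volume up front."""

    def __init__(self, c=1, z=8, y=64, x=64):
        self._shape = (c, z, y, x)

    def imgInfo(self):
        # Real code would return a zimg image-info object; a dict stands in.
        c, z, y, x = self._shape
        return {"channels": c, "depth": z, "height": y, "width": x,
                "dtype": "uint16"}

    def slice(self, z, t):
        c, _, y, x = self._shape
        return np.full((c, y, x), z, dtype=np.uint16)  # one (C, Y, X) plane

    def allSlices(self, t):
        return [self.slice(z, t) for z in range(self._shape[1])]

    def wholeImg(self):
        return np.stack(self.allSlices(0), axis=1)  # (C, Z, Y, X)

vol = RampSliceProvider().wholeImg()
# vol.shape == (1, 8, 64, 64); plane z carries the value z
```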

Compression parameters

ZImgWriteParameters exposes common compression knobs (availability depends on the file format):

  • compression (see zimg.Compression)
  • zlibCompressionLevel
  • jpegQuality, jpegProgressive, jpegChrominanceSubsampling, jpegAccurateDCT
  • jpegXRQuality

File formats

Supported file formats depend on how the wheel/source was built. The zimg.FileFormat enum includes:

  • Tiff, OmeTiff, Png, Jpeg, JpegXR
  • ZeissCZI, ZeissLsm, Leica, BioFormats
  • Vaa3DRaw, HDF5Img, MetaImage, ITKImage
  • FreeImage (optional; omitted when built with -DZIMG_DISABLE_FREEIMAGE=ON)

BioFormats requires OME's bioformats_package.jar, which is not installed by default. Configure an existing jar with zimg.bioformats.configure("/path/to/bioformats_package.jar"), or call zimg.bioformats.download() to fetch the pinned runtime jar and enable it for the current Python process. Always verify availability before reading Bio-Formats-only files:

import zimg

if not zimg.bioformats.is_available():
    # Optional if zimg auto-detects the right Java from JAVA_HOME or PATH.
    # zimg.bioformats.configure_java("/absolute/path/to/java")
    zimg.bioformats.configure("/absolute/path/to/bioformats_package.jar")
    # Or let zimg fetch the pinned jar instead:
    # zimg.bioformats.download()

zimg.bioformats.ensure_available()

img = zimg.ZImg("example.bif", format=zimg.FileFormat.BioFormats)
print(img.to_arrays("numpy")[0].shape)

ensure_available() reports the exact Java executable, atlas-bioformats-bridge.jar, and bioformats_package.jar selected for this process. If no Bio-Formats jar is configured, native formats continue to work and Bio-Formats-backed readers report unavailable.

The wheel does not ship Java. At import time, the Python package checks JAVA_HOME/bin/java first, then falls back to java from PATH if JAVA_HOME is unset or unsuitable. If that auto-detected Java is correct, no extra Java configuration is needed.

If you need a specific Java executable, call zimg.bioformats.configure_java("/absolute/path/to/java") before the first Bio-Formats read/probe. The explicit path replaces the detected Java executable until the bridge process starts; after that, the runtime rejects Java path changes, so the reported path always matches the process in use. The bridge jar is compiled for Java 11, so Bio-Formats reads require Java 11 or newer. Missing or too-old Java runtimes surface as Python exceptions from the Bio-Formats read/probe call; native formats never start Java.
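The detection order described above can be sketched in plain Python; this mirrors the documented behavior, not zimg's actual internals:

```python
import os
import shutil

def find_java():
    """Prefer JAVA_HOME/bin/java, then fall back to java on PATH."""
    java_home = os.environ.get("JAVA_HOME")
    if java_home:
        candidate = os.path.join(java_home, "bin", "java")  # java.exe on Windows
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return shutil.which("java")  # None when no java is on PATH

java_path = find_java()
```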

Notes / limitations

  • ZImg.to_arrays() and ZImg.data expose CPU-backed arrays that reference the ZImg buffers (the Python arrays keep the parent ZImg alive).
  • GPU arrays are not supported.

Building from source

This package reuses Atlas’ native CMake build. Building from source generally requires the same native dependencies as Atlas (compiler toolchain, Qt, and the Atlas third-party libraries built/configured).

For source builds, use regular CPython at or above the minimum supported version (not free-threaded Python). The interpreter used to build may be newer than the wheel's abi3 floor, but the resulting wheel still targets that floor.

From the repo root:

cd python/zimg
python -m pip install .

If you build Atlas via conda recipes, ensure the expected third-party artifacts are present (e.g., src/3rdparty/build/) before building this wheel.

Download files

No source distribution files are available for this release.

Built Distributions

  • zimg-1.0.10-cp312-abi3-win_amd64.whl (60.2 MB): CPython 3.12+, Windows x86-64
  • zimg-1.0.10-cp312-abi3-manylinux_2_35_x86_64.whl (84.9 MB): CPython 3.12+, manylinux glibc 2.35+, x86-64
  • zimg-1.0.10-cp312-abi3-macosx_12_0_universal2.whl (79.5 MB): CPython 3.12+, macOS 12.0+ universal2 (ARM64, x86-64)

