bioio-ome-zarr
A BioIO reader plugin for reading OME ZARR images using ome-zarr
Documentation
See the full documentation on our GitHub Pages site; the generic use and installation instructions there apply to this package.
Information about the base reader this package relies on can be found in the bioio-base repository.
Installation
Stable Release: pip install bioio-ome-zarr
Development Head: pip install git+https://github.com/bioio-devs/bioio-ome-zarr.git
Example Usage (see full documentation for more examples)
Install bioio-ome-zarr alongside bioio:
pip install bioio bioio-ome-zarr
This example shows a simple use case for accessing the pixel data of an image
by explicitly passing this Reader into BioImage. Passing the Reader into
the BioImage instance is optional: bioio automatically detects installed
plug-ins and selects the most recently installed plug-in that supports the
given file.
from bioio import BioImage
import bioio_ome_zarr
img = BioImage("my_file.zarr", reader=bioio_ome_zarr.Reader)
img.data
Reading from AWS S3
To read from private S3 buckets, credentials must be configured. Public buckets can be accessed without credentials.
from bioio import BioImage
path = "https://allencell.s3.amazonaws.com/aics/nuc-morph-dataset/hipsc_fov_nuclei_timelapse_dataset/hipsc_fov_nuclei_timelapse_data_used_for_analysis/baseline_colonies_fov_timelapse_dataset/20200323_09_small/raw.ome.zarr"
image = BioImage(path)
print(image.get_image_dask_data())
Writing OME-Zarr Stores
The OMEZarrWriter can write both Zarr v2 (NGFF 0.4) and Zarr v3 (NGFF 0.5) formats.
Basic writer example (2D, YX)
from bioio_ome_zarr.writers import OMEZarrWriter
import numpy as np
# Minimal 2D example (Y, X)
data = np.random.randint(0, 255, size=(64, 64), dtype=np.uint8)
writer = OMEZarrWriter(
    store="basic.zarr",
    level_shapes=(64, 64),  # (Y, X)
    dtype=data.dtype,
)
# Write the data to the store
writer.write_full_volume(data)
5D (TCZYX), with one extra resolution level
from bioio_ome_zarr.writers import OMEZarrWriter, Channel
import numpy as np
level_shapes = [
    (2, 3, 4, 256, 256),  # L0 full res
    (2, 3, 4, 128, 128),  # L1, Y/X downsampled by 2
]
data = np.random.randint(0, 255, size=level_shapes[0], dtype=np.uint8)
channels = [Channel(label=f"c{i}", color="FF0000") for i in range(data.shape[1])]
writer = OMEZarrWriter(
    store="output.zarr",
    level_shapes=level_shapes,
    dtype=data.dtype,
    zarr_format=3,  # 2 for Zarr v2
    channels=channels,
    axes_names=["t", "c", "z", "y", "x"],
    axes_types=["time", "channel", "space", "space", "space"],
    axes_units=[None, None, "micrometer", "micrometer", "micrometer"],
)
writer.write_full_volume(data)
Full writer parameters and API
| Parameter | Type | Description |
|---|---|---|
| `store` | `str` or `zarr.storage.StoreLike` | Filesystem path, fsspec URL, or Store-like object for the root group. |
| `level_shapes` | `Sequence[int]` or `Sequence[Sequence[int]]` | Either a single N-D shape (one level) or an explicit per-level list of shapes (level 0 first). See the examples above. |
| `dtype` | `np.dtype` or `str` | On-disk dtype (e.g., `uint8`, `uint16`). |
| `chunk_shape` | `Sequence[int]`, `Sequence[Sequence[int]]`, or `None` | Chunk shape: a single shape (applied to all levels) or per-level shapes. If `None`, a ≈16 MiB chunk is suggested per level via `multiscale_chunk_size_from_memory_target`. |
| `shard_shape` | `Sequence[int]`, `Sequence[Sequence[int]]`, or `None` | Zarr v3 only. Single or per-level shard shapes. Each shard dimension must be an integer multiple of the corresponding chunk dimension. |
| `compressor` | `BloscCodec` (v3), `numcodecs.abc.Codec` (v2), or `None` | Compression codec. Defaults to zstd + bitshuffle. |
| `zarr_format` | `Literal[2, 3]` | Target Zarr format: 2 (NGFF 0.4) or 3 (NGFF 0.5). Default 3. |
| `image_name` | `str` or `None` | Name used in `multiscales` metadata. Default: `"Image"`. |
| `channels` | `list[Channel]` or `None` | OMERO-style channel metadata. |
| `rdefs` | `dict` or `None` | OMERO rendering defaults. |
| `creator_info` | `dict` or `None` | Optional creator block (NGFF 0.5). |
| `root_transform` | `dict[str, Any]` or `None` | Optional transform placed at the multiscale root. |
| `axes_names` | `list[str]` or `None` | Axis names; defaults to the last N of `["t", "c", "z", "y", "x"]`. |
| `axes_types` | `list[str]` or `None` | Axis types; defaults to `["time", "channel", "space", …]`. |
| `axes_units` | `list[str]` or `None` | Physical units per axis. |
| `physical_pixel_size` | `list[float]` or `None` | Level-0 physical scale per axis. |
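The shard/chunk rule in the table above can be illustrated with a short helper. This is a hypothetical sketch for clarity, not part of the writer's API: each shard dimension must be an integer multiple of the corresponding chunk dimension.

```python
def shards_align_with_chunks(shard_shape, chunk_shape):
    """Return True if every shard dimension is an integer multiple
    of the corresponding chunk dimension (the Zarr v3 sharding rule)."""
    return all(s % c == 0 for s, c in zip(shard_shape, chunk_shape))

# A 256x256 shard holds a 2x2 grid of 128x128 chunks.
print(shards_align_with_chunks((1, 1, 1, 256, 256), (1, 1, 1, 128, 128)))  # True
print(shards_align_with_chunks((1, 1, 1, 256, 200), (1, 1, 1, 128, 128)))  # False
```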
Methods
- `write_full_volume(input_data: np.ndarray | dask.array.Array) -> None`: Write level 0 and all declared pyramid levels. NumPy arrays are wrapped into Dask using level-0 chunks.
- `write_timepoints(data: np.ndarray | dask.array.Array, *, start_T_src=0, start_T_dest=0, total_T: int | None = None) -> None`: Stream along the T axis from `data` into the store. Spatial axes are downsampled for lower levels; T/C are preserved.
- `preview_metadata() -> dict[str, Any]`: Returns the NGFF metadata dict(s) that would be written (no IO).
Writing a full volume (NumPy or Dask)
# NumPy (wrapped automatically)
writer.write_full_volume(data)
# Or pass an explicit Dask array
import dask.array as da
writer.write_full_volume(da.from_array(data, chunks=(1, 1, 1, 64, 64)))
Writing timepoints in batches (streaming along T)
# Suppose your writer axes include "T"; write timepoints in flexible batches
from bioio import BioImage
import dask.array as da
bioimg = BioImage("/path/to/any/bioimage")
data = bioimg.get_image_dask_data()
# Write the entire timeseries at once
writer.write_timepoints(data)
# Write in 5-timepoint batches
for t in range(0, data.shape[0], 5):
    writer.write_timepoints(
        data,
        start_T_src=t,
        start_T_dest=t,
        total_T=5,
    )
# Write source timepoints [10:20] into destination positions [50:60]
writer.write_timepoints(
    data,
    start_T_src=10,
    start_T_dest=50,
    total_T=10,
)
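The source/destination offsets follow simple slice arithmetic along T. The plain-NumPy sketch below mirrors that slicing (a hypothetical helper for illustration, not the writer's implementation):

```python
import numpy as np

def copy_timepoints(src, dest, start_T_src=0, start_T_dest=0, total_T=None):
    """Copy total_T timepoints from src[start_T_src:...] into
    dest[start_T_dest:...], mirroring write_timepoints' T slicing."""
    if total_T is None:
        total_T = src.shape[0] - start_T_src
    dest[start_T_dest:start_T_dest + total_T] = src[start_T_src:start_T_src + total_T]

src = np.arange(20).reshape(20, 1, 1, 1, 1)   # 20 timepoints
dest = np.zeros((60, 1, 1, 1, 1), dtype=src.dtype)

# Source timepoints [10:20] land in destination positions [50:60]
copy_timepoints(src, dest, start_T_src=10, start_T_dest=50, total_T=10)
print(dest[50, 0, 0, 0, 0])  # prints 10, i.e. source timepoint 10
```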
Custom chunking per level
# Provide one chunk shape per level; must match ndim
chunk_shape = (
    (1, 1, 1, 64, 64),  # level 0
    (1, 1, 1, 32, 32),  # level 1
)
writer = OMEZarrWriter(
    store="custom_chunks.zarr",
    level_shapes=[(1, 1, 2, 256, 256), (1, 1, 2, 128, 128)],
    dtype="uint16",
    zarr_format=3,
    chunk_shape=chunk_shape,
)
# Example data matching the declared shape
arr = np.random.randint(0, 65535, size=(1, 1, 2, 256, 256), dtype=np.uint16)
writer.write_full_volume(arr)
Sharded arrays (v3 only)
from zarr.codecs import BloscCodec, BloscShuffle
writer = OMEZarrWriter(
    store="sharded_v3.zarr",
    level_shapes=[(1, 1, 16, 1024, 1024), (1, 1, 16, 512, 512)],
    dtype="uint8",
    zarr_format=3,
    chunk_shape=[(1, 1, 1, 128, 128), (1, 1, 1, 128, 128)],
    shard_shape=[(1, 1, 1, 256, 256), (1, 1, 1, 256, 256)],
    compressor=BloscCodec(cname="zstd", clevel=5, shuffle=BloscShuffle.bitshuffle),
)
writer.write_full_volume(
    np.random.randint(0, 255, size=(1, 1, 16, 1024, 1024), dtype=np.uint8)
)
Targeting Zarr v2 explicitly (NGFF 0.4)
import numcodecs
writer = OMEZarrWriter(
    store="target_v2.zarr",
    level_shapes=[(2, 1, 4, 256, 256), (2, 1, 4, 128, 128)],
    dtype="uint8",
    zarr_format=2,  # write NGFF 0.4
    compressor=numcodecs.Blosc(
        cname="zstd", clevel=3, shuffle=numcodecs.Blosc.BITSHUFFLE
    ),
)
writer.write_full_volume(
    np.random.randint(0, 255, size=(2, 1, 4, 256, 256), dtype=np.uint8)
)
Writing to S3 (or any fsspec URL)
# Requires creds for private buckets; public can be anonymous
writer = OMEZarrWriter(
    store="s3://my-bucket/path/to/out.zarr",
    level_shapes=(1, 2, 8, 2048, 2048),  # single level (TCZYX), no pyramid
    dtype="uint16",
    zarr_format=3,
)
writer.write_full_volume(
    np.random.randint(0, 65535, size=(1, 2, 8, 2048, 2048), dtype=np.uint16)
)
Writer Utility Functions
multiscale_chunk_size_from_memory_target(level_shapes, dtype, memory_target) -> list[tuple[int, ...]]
Suggests per-level chunk shapes that each fit within a fixed byte budget.
- Works for any ndim (2…5).
- Prioritizes the highest-index axis first (grow X, then Y, then Z, then C, then T).
Example: 16 MiB budget on large pyramids (rightmost-axis first)
from bioio_ome_zarr.writers.utils import multiscale_chunk_size_from_memory_target
# 4D (C, Z, Y, X) across 5 levels
level_shapes = [
    (8, 64, 4096, 4096),
    (8, 64, 2048, 2048),
    (8, 64, 1024, 1024),
    (8, 64, 512, 512),
    (8, 64, 256, 256),
]

# 16 MiB target
chunks = multiscale_chunk_size_from_memory_target(level_shapes, "uint16", 16 << 20)
# chunks == [
#     (1, 1, 2048, 4096),
#     (1, 2, 2048, 2048),
#     (1, 8, 1024, 1024),
#     (1, 32, 512, 512),
#     (2, 64, 256, 256),
# ]
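Each suggested chunk can be checked against the byte budget directly: elements per chunk times the dtype's item size. A quick verification (an illustration, not part of the package API):

```python
import numpy as np

def chunk_nbytes(chunk_shape, dtype):
    """Bytes occupied by one chunk of the given shape and dtype."""
    return int(np.prod(chunk_shape)) * np.dtype(dtype).itemsize

# The level-0 suggestion above is 1 * 1 * 2048 * 4096 * 2 bytes = 16 MiB exactly.
print(chunk_nbytes((1, 1, 2048, 4096), "uint16"))  # 16777216
print(chunk_nbytes((2, 64, 256, 256), "uint16") <= 16 << 20)  # True
```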
add_zarr_level(existing_zarr, scale_factors, compressor=None, t_batch=4) -> None
Appends a new resolution level to an existing v2 OME-Zarr store, writing in time (T) batches.
- `scale_factors`: per-axis scale relative to the previous highest level (tuple of length 5 for T, C, Z, Y, X).
- Automatically determines an appropriate chunk size using `multiscale_chunk_size_from_memory_target`.
- Updates the `multiscales` metadata block with the new level's path and transformations.

Example:

import numcodecs
from bioio_ome_zarr.writers import add_zarr_level

add_zarr_level(
    "my_existing.zarr",
    scale_factors=(1, 1, 0.5, 0.5, 0.5),
    compressor=numcodecs.Blosc(cname="zstd", clevel=3, shuffle=numcodecs.Blosc.BITSHUFFLE),
)
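Under the assumption that each scale factor multiplies the previous highest level's shape per axis, the appended level's shape can be predicted with a small sketch (a hypothetical helper, not the library's code):

```python
import math

def next_level_shape(prev_shape, scale_factors):
    """Shape of the appended level: the previous level's shape scaled
    per axis, floored, and kept at least 1."""
    return tuple(max(1, math.floor(s * f)) for s, f in zip(prev_shape, scale_factors))

# Halving Z, Y, and X of a (2, 3, 4, 128, 128) level:
print(next_level_shape((2, 3, 4, 128, 128), (1, 1, 0.5, 0.5, 0.5)))
# (2, 3, 2, 64, 64)
```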
Using Config Presets
Config presets make it easy to get started with OMEZarrWriter without needing to know all of its options. They inspect your input data and return a configuration dictionary that you can pass directly into the writer.
Visualization preset
The visualization preset (get_default_config_for_viz) creates a multiscale pyramid (full resolution plus downsampled levels along Y/X) suitable for interactive browsing.
import numpy as np
from bioio_ome_zarr.writers import (
    OMEZarrWriter,
    get_default_config_for_viz,
)
data = np.zeros((1, 1, 4, 64, 64), dtype="uint16")
cfg = get_default_config_for_viz(data)
writer = OMEZarrWriter("output.zarr", **cfg)
writer.write_full_volume(data)
This produces a Zarr store with the original data and additional lower-resolution levels for visualization.
Machine learning preset
The ML preset (get_default_config_for_ml) writes only the full-resolution data, chunked to optimize for patch-wise access often used in training pipelines.
Editing Zarrs
bioio-ome-zarr provides a utility for editing metadata of an existing OME-Zarr store in-place without rewriting image data.
The function

from bioio_ome_zarr.writers import edit_metadata

allows you to modify common metadata fields such as:
- Image name
- Channel metadata
- Rendering definitions (rdefs)
- Axis names, types, and units
- Physical pixel size (with automatic pyramid propagation)
- Root-level coordinate transforms
- Creator / provenance information (NGFF v0.5)
Changing Axis Metadata (e.g. ZTX → TYX)
Sometimes an image was written with incorrect or placeholder axis metadata. For example, you may have a dataset whose axes were labeled as:
Z, T, X
but the data actually represents:
T, Y, X
You can correct the axis metadata in-place:
edit_metadata(
    "my_image.ome.zarr",
    axes_names=["t", "y", "x"],
    axes_types=["time", "space", "space"],
    axes_units=["second", "micrometer", "micrometer"],
)
⚠️ Important: This operation updates metadata only.
If the actual array order needs to change (for example, true ZTX data must become TYX data), you must rewrite the array data before updating metadata.
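Reordering axes is an array operation, not a metadata one. A minimal NumPy sketch (using a Z/T swap as an illustration) shows the kind of rewrite needed before calling edit_metadata:

```python
import numpy as np

# Data stored as (Z, T, X) that should be (T, Z, X): move the axes,
# rewrite the array, and only then update the axis metadata.
ztx = np.arange(2 * 3 * 4).reshape(2, 3, 4)   # Z=2, T=3, X=4
tzx = np.transpose(ztx, (1, 0, 2))            # now T=3, Z=2, X=4
print(tzx.shape)  # (3, 2, 4)
```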
Updating Physical Pixel Size
To change the physical pixel size of the base resolution:
edit_metadata(
    "my_image.ome.zarr",
    physical_pixel_size=[1.0, 1.0, 0.5, 0.108, 0.108],
)
This will:
- Update the base resolution scale
- Automatically propagate scale changes to all pyramid levels
- Preserve relative downsampling ratios
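The propagation rule above reduces to simple arithmetic: each level's scale is the base physical pixel size multiplied by that level's downsampling ratio. A sketch of the described behavior (not the library's implementation):

```python
def propagate_scales(base_scale, downsample_ratios):
    """Per-level scales: base physical pixel size times each level's
    per-axis downsampling ratio (ratio 1 at level 0)."""
    return [
        tuple(s * r for s, r in zip(base_scale, ratios))
        for ratios in downsample_ratios
    ]

base = (1.0, 1.0, 0.5, 0.108, 0.108)          # T, C, Z, Y, X
ratios = [(1, 1, 1, 1, 1), (1, 1, 1, 2, 2)]   # level 1 halves Y/X
print(propagate_scales(base, ratios)[1])       # (1.0, 1.0, 0.5, 0.216, 0.216)
```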
Updating Channel Metadata
Channel metadata is written into the OMERO metadata block.
from bioio_ome_zarr.writers import Channel

channels = [
    Channel(label="DAPI", color="#0000FF"),
    Channel(label="GFP", color="#00FF00"),
]

edit_metadata(
    "my_image.ome.zarr",
    channels=channels,
)
Setting Creator Metadata (NGFF v0.5)
For NGFF v0.5 stores:
edit_metadata(
    "my_image.ome.zarr",
    creator_info={
        "name": "bioio-ome-zarr",
        "version": "3.1.0",
    },
)
Notes
- The Zarr store is modified in-place.
- No image data is rewritten/rechunked.
Issues
View all open issues in the bioio-devs organization at once, or check this repository's issue tab.
Development
See CONTRIBUTING.md for information related to developing the code.