# mesh-n-bone

Unified tool for mesh generation, multiresolution mesh creation, skeletonization, and analysis — all parallelized with Dask. Produces meshes in the neuroglancer precomputed format for viewing in neuroglancer.
## Try it in Colab

The example notebook runs the full pipeline end-to-end (build a zarr volume, generate meshes, view in neuroglancer) without any local setup.
## Quick start

```bash
git clone https://github.com/janelia-cellmap/mesh-n-bone.git
cd mesh-n-bone
pixi install

# Create a small example zarr volume
pixi run python examples/create_example_volume.py

# Generate meshes and multiresolution output
pixi run mesh-n-bone meshify examples/meshify-config -n 1

# View volume and meshes in neuroglancer
pixi run mesh-n-bone serve examples --zarr data/example.zarr/seg --meshes output/multires
```

See `examples/` for the full walkthrough.
## Features

- Meshify — Generate meshes from `.zarr`/`.n5` segmentation volumes via marching cubes, with blockwise processing, chunk assembly, simplification, and optional on-the-fly downsampling
- To-Neuroglancer — Convert single-scale meshes into neuroglancer multiresolution Draco-compressed meshes with automatic LOD decimation
- Skeletonize — Extract skeletons from meshes using CGAL mean curvature flow, with pruning, simplification, and metrics
- Analyze — Compute mesh metrics: volume, surface area, curvature, thickness, principal inertia, oriented bounds
## Installation

### With pixi (recommended)

```bash
git clone https://github.com/janelia-cellmap/mesh-n-bone.git
cd mesh-n-bone
pixi install
```

### With pip

```bash
pip install mesh-n-bone
```
### Building the CGAL skeletonizer (optional)

The skeletonization module requires a compiled C++ binary. Build it with:

```bash
pixi run -e build-cgal build-cgal
```

This uses a separate pixi environment with CGAL, Boost, and Eigen dependencies.
## Usage

All commands are available through the mesh-n-bone CLI:

```bash
mesh-n-bone <command> [options]
```
## Commands

### meshify — Generate meshes from segmentation volumes

```bash
mesh-n-bone meshify CONFIG_PATH -n NUM_WORKERS [--roi begin_z,begin_y,begin_x,end_z,end_y,end_x]
```
Reads a .zarr or .n5 segmentation volume, runs marching cubes per chunk, assembles across chunk boundaries (with boundary deduplication), optionally simplifies and smooths, and writes output as PLY or neuroglancer format.
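The boundary deduplication in the assembly step above can be sketched in a few lines of numpy: vertices emitted by adjacent chunks that land on the same boundary point are merged, and the face indices are remapped. This is an illustration of the idea only, not the package's actual implementation; the function name and rounding tolerance are hypothetical.

```python
import numpy as np

def merge_duplicate_vertices(vertices, faces, decimals=6):
    """Merge coinciding vertices (e.g. on shared chunk boundaries)
    and remap face indices accordingly. Illustrative sketch only."""
    # Round to absorb floating-point jitter across chunk borders.
    rounded = np.round(vertices, decimals)
    unique_verts, inverse = np.unique(rounded, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)  # guard against numpy-version shape quirks
    return unique_verts, inverse[faces]

# Two triangles produced by adjacent chunks that share an edge:
verts = np.array([
    [0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],  # chunk A
    [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [1.0, 1.0, 0.0],  # chunk B
])
faces = np.array([[0, 1, 2], [3, 4, 5]])
v, f = merge_duplicate_vertices(verts, faces)
print(len(v))  # 4 unique vertices instead of 6
```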
Example meshify `run-config.yaml`:

```yaml
# ── Required ──
input_path: /path/to/segmentation.zarr/s0  # Path to zarr/n5 segmentation dataset
output_directory: /path/to/output          # Where to write output meshes

# ── All remaining fields are optional ──

# ── Mesh generation ──
downsample_factor: 2             # Downsample volume by this factor before meshing (default: none)
downsample_method: mode          # Downsampling method: mode, mode_suppress_zero, or binary (default: mode_suppress_zero)

# ── Simplification & smoothing ──
do_simplification: true          # Simplify meshes after assembly (default: true)
target_reduction: 0.99           # Fraction of faces to remove (default: 0.99)
n_smoothing_iter: 10             # Taubin smoothing iterations (default: 10)
check_mesh_validity: false       # Require watertight meshes (default: true; disable for ROI)
use_fixed_edge_simplification: true  # Preserve chunk boundary edges during simplification (default: false)
do_analysis: false               # Compute mesh metrics CSV (default: true)

# ── Multiresolution output ──
do_multires: true                # Also generate neuroglancer multilod_draco output (default: false)
num_lods: 3                      # Number of levels of detail (default: 3)
multires_strategy: decimate      # LOD strategy: decimate or downsample (default: decimate)
decimation_factor: 4             # Face reduction factor per LOD (default: 4)
delete_decimated_meshes: true    # Remove intermediate LOD mesh files (default: true)

# ── Coordinate system ──
# Voxel size is read automatically from the dataset metadata (OME-NGFF or
# zarr attributes). Use voxel_size_nm only to override when the metadata is
# missing or incorrect. It affects mesh vertex scaling, not ROI coordinates.
voxel_size_nm: [1000, 1000, 1000]  # Override voxel size (ZYX)

# ── Segment properties ──
segment_properties_csv: /path/to/properties.csv  # CSV with per-segment metadata
segment_properties_columns: [col1, col2]         # Which columns to include (default: all)
segment_properties_id_column: "Object ID"        # CSV column with segment IDs (default: "Object ID")

# ── Region of interest ──
roi:                             # Restrict processing to this subregion
  begin: [100, 200, 300]         # Start coordinates in dataset world units (ZYX)
  end: [500, 600, 700]           # End coordinates in dataset world units (ZYX)
# Boundary edges are preserved during simplification.
# Can also be passed via CLI: --roi z0,y0,x0,z1,y1,x1
```
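For intuition, `downsample_method: mode` keeps the most common label in each block of the downsampling grid. A rough pure-numpy sketch of that behavior (the package itself uses a Numba JIT kernel; this loop-based version is illustrative only and omits the `mode_suppress_zero` variant, which prefers nonzero labels):

```python
import numpy as np

def mode_downsample(volume, factor=2):
    """Downsample a label volume by taking the most common label in each
    factor**3 block. Illustrative sketch, not the package's kernel."""
    z, y, x = (s // factor for s in volume.shape)
    blocks = volume[:z * factor, :y * factor, :x * factor].reshape(
        z, factor, y, factor, x, factor)
    # Gather each block's voxels into the last axis.
    blocks = blocks.transpose(0, 2, 4, 1, 3, 5).reshape(z, y, x, factor ** 3)
    out = np.empty((z, y, x), dtype=volume.dtype)
    for idx in np.ndindex(z, y, x):
        labels, counts = np.unique(blocks[idx], return_counts=True)
        out[idx] = labels[np.argmax(counts)]
    return out

vol = np.zeros((4, 4, 4), dtype=np.uint32)
vol[:2, :2, :2] = 7  # one corner entirely label 7
small = mode_downsample(vol, factor=2)
print(small[0, 0, 0], small.shape)  # 7 (2, 2, 2)
```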
### to-neuroglancer — Convert existing meshes to neuroglancer multiresolution format

```bash
mesh-n-bone to-neuroglancer CONFIG_PATH -n NUM_WORKERS [--roi begin_x,begin_y,begin_z,end_x,end_y,end_z]
```
Takes existing meshes (e.g. PLY files), decimates them at multiple LODs using pyfqmr, decomposes into spatial fragments, Draco-compresses, and writes the neuroglancer multilod_draco format. Use this when you already have single-scale meshes and just need the neuroglancer format.
Config directory must contain `run-config.yaml` and `dask-config.yaml`. Example `run-config.yaml`:

```yaml
required_settings:
  input_path: /path/to/meshes      # Directory containing LOD 0 mesh files (e.g. PLY)
  output_path: /path/to/output     # Where to write neuroglancer output
  num_lods: 6                      # Number of levels of detail to generate
optional_decimation_settings:
  box_size: 4                      # LOD 0 fragment size in world units (scalar or [x, y, z])
  skip_decimation: false           # Set true to reuse previously decimated meshes
  decimation_factor: 4             # Face reduction factor per LOD (default: 2)
  aggressiveness: 10               # pyfqmr decimation aggressiveness (default: 7)
  delete_decimated_meshes: true    # Remove intermediate LOD mesh files when done
  roi:                             # Only process meshes intersecting this region (XYZ)
    begin: [0, 0, 0]
    end: [1000, 1000, 1000]
optional_properties_settings:
  segment_properties_csv: /path/to/properties.csv  # CSV with per-segment metadata
  segment_properties_columns: [col1, col2]         # Which columns to include (default: all)
  segment_properties_id_column: "Object ID"        # CSV column with segment IDs
```
box_size can be a scalar (applied to all axes) or a 3-element list for per-axis control, which prevents degenerate triangles on elongated meshes.
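Conceptually, the fragment decomposition step assigns each piece of a mesh to a box_size-sized cell of a spatial grid. The numpy sketch below is a hypothetical illustration of that mapping only; the real implementation works on mesh fragments with z-order keys, not individual vertices.

```python
import numpy as np

def fragment_indices(vertices, box_size):
    """Map each vertex to the (i, j, k) index of the box_size-sized grid
    cell containing it. box_size may be a scalar or a 3-element list (XYZ)."""
    box = np.broadcast_to(np.asarray(box_size, dtype=float), (3,))
    return np.floor(vertices / box).astype(np.int64)

verts = np.array([[0.5, 0.5, 0.5],
                  [4.2, 0.1, 0.3],
                  [7.9, 7.9, 7.9]])
print(fragment_indices(verts, box_size=4))
# First vertex lands in cell (0, 0, 0); the others in (1, 0, 0) and (1, 1, 1).
```

A per-axis `box_size` simply makes the grid cells anisotropic, which is why a 3-element list helps with elongated meshes.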
### skeletonize — Skeletonize meshes using CGAL

```bash
mesh-n-bone skeletonize CONFIG_PATH -n NUM_WORKERS
```
Runs CGAL mean curvature flow skeletonization on all meshes in a directory. Produces skeleton files, metrics (longest shortest path, radius statistics, branch counts), and neuroglancer skeleton format output.
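The longest-shortest-path metric is the skeleton's graph diameter. On a tree-shaped skeleton it can be found exactly with two breadth-first searches. The sketch below is unweighted (it counts edges), whereas the actual metric presumably weights by edge length; names are illustrative.

```python
from collections import deque

def longest_shortest_path(adjacency, start):
    """Length (in edges) of the longest shortest path in a graph,
    via the classic double-BFS trick (exact on trees)."""
    def bfs_farthest(src):
        dist = {src: 0}
        queue = deque([src])
        while queue:
            node = queue.popleft()
            for nbr in adjacency[node]:
                if nbr not in dist:
                    dist[nbr] = dist[node] + 1
                    queue.append(nbr)
        far = max(dist, key=dist.get)
        return far, dist[far]

    # BFS from any node reaches one endpoint of the diameter;
    # BFS from that endpoint reaches the other.
    endpoint, _ = bfs_farthest(start)
    _, length = bfs_farthest(endpoint)
    return length

# A Y-shaped skeleton: trunk 0-1-2 with branches 2-3 and 2-4-5
skeleton = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
print(longest_shortest_path(skeleton, 0))  # 4 (the path 0-1-2-4-5)
```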
### skeletonize-single — Skeletonize a single mesh

```bash
mesh-n-bone skeletonize-single INPUT_FILE OUTPUT_FILE [--subdivisions N] [--neuroglancer]
```
### analyze — Analyze mesh geometry

```bash
mesh-n-bone analyze CONFIG_PATH -n NUM_WORKERS
```
Computes per-mesh metrics using trimesh and pymeshlab: volume, surface area, curvature (mean, Gaussian, RMS, absolute), thickness (shape diameter function), principal inertia components, and oriented bounding box dimensions. Outputs a CSV.
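Two of these metrics, volume and surface area, follow directly from the triangle data: area is the sum of per-triangle cross-product magnitudes, and volume comes from the divergence theorem (summed signed tetrahedra against the origin). The package itself relies on trimesh and pymeshlab; this minimal numpy sketch only shows the underlying math.

```python
import numpy as np

def mesh_volume_and_area(vertices, faces):
    """Volume (divergence theorem) and surface area of a closed,
    consistently oriented triangle mesh. Illustrative sketch."""
    tris = vertices[faces]                      # (n_faces, 3, 3)
    a, b, c = tris[:, 0], tris[:, 1], tris[:, 2]
    cross = np.cross(b - a, c - a)
    area = 0.5 * np.linalg.norm(cross, axis=1).sum()
    # Sum of signed tetrahedron volumes spanned with the origin.
    volume = np.einsum("ij,ij->i", a, np.cross(b, c)).sum() / 6.0
    return abs(volume), area

# Unit cube with outward-facing triangles
v = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
              [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], dtype=float)
f = np.array([[0, 2, 1], [0, 3, 2], [4, 5, 6], [4, 6, 7],
              [0, 1, 5], [0, 5, 4], [1, 2, 6], [1, 6, 5],
              [2, 3, 7], [2, 7, 6], [3, 0, 4], [3, 4, 7]])
vol, area = mesh_volume_and_area(v, f)
print(vol, area)  # 1.0 6.0
```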
### serve — Serve data for neuroglancer viewing

```bash
mesh-n-bone serve PATH [--zarr ZARR_PATH] [--meshes MESHES_PATH] [--port PORT]
```
Starts a local HTTP server with CORS headers and prints a neuroglancer URL. Use --zarr and --meshes to specify relative paths within PATH to a zarr/n5 volume and precomputed meshes, respectively. Default port is 9015.
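Such a server needs little more than the standard library plus a CORS header, since neuroglancer fetches volumes and meshes cross-origin. A minimal sketch of the idea (illustrative only, not mesh-n-bone's actual implementation):

```python
import http.server

class CORSRequestHandler(http.server.SimpleHTTPRequestHandler):
    """Static file server that adds the CORS header neuroglancer needs."""
    def end_headers(self):
        # Allow cross-origin requests on every response.
        self.send_header("Access-Control-Allow-Origin", "*")
        super().end_headers()

# To serve the current directory on mesh-n-bone's default port:
#   http.server.HTTPServer(("localhost", 9015), CORSRequestHandler).serve_forever()
```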
## Dask configuration

All pipeline commands use Dask for parallelism. The config directory must contain a `dask-config.yaml` specifying the cluster type. Supported types: local, lsf, slurm, and sge.
When running with -n 1, no cluster is created and no config file is needed — work runs synchronously in the calling process.
See dask-jobqueue configuration for all cluster options.
## Running on an HPC cluster

### LSF (bsub)

To run on an LSF cluster, submit the driver process via bsub. The driver launches Dask workers as separate LSF jobs:

```bash
bsub -n 2 -P your_project_name mesh-n-bone meshify lsf-config -n 40
```

This submits a 2-slot driver job that creates a 40-worker Dask cluster. Each worker is launched as its own LSF job using the settings in `lsf-config/dask-config.yaml`.

With pixi:

```bash
bsub -n 2 -P your_project_name pixi run mesh-n-bone meshify lsf-config -n 40
```
An example LSF dask config is provided in `lsf-config/dask-config.yaml`:

```yaml
jobqueue:
  lsf:
    ncpus: 48
    processes: 40
    cores: 40
    memory: 720GB
    walltime: 01:00
    mem: 720000000000
    use-stdin: true
    log-directory: job-logs
    name: mesh-n-bone
    project: your_project_name
```
Update project to your LSF project/queue allocation and adjust ncpus, memory, and walltime for your cluster.
### SLURM / SGE

The same pattern applies — submit the driver via your scheduler and set the cluster type in `dask-config.yaml`:

```yaml
# SLURM
jobqueue:
  slurm:
    cores: 40
    memory: 720GB
    walltime: "01:00:00"
    # ... other SLURM-specific options
```

```yaml
# SGE
jobqueue:
  sge:
    cores: 40
    memory: 720GB
    # ... other SGE-specific options
```
## Testing

```bash
pixi run -e test test
```
The test suite includes unit tests and integration tests covering:
- Full meshify pipeline from zarr volumes (cross-chunk assembly, watertightness, volume accuracy)
- Multiresolution decomposition and Draco compression
- Mesh decimation across multiple LODs
- Downsampling methods (mode, mode-suppress-zero, binary mode)
- Mesh analysis metrics (volume, area, curvature, thickness)
- Skeleton processing (pruning, simplification, longest shortest path)
- Watertightness preservation after simplification and repair
- Fixed-edge boundary-preserving simplification
- Neuroglancer format output (ngmesh, multilod_draco, annotations)
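As an example of the watertightness checks above: a triangle mesh is closed exactly when every edge is shared by two faces. A small numpy sketch of that test (illustrative; not the suite's actual code):

```python
import numpy as np

def is_watertight(faces):
    """True iff every edge of the triangle mesh is shared by exactly
    two faces (i.e. the surface is closed). Illustrative sketch."""
    # Collect all three edges of every face, sorted so direction is ignored.
    edges = np.sort(np.concatenate([faces[:, [0, 1]],
                                    faces[:, [1, 2]],
                                    faces[:, [2, 0]]]), axis=1)
    _, counts = np.unique(edges, axis=0, return_counts=True)
    return bool(np.all(counts == 2))

tetra = np.array([[0, 1, 2], [0, 3, 1], [0, 2, 3], [1, 3, 2]])
print(is_watertight(tetra))      # True: closed tetrahedron
print(is_watertight(tetra[:3]))  # False: one face removed, surface is open
```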
## Project structure

```text
src/mesh_n_bone/
  cli.py                    # Unified CLI
  config.py                 # YAML config parsing
  meshify/                  # Volume → mesh generation
    meshify.py              # Main pipeline (zmesh, chunk assembly)
    downsample.py           # Numba JIT blockwise downsampling
    fixed_edge.py           # Boundary-preserving simplification
  multires/                 # Multiresolution mesh creation
    multires.py             # Pipeline orchestrator
    decomposition.py        # Spatial fragment decomposition + Draco
    decimation.py           # pyfqmr LOD decimation
  skeletonize/              # Mesh skeletonization
    skeletonize.py          # CGAL skeletonization pipeline
    skeleton.py             # Skeleton data structure + operations
  analyze/                  # Mesh analysis
    analyze.py              # Volume, curvature, thickness metrics
  util/                     # Shared utilities
    dask_util.py            # Dask cluster management
    mesh_io.py              # Mesh I/O, fragments, z-order
    neuroglancer.py         # Neuroglancer format writers
    logging.py              # Timing, logging, stream capture
  cgal_skeletonize_mesh/    # C++ CGAL skeletonizer source + binary
tests/                      # Unit and integration tests
```
## Acknowledgments
Thanks to Luca Marconato for the pixi configuration that informed macOS support and the neuroglancer serving approach (#6).