# Kimimaro: Skeletonize Densely Labeled Images
```bash
# Produce SWC files from volumetric images.
kimimaro forge labels.npy --progress # writes to ./kimimaro_out/
kimimaro view kimimaro_out/10.swc
```
Rapidly skeletonize all nonzero labels in 2D and 3D numpy arrays using a TEASAR derived method. The returned list of skeletons is in the format used by cloudvolume. A skeleton is a stick figure 1D representation of a 2D or 3D object that consists of a graph of vertices linked by edges. A skeleton where the vertices also carry a distance to the nearest boundary they were extracted from is called a "Medial Axis Transform", which Kimimaro provides.
Skeletons are a compact representation that can be used to visualize objects, trace the connectivity of an object, or otherwise analyze the object's geometry. Kimimaro was designed for use with high resolution neurons extracted from electron microscopy data via AI segmentation, but it can be applied to many different fields.
On an Apple Silicon M1 arm64 chip (Firestorm cores 3.2 GHz max frequency), this package processed a 512x512x100 volume with 333 labels in 20 seconds. It processed a 512x512x512 volume (`connectomics.npy`) with 2124 labels in 187 seconds.
Fig. 1: A Densely Labeled Volume Skeletonized with Kimimaro
## Installation
If a binary is available for your platform:

```bash
pip install numpy
pip install kimimaro
```

Otherwise, you'll also need a C++ compiler:

```bash
sudo apt-get install python3-dev g++ # ubuntu linux
```
## Example
Fig. 2: Memory Usage on a 512x512x512 Densely Labeled Volume (`connectomics.npy`)
Figure 2 shows the memory usage and processing time (~390 seconds, about 6.5 minutes) required when Kimimaro 1.4.0 was applied to a 512x512x512 cutout, `labels`, from a connectomics dataset (`connectomics.npy`) containing 2124 connected components. The different sections of the algorithm are depicted. Grossly, the preamble runs for about half a minute, skeletonization for about six minutes, and finalization within seconds. The peak memory usage was about 4.5 GB. The code below was used to process `labels`. The processing of the glia was truncated due to a combination of `fix_borders` and `max_paths`.
Kimimaro has come a long way. Version 0.2.1 took over 15 minutes and had a Preamble run time twice as long on the same dataset.
## Python Interface
```python
# LISTING 1: Producing Skeletons from a labeled image.

import numpy as np
import kimimaro

# Run `lzma -d connectomics.npy.lzma` on the command line to
# obtain this 512 MB segmentation volume. Details below.
labels = np.load("connectomics.npy")

skels = kimimaro.skeletonize(
  labels,
  teasar_params={
    "scale": 1.5,
    "const": 300, # physical units
    "pdrf_scale": 100000,
    "pdrf_exponent": 4,
    "soma_acceptance_threshold": 3500, # physical units
    "soma_detection_threshold": 750, # physical units
    "soma_invalidation_const": 300, # physical units
    "soma_invalidation_scale": 2,
    "max_paths": 300, # default None
  },
  # object_ids=[ ... ], # process only the specified labels
  # extra_targets_before=[ (27,33,100), (44,45,46) ], # target points in voxels
  # extra_targets_after=[ (27,33,100), (44,45,46) ], # target points in voxels
  dust_threshold=1000, # skip connected components with fewer than this many voxels
  anisotropy=(16,16,40), # default (1,1,1)
  fix_branching=True, # default True
  fix_borders=True, # default True
  fill_holes=False, # default False
  fix_avocados=False, # default False
  progress=True, # default False, show progress bar
  parallel=1, # <= 0 all cpu, 1 single process, 2+ multiprocess
  parallel_chunk_size=100, # how many skeletons to process before updating progress bar
)
```
```python
# LISTING 2: Combining skeletons produced from
# adjacent or overlapping images.

import kimimaro
from cloudvolume import Skeleton

skels = ... # a set of skeletons produced from the same label id
skel = Skeleton.simple_merge(skels).consolidate()
skel = kimimaro.postprocess(
  skel,
  dust_threshold=1000, # physical units
  tick_threshold=3500 # physical units
)
```
```python
# LISTING 3: Adding cross sectional area to skeletons
# Cross section planes are defined by normal vectors. Those
# vectors come from the difference between adjacent vertices.

skels = ... # one or more skeletons produced from a single image
skels = kimimaro.cross_sectional_area(
  labels, skels,
  anisotropy=(16,16,40),
  smoothing_window=5, # rolling average window of plane normals
  progress=True,
)
skel = skels[0]
skel.cross_sectional_area # array of cross sectional areas
skel.cross_sectional_area_contacts # nonzero entries contacted the image border

# Split input skeletons into connected components and
# then join the two nearest vertices within `radius` distance
# of each other until there is only a single connected component
# or no pairs of points nearer than `radius` exist.
# Fuse all remaining components into a single skeleton.
skel = kimimaro.join_close_components([skel1, skel2], radius=1500) # 1500 units threshold
skel = kimimaro.join_close_components([skel1, skel2], radius=None) # no threshold

# Given synapse centroids (in voxels) and the SWC integer label you'd
# like to assign (e.g. for presynaptic and postsynaptic) this finds the
# nearest voxel to the centroid for that label.
# Input: { label: [ ((x,y,z), swc_label), ... ] }
# Returns: { (x,y,z): swc_label, ... }
extra_targets = kimimaro.synapses_to_targets(labels, synapses)
```
```python
# LISTING 4: Drawing a centerline between
# preselected points on a binary image.
# This is a much simpler option for when
# you know exactly what you want, but may
# be less efficient for large scale processing.

skel = kimimaro.connect_points(
  labels == 67301298,
  start=(3, 215, 202),
  end=(121, 426, 227),
  anisotropy=(32,32,40),
)
```
```python
# LISTING 5: Using skeletons to oversegment existing
# segmentations for integration into proofreading systems
# that rely on merging atomic labels. oversegmented_labels
# is returned numbered from 1. skels is a copy returned
# with the property skel.segments that associates a label
# to each vertex (labels will not be unique if downsampling
# is used)

oversegmented_labels, skels = kimimaro.oversegment(
  labels, skels,
  anisotropy=(32,32,40),
  downsample=10,
)
```
`connectomics.npy` is multilabel connectomics data derived from pinky40, a 2018 experimental automated segmentation of ~1.5 million cubic micrometers of mouse visual cortex. It is an early predecessor to the now public pinky100_v185 segmentation that can be found at https://microns-explorer.org/phase1 You will need to run `lzma -d connectomics.npy.lzma` to obtain the 512x512x512 uint32 volume at 32x32x40 nm^{3} resolution.
## CLI Interface
The CLI supports producing skeletons from a single image as SWCs and viewing the resulting SWC files one at a time. By default, the SWC files are written to `./kimimaro_out/$LABEL.swc`.
Here's an equivalent example to the code above.

```bash
kimimaro forge labels.npy --scale 4 --const 10 --soma-detect 1100 --soma-accept 3500 --soma-scale 1 --soma-const 300 --anisotropy 16,16,40 --fix-borders --progress
```
Visualize your data:

```bash
kimimaro view 1241241.swc # visualize skeleton
kimimaro view labels.npy # visualize segmentation
```
It can also convert binary image skeletons produced by thinning algorithms into SWC files and back. This can be helpful for comparing different skeletonization algorithms or even just using their results.
```bash
kimimaro swc from binary_image.tiff # > binary_image.swc
kimimaro swc to --format tiff binary_image.swc # > binary_image.tiff or npy
```
## Tweaking `kimimaro.skeletonize` Parameters
This algorithm works by finding a root point on a 3D object and then serially tracing paths via Dijkstra's shortest path algorithm through a penalty field to the most distant unvisited point. After each pass, a sphere (really a circumscribing cube) expands around each vertex in the current path and marks part of the object as visited.
For a visual tutorial on the basics of the skeletonization procedure, check out this wiki article: A Pictorial Guide to TEASAR Skeletonization
For more detailed information, read below or the TEASAR paper (though we deviate from TEASAR in a few places). [1]
### `scale` and `const`

Usually, the most important parameters to tweak are `scale` and `const`, which control the radius of this invalidation sphere according to the equation `r(x,y,z) = scale * DBF(x,y,z) + const`, where the dimensions are physical (e.g. nanometers, i.e. corrected for anisotropy). `DBF(x,y,z)` is the physical distance from the shape boundary at that point.
Check out this wiki article to help refine your intuition.
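As a quick illustration of the radius formula above (not Kimimaro's internal code; `invalidation_radius` is a hypothetical helper name), the effect of `scale` and `const` can be evaluated directly:

```python
# Hypothetical illustration of r(x,y,z) = scale * DBF(x,y,z) + const.
# All quantities are in physical units (e.g. nanometers).
def invalidation_radius(dbf_nm, scale=1.5, const=300):
    """Radius of the sphere that marks voxels as visited around a
    path vertex whose distance-to-boundary is dbf_nm."""
    return scale * dbf_nm + const

# A thin neurite (DBF ~ 50 nm) gets a small invalidation sphere,
# a thick trunk (DBF ~ 1000 nm) a much larger one.
print(invalidation_radius(50.0))    # 375.0
print(invalidation_radius(1000.0))  # 1800.0
```

Raising `const` inflates the sphere everywhere; raising `scale` inflates it preferentially in thick regions.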
### `anisotropy`

Represents the physical dimension of each voxel. For example, a connectomics dataset might be scanned with an electron microscope at 4nm x 4nm per pixel and stacked in slices 40nm thick, i.e. `anisotropy=(4,4,40)`. You can use any units so long as you are consistent.
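To make the idea concrete, here is a tiny sketch (the helper `to_physical` is hypothetical, not part of kimimaro's API) of how an anisotropy triple converts voxel-grid coordinates into physical coordinates:

```python
# Hypothetical helper: scale voxel coordinates by the anisotropy
# triple to obtain physical coordinates (e.g. nanometers).
def to_physical(voxel_xyz, anisotropy=(4, 4, 40)):
    return tuple(v * a for v, a in zip(voxel_xyz, anisotropy))

# Ten voxels along z span 400 nm while ten voxels along x span
# only 40 nm when slices are 40 nm thick.
print(to_physical((10, 10, 10)))  # (40, 40, 400)
```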
### `dust_threshold`

This threshold culls connected components that are smaller than this many voxels.
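A sketch of what that culling means (this is an illustration with a toy threshold, not kimimaro's internal implementation): count the voxels per label and skip the ones below the threshold.

```python
import numpy as np

# Toy labeled image: label 1 is "dust", label 2 is substantial.
labels = np.zeros((10, 10), dtype=np.uint32)
labels[0, 0:3] = 1    # 3 voxels: dust
labels[2:8, 2:8] = 2  # 36 voxels: kept

# Count voxels per nonzero label and apply an illustrative
# dust_threshold of 10 voxels.
ids, counts = np.unique(labels[labels != 0], return_counts=True)
kept = ids[counts >= 10]
print(kept)  # [2]
```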
### `extra_targets_after` and `extra_targets_before`

`extra_targets_after` provides additional voxel targets to trace to after the morphological tracing algorithm completes. For example, you might add known synapse locations to the skeleton.

`extra_targets_before` is the same as `extra_targets_after` except that the additional targets are frontloaded and the paths that they cover are invalidated. This may affect the results of subsequent morphological tracing.
### `max_paths`

Limits the number of paths that can be drawn for the given label. Certain cells, such as glia, that may not be important for the current analysis may be expensive to process and can be aborted early.
### `pdrf_scale` and `pdrf_exponent`

The `pdrf_scale` and `pdrf_exponent` are parameters to the penalty equation that takes the euclidean distance field (D) and augments it so that cutting close to the border is heavily penalized, making Dijkstra take paths that are more centered.

P_{r} = `pdrf_scale` * (1 - D / max(D))^`pdrf_exponent` + (directional gradient < 1.0)
The default settings should work fairly well, but under large anisotropies or with cavernous morphologies, it's possible that you might need to tweak it. If you see the skeleton go haywire inside a large area, it could be a collapse of floating point precision.
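The penalty term can be sketched in a few lines of numpy (`pdrf_penalty` is a hypothetical name, and D here stands in for the distance-to-boundary field): voxels near the border receive a penalty close to `pdrf_scale`, while the most central voxel costs nothing.

```python
import numpy as np

# Hedged sketch of the penalty term from the equation above.
def pdrf_penalty(D, pdrf_scale=100000, pdrf_exponent=4):
    D = np.asarray(D, dtype=np.float64)
    return pdrf_scale * (1.0 - D / D.max()) ** pdrf_exponent

# Toy distance-to-boundary values: near-border, middling, central.
penalty = pdrf_penalty([1.0, 5.0, 10.0])
print(penalty)  # decreasing; the most central voxel has zero penalty
```

The large `pdrf_scale` factor and high exponent make the field fall off steeply away from the border, which is why precision problems can appear in very wide, open regions.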
### `soma_acceptance_threshold` and `soma_detection_threshold`

We process somas specially because they do not have a tubular geometry and instead should be represented in a hub and spoke manner. `soma_acceptance_threshold` is the physical radius (e.g. in nanometers) beyond which we classify a connected component of the image as containing a soma. The distance transform's output is depressed by holes in the label, which are frequently produced by segmentation algorithms on somata. We can fill them, but the hole filling algorithm we use is slow, so we would like to apply it only occasionally. Therefore, we set a lower threshold, the `soma_detection_threshold`, beyond which we fill the holes and retest the soma.
### `soma_invalidation_scale` and `soma_invalidation_const`

Once we have classified a region as a soma, we fix the root of the skeletonization algorithm at one of the points of maximum distance from the boundary (usually there is only one). We then mark as visited all voxels around that point in a spherical radius described by `r(x,y,z) = soma_invalidation_scale * DBF(x,y,z) + soma_invalidation_const`, where `DBF(x,y,z)` is the physical distance from the shape boundary at that point. If done correctly, this can prevent skeletons from being drawn to the boundaries of the soma, and instead pulls the skeletons mainly into the processes extending from the cell body.
### `fix_borders`

This feature makes it easier to connect the skeletons of adjacent image volumes that do not fit in RAM. If enabled, skeletons will be deterministically drawn to the approximate center of the 2D contact area of each place where the shape contacts the border. This can affect the performance of the operation positively or negatively depending on the shape and number of contacts.
### `fix_branching`

You'll probably never want to disable this, but base TEASAR is infamous for forking the skeleton at branch points way too early. This option makes it preferentially fork at a more reasonable place, at a significant performance penalty.
### `fill_holes`

Warning: This will remove input labels that are deemed to be holes.

If your segmentation contains artifacts that cause holes to appear in labels, you can preprocess the entire image to eliminate background holes and holes caused by entirely contained inclusions. This option adds a moderate amount of additional processing time at the beginning (perhaps ~30%).
### `fix_avocados`

Avocados are segmentations of cell somata that classify the nucleus separately from the cytoplasm. This is a common problem in automatic segmentations due to the visual similarity of a cell membrane and a nuclear membrane combined with insufficient context.

Skeletonizing an avocado results in a poor skeletonization of the cell soma that will disconnect the nucleus and usually results in too many paths traced around the nucleus. Setting `fix_avocados=True` attempts to detect and fix these problems. Currently we handle non-avocados, avocados, cells with inclusions, and nested avocados. You can see examples here.
### `progress`

Show a progress bar once the skeletonization phase begins.
### `parallel`

Use a pool of processors to skeletonize faster. Each process allocatable task is the skeletonization of one connected component (so it won't help with a single label that takes a long time to skeletonize). This option also affects the speed of the initial euclidean distance transform, which is parallel-enabled and is the most expensive part of the Preamble (described below).
### `parallel_chunk_size`

This only applies when using parallel. This sets the number of skeletons a subprocess will extract before returning control to the main thread, updating the progress bar, and acquiring a new task. If this value is set too low (e.g. < 10-20), the cost of interprocess communication can become significant and even dominant. If it is set too high, task starvation may occur for the other subprocesses if a subprocess gets a particularly hard skeleton and they complete quickly. Progress bar updates will also be infrequent if the value is too high.

The actual chunk size used will be `min(parallel_chunk_size, len(cc_labels) // parallel)`. `cc_labels` represents the number of connected components in the sample.
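The formula above can be sketched as a runnable one-liner (`effective_chunk_size` is a hypothetical helper name for illustration):

```python
# Effective chunk size per the formula:
# min(parallel_chunk_size, len(cc_labels) // parallel)
def effective_chunk_size(parallel_chunk_size, num_cc_labels, parallel):
    return min(parallel_chunk_size, num_cc_labels // parallel)

# With 2124 components over 4 processes, the requested 100 stands.
print(effective_chunk_size(100, 2124, 4))  # 100
# With only 12 components, the chunk shrinks so all processes get work.
print(effective_chunk_size(100, 12, 4))    # 3
```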
## Performance Tips

- If you only need a few labels skeletonized, pass in `object_ids` to bypass processing all the others. If `object_ids` contains only a single label, the masking operation will run faster.
- You may save on peak memory usage by using a `cc_safety_factor` < 1, only if you are sure the connected components algorithm will generate many fewer labels than there are pixels in your image.
- Larger TEASAR parameters `scale` and `const` require processing larger invalidation regions per path.
- Set `pdrf_exponent` to a small power of two (e.g. 1, 2, 4, 8, 16) for a small speedup.
- If you are willing to sacrifice the improved branching behavior, you can set `fix_branching=False` for a moderate 1.1x to 1.5x speedup (assuming your TEASAR parameters and data allow branching).
- If your dataset contains important cells (that may in fact be the seat of consciousness) but they take significant processing power to analyze, you can save them to savor for later by setting `max_paths` to some reasonable level which will abort and proceed to the next label after the algorithm detects that at least that many paths will be needed.
- Parallel distributes work across connected components and is generally a good idea if you have the cores and memory. Not only does it make single runs proceed faster, but you can also practically use a much larger context; that improves soma processing as they are less likely to be cut off. The Preamble of the algorithm (detailed below) is still single threaded at the moment, so task latency increases with size.
- If `parallel_chunk_size` is set very low (e.g. < 10) during parallel operation, interprocess communication can become a significant overhead. Try raising this value.
## Motivation

The connectomics field commonly generates very large densely labeled volumes of neural tissue. Skeletons are one dimensional representations of two or three dimensional objects. They have many uses, a few of which are visualization of neurons, calculating global topological features, rapidly measuring electrical distances between objects, and imposing tree structures on neurons (useful for computation and user interfaces). There are several ways to compute skeletons and a few ways to define them [4]. After some experimentation, we found that the TEASAR [1] approach gave fairly good results. Other approaches include topological thinning ("onion peeling") and finding the centerline described by maximally inscribed spheres. Ignacio Arganda-Carreras, an alumnus of the Seung Lab, wrote a topological thinning plugin for Fiji called Skeletonize3d.
There are several implementations of TEASAR used in the connectomics field [3][5]; however, it is commonly understood that implementations of TEASAR are slow and can use tens of gigabytes of memory. Our goal of quickly skeletonizing all labels in a petavoxel scale image made clear that existing sparse implementations are impractical. While adapting a sparse approach to a cloud pipeline, we noticed inefficiencies in the repeated evaluation of the Euclidean Distance Transform (EDT), the repeated evaluation of the connected components algorithm, the construction of the graph used by Dijkstra's algorithm where the edges are implied by the spatial relationships between voxels, the memory cost, quadratic in the number of voxels, of representing a graph that is implicit in the image, the unnecessarily large data type used to represent relatively small cutouts, and the repeated downloading of overlapping regions. We also found that the naive implementation of TEASAR's "rolling invalidation ball" unnecessarily reevaluated large numbers of voxels in a way that could be loosely characterized as quadratic in the skeleton path length.
We further found that commodity implementations of the EDT supported only binary images. We were unable to find any available Python or C++ libraries for performing Dijkstra's shortest path on an image. Commodity implementations of connected components algorithms for images supported only binary images. Therefore, several libraries were devised to remedy these deficits (see Related Projects).
## Why TEASAR?

TEASAR: Tree-structure Extraction Algorithm for Accurate and Robust skeletons, a 2000 paper by M. Sato and others [1], is a member of a family of algorithms that transform two and three dimensional structures into a one dimensional "skeleton" embedded in that higher dimension. One might conceive of a skeleton as extracting a stick figure drawing from a binary image. This problem is more difficult than it might seem. There are different situations one must consider when making such a drawing. For example, a stick drawing of a banana might merely be a curved centerline and a drawing of a doughnut might be a closed loop. In our case of analyzing neurons, sometimes we want the skeleton to include spines, short protrusions from dendrites that usually have synapses attached, and sometimes we want only to characterize the run length of the main trunk of a neurite.

Additionally, data quality issues can be challenging as well. If one is skeletonizing a 2D image of a doughnut, but the angle were sufficiently declinated from the ring's orthogonal axis, would it even be possible to perform this task accurately? In a 3D case, if there are breaks or mergers in the labeling of a neuron, will the algorithm function sensibly? These issues are common in both manual and automatic image segmentations.

In our problem domain of skeletonizing neurons from anisotropic voxel labels, our chosen algorithm should produce tree structures, handle fine or coarse detail extraction depending on the circumstances, handle voxel anisotropy, and be reasonably efficient in CPU and memory usage. TEASAR fulfills these criteria. Notably, TEASAR doesn't guarantee the centeredness of the skeleton within the shape, but it makes an effort. The basic TEASAR algorithm is known to cut corners around turns and branch too early. A 2001 paper by members of the original TEASAR team describes a method for reducing the early branching issue on page 204, section 4.2.2 [2].
## TEASAR Derived Algorithm

We implemented TEASAR but made several deviations from the published algorithm in order to improve path centeredness, increase performance, handle bulging cell somas, and enable efficient chunked evaluation of large images. We opted not to implement the gradient vector field step from [2] as our implementation is already quite fast. The paper claims a reduction of 70-85% in input voxels, so it might be worth investigating.

In order to work with images that contain many labels, our general strategy is to perform as many actions as possible in such a way that all labels are treated in a single pass. Several of the component algorithms (e.g. connected components, euclidean distance transform) in our implementation can take several seconds per pass, so it is important that they not be run hundreds or thousands of times. A large part of the engineering contribution of this package lies in the efficiency of these operations, which reduces the runtime from the scale of hours to minutes.

Given a 3D labeled voxel array, I, with N >= 0 labels, and an ordered triple describing voxel anisotropy A, our algorithm can be divided into three phases, the preamble, skeletonization, and finalization, in that order.
### I. Preamble

The Preamble takes a 3D image containing N labels and efficiently generates the connected components, distance transform, and bounding boxes needed by the skeletonization phase.

1. To enhance performance, if N is 0, return an empty set of skeletons.
2. Label the M connected components, I_{cc}, of I.
3. To save memory, renumber the connected components in order from 1 to M. Adjust the data type of the new image to the smallest uint type that will contain M and overwrite I_{cc}.
4. Generate a mapping of the renumbered I_{cc} to I to assign meaningful labels to skeletons later on and delete I to save memory.
5. Compute E, the multi-label anisotropic Euclidean Distance Transform of I_{cc} given A. E treats all interlabel edges as transform edges, but not the boundaries of the image. Black pixels are considered background.
6. Gather a list, L_{cc}, of unique labels from I_{cc} and threshold which ones to process based on the number of voxels they represent to remove "dust".
7. In one pass, compute the list of bounding boxes, B, corresponding to each label in L_{cc}.
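The renumbering step can be sketched in plain numpy (kimimaro delegates this to fastremap; `renumber` here is a simplified, hypothetical stand-in): map the M component ids onto 1..M and store the result in the smallest unsigned integer dtype that can represent M.

```python
import numpy as np

# Hedged sketch of the renumber-and-shrink-dtype step.
def renumber(cc):
    ids = [i for i in np.unique(cc) if i != 0]  # 0 stays background
    M = len(ids)
    dtype = next(dt for dt in (np.uint8, np.uint16, np.uint32, np.uint64)
                 if M <= np.iinfo(dt).max)
    out = np.zeros(cc.shape, dtype=dtype)
    for new, old in enumerate(ids, start=1):
        out[cc == old] = new
    return out

# Two large 32-bit labels collapse into one byte per voxel.
cc = np.array([[0, 70000], [70000, 123456]], dtype=np.uint32)
out = renumber(cc)
print(out.dtype, out.max())  # uint8 2
```

The per-label boolean masking loop here is quadratic in the number of labels; fastremap exists precisely because a single-pass remap is needed at scale.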
### II. Skeletonization

In this phase, we extract the tree structured skeleton from each connected component label. Below, we reference variables defined in the Preamble. For clarity, we omit the soma specific processing and hold `fix_branching=True`.

For each label l in L_{cc} and B...

1. Extract I_{l}, the cropped binary image tightly enclosing l, from I_{cc} using B_{l}.
2. Using I_{l} and B_{l}, extract E_{l} from E. E_{l} is the cropped, tightly enclosed EDT of l. This is much faster than recomputing the EDT for each binary image.
3. Find an arbitrary foreground voxel and, using that point as a source, compute the anisotropic euclidean distance field for I_{l}. The coordinate of the maximum value is now "the root" r.
4. From r, compute the euclidean distance field and save it as the distance from root field D_{r}.
5. Compute the penalized distance from root field P_{r} = `pdrf_scale` * (1 - E_{l} / max(E_{l}))^`pdrf_exponent` + D_{r} / max(D_{r}).
6. While I_{l} contains foreground voxels:
    1. Identify a target coordinate, t, as the foreground voxel with maximum distance in D_{r} from r.
    2. Draw the shortest path p from r to t considering the voxel values in P_{r} as edge weights.
    3. For each vertex v in p, extend an invalidation cube of physical side length computed as `scale` * E_{l}(v) + `const` and convert any foreground pixels in I_{l} that overlap with these cubes to background pixels.
    4. (Only if `fix_branching=True`) For each vertex coordinate v in p, set P_{r}(v) = 0.
    5. Append p to a list of paths for this label.
7. Using E_{l}, extract the distance to the nearest boundary each vertex in the skeleton represents.
8. For each raw skeleton extracted from I_{l}, translate the vertices by B_{l} to correct for the translation the cropping operation induced.
9. Multiply the vertices by the anisotropy A to place them in physical space.
If soma processing is considered, we modify the root (r) search process as follows:

1. If max(E_{l}) > `soma_detection_threshold`...
    1. Fill topological holes in I_{l}. Somata are large regions that often have dust from imperfect automatic labeling methods.
    2. Recompute E_{l} from this cleaned up image.
    3. If max(E_{l}) > `soma_acceptance_threshold`, divert to soma processing mode.
2. If in soma processing mode, continue; else go to step 3 in the algorithm above.
3. Set r to the coordinate corresponding to max(E_{l}).
4. Create an invalidation sphere of physical radius `soma_invalidation_scale` * max(E_{l}) + `soma_invalidation_const` and erase foreground voxels from I_{l} contained within it. This helps prevent errant paths from being drawn all over the soma.
5. Continue from step 4 in the above algorithm.
### III. Finalization

In the final phase, we agglomerate the disparate connected component skeletons into single skeletons and assign them labels corresponding to the input image. This step is artificially broken out compared to how intermingled its implementation is with skeletonization, but it's conceptually separate.
## Deviations from TEASAR

There were several places where we took a different approach than called for by the TEASAR authors.
### Using DAF for Targets, PDRF for Pathfinding

The original TEASAR algorithm defines the Penalized Distance from Root voxel Field (PDRF, P_{r} above) as:

PDRF = 5000 * (1 - DBF / max(DBF))^16 + DAF

DBF is the Distance from Boundary Field (E_{l} above) and DAF is the Distance from Any voxel Field (D_{r} above).

We found the addition of the DAF tended to perturb the skeleton path from the centerline better described by the inverted DBF alone. We also found it helpful to modify the constant and exponent to tune cornering behavior. Initially, we completely stripped out the addition of the DAF from the PDRF, but this introduced a different kind of problem. The exponentiation of the PDRF caused floating point values to collapse in wide open spaces. This made the skeletons go crazy as they traced out a path described by floating point errors.

The DAF provides a very helpful gradient to follow between the root and the target voxel; we just don't want that gradient to knock the path off the centerline. Therefore, in light of the fact that the PDRF base field is very large, we add the normalized DAF, which is just enough to overwhelm floating point errors and provide direction in wide tubes and bulges.

The original paper also called for selecting targets using the max(PDRF) foreground values. However, this is a bit strange since the PDRF values are dominated by boundary effects rather than a pure distance metric. Therefore, we select targets from the max(DAF) foreground value.
### Zero Weighting Previous Paths (`fix_branching=True`)
The 2001 skeletonization paper [2] called for correcting early forking by computing a DAF using already computed path vertices as field sources. This allows Dijkstra's algorithm to trace the existing path cost free and diverge from it at a closer point to the target.
As we have strongly deemphasized the role of the DAF in Dijkstra pathfinding, computing this field is unnecessary and we only need to set the PDRF to zero along the path of existing skeletons to achieve this effect. This saves us an expensive repeated DAF calculation per path.

However, we still incur a substantial cost for taking this approach because we had been computing a Dijkstra "parental field" that recorded the shortest path to the root from every foreground voxel. We then used this saved result to rapidly compute all paths. However, as this zero weighting modification makes successive calculations dependent upon previous ones, we need to compute Dijkstra's algorithm anew for each path.
### Non-Overlapped Chunked Processing (`fix_borders=True`)
When processing large volumes, a sensible approach for mass producing skeletons is to chunk the volume, process the chunks independently, and merge the resulting skeleton fragments at the end. However, this is complicated by the "edge effect" induced by a loss of context which makes it impossible to expect the endpoints of skeleton fragments produced by adjacent chunks to align. In contrast, it is easy to join mesh fragments because the vertices of the edge of mesh fragments lie at predictable identical locations given one pixel of overlap.
Previously, we had used 50% overlap to join adjacent skeleton fragments which increased the compute cost of skeletonizing a large volume by eight times. However, if we could force skeletons to lie at predictable locations on the border, we could use single pixel overlap and copy the simple mesh joining approach. As an (incorrect but useful) intuition for how one might go about this, consider computing the centroid of each connected component on each border plane and adding that as a required path target. This would guarantee that both sides of the plane connect at the same pixel. However, the centroid may not lie inside of nonconvex hulls so we have to be more sophisticated and select some real point inside of the shape.
To this end, we again repurpose the euclidean distance transform and apply it to each of the six planes of connected components and select the maximum value as a mandatory target. This works well for many types of objects that contact a single plane and have a single maximum. However, we must treat the corners of the box and shapes that have multiple maxima.
To handle shapes that contact multiple sides of the box, we simply assign targets to all connected components. If this introduces a cycle in postprocessing, we already have cycle removing code to handle it in Igneous. If it introduces tiny useless appendages, we also have code to handle this.
If a shape has multiple distance transform maxima, it is important to choose the same pixel without needing to communicate between spatially adjacent tasks, which may run at different times on different machines. Additionally, the same plane on adjacent tasks has the coordinate system flipped. One simple approach might be to pick the coordinate with minimum x and y (or some other coordinate based criterion) in one of the coordinate frames, but this requires tracking the flips on all six planes and is annoying. Instead, we use a series of coordinate-free, topology based filters, which is more fun, more effort efficient, and picks something reasonable looking. A valid criticism of this approach is that it will fail on a perfectly symmetrical object, but these objects are rare in biological data.
We apply a series of filters and pick the point based on the first filter it passes:

1. The voxel closest to the centroid of the current label.
2. The voxel closest to the centroid of the image plane.
3. Closest to a corner of the plane.
4. Closest to an edge of the plane.
5. The previously found maxima.
It is important that filter #1 be based on the shape of the label so that kinks are minimized for convex hulls. For example, originally we used only filters two through five, but this caused skeletons for neurites located away from the center of a chunk to suddenly jink towards the center of the chunk at chunk boundaries.
Rolling Invalidation Cube
The original TEASAR paper calls for a "rolling invalidation ball" that erases foreground voxels in step 6(iii). A naive implementation of this ball is very expensive as each voxel in the path requires its own ball, and many of these voxels overlap. In some cases, it is possible that the whole volume will need to be pointlessly reevaluated for every voxel along the path from root to target. While it's possible to special case the worst case, in the more common general case, a large amount of duplicate effort is expended.
Therefore, we applied an algorithm using topological cues to perform the invalidation operation in linear time. For simplicity of implementation, we substituted a cube for the sphere. The function name roll_invalidation_cube is intended to evoke this awkwardness, though it doesn't appear to have mattered in practice.
The two-pass algorithm is as follows. Given a binary image I, a skeleton S, and a set of vertices V:
1. Let B_{v} be the set of bounding boxes that inscribe the spheres indicated by the TEASAR paper.
2. Allocate a 3D signed integer array, T, with the size and dimensions of I, representing the topology. T is initially set to all zeros.
3. For each B_{v}:
   1. Set T(p) += 1 for all points p on B_{v}'s left boundary along the x-axis.
   2. Set T(p) -= 1 for all points p on B_{v}'s right boundary along the x-axis.
4. Compute the bounding box B_{global} that inscribes the union of all B_{v}.
5. A point p travels along the x-axis for each row of B_{global}, starting on the YZ plane:
   1. Set integer coloring = 0.
   2. At each index, coloring += T(p).
   3. If coloring > 0 or T(p) is nonzero (we're on the leaving edge), we are inside an invalidation cube and convert foreground voxels into background voxels.
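The passes above can be sketched with numpy, using a cumulative sum along the x-axis in place of the traveling point p. This is a simplified illustration under assumed conventions (the function name `invalidate_cubes` and the half-open box tuples are hypothetical, not Kimimaro's API):

```python
import numpy as np

def invalidate_cubes(image, boxes):
    """Erase foreground voxels inside a union of cubes in linear time.

    image: 3D boolean array of foreground voxels (modified in place)
    boxes: list of (x0, x1, y0, y1, z0, z1) half-open bounding boxes
    """
    T = np.zeros(image.shape, dtype=np.int32)
    # Pass 1: mark each cube's entering face (+1) and leaving face (-1)
    # along the x-axis.
    for (x0, x1, y0, y1, z0, z1) in boxes:
        T[x0, y0:y1, z0:z1] += 1
        T[x1 - 1, y0:y1, z0:z1] -= 1
    # Pass 2: the running sum along x is the "coloring"; a voxel is inside
    # at least one cube when coloring > 0 or when T is nonzero (the
    # leaving edge, where the sum has already dropped back to zero).
    coloring = np.cumsum(T, axis=0)
    inside = (coloring > 0) | (T != 0)
    image[inside] = False
    return image
```

Each voxel is visited a constant number of times regardless of how many cubes overlap it, which is the source of the linear-time bound.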
Related Projects
Several classic algorithms had to be specially tuned to make this module possible.
- edt: A single-pass, multi-label, anisotropy-supporting euclidean distance transform implementation.
- dijkstra3d: Dijkstra's shortest-path algorithm defined on 26-connected 3D images. This avoids the time cost of edge generation and the wasted memory of a graph representation.
- connected-components-3d: A connected components implementation defined on 26-connected 3D images with multiple labels.
- fastremap: Allows high-speed renumbering of labels from 1 in a 3D array in order to reduce memory consumption caused by unnecessarily large 32- and 64-bit labels.
- fill_voids: High-speed binary_fill_holes.
- xs3d: Cross section analysis of 3D images.
This module was originally designed to be used with CloudVolume and Igneous.
- CloudVolume: Serverless client for reading and writing petascale chunked images of neural tissue, meshes, and skeletons.
- Igneous: Distributed computation for visualizing connectomics datasets.
Some of the TEASAR modifications used in this package were first demonstrated by Alex Bae.
- skeletonization: Python implementation of modified TEASAR for sparse labels.
Credits
Alex Bae developed the precursor skeletonization package and several modifications to TEASAR that we use in this package. Alex also developed the postprocessing approach used for stitching skeletons using 50% overlap. Will Silversmith adapted these techniques for mass production, refined several basic algorithms for handling thousands of labels at once, and rewrote them into the Kimimaro package. Will added trickle DAF, zero-weighted previously explored paths, and fixing borders to the algorithm. A.M. Wilson and Will designed the nucleus/soma "avocado" fuser. Forrest Collman added parameter flexibility and helped tune DAF computation performance. Sven Dorkenwald and Forrest both provided helpful discussions and feedback. Peter Li redesigned the target selection algorithm to avoid bilinear performance on complex cells.
Acknowledgments
We are grateful to our partners in the Seung Lab, the Allen Institute for Brain Science, and the Baylor College of Medicine for providing the data and problems necessitating this library.
This research was supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior / Interior Business Center (DoI/IBC) contract number D16PC0005, NIH/NIMH (U01MH114824, U01MH117072, RF1MH117815), NIH/NINDS (U19NS104648, R01NS104926), NIH/NEI (R01EY027036), and ARO (W911NF-12-1-0594). The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/IBC, or the U.S. Government. We are grateful for assistance from Google, Amazon, and Intel.
Papers Using Kimimaro
Please cite Kimimaro using the CITATION.cff file located in this repository.
The list below is not comprehensive; it was sourced from collaborators or found using internet searches, and inclusion does not constitute an endorsement except to the extent that the authors used Kimimaro in their work.
- A.M. Wilson, R. Schalek, A. Suissa-Peleg, T.R. Jones, S. Knowles-Barley, H. Pfister, J.M. Lichtman. "Developmental Rewiring between Cerebellar Climbing Fibers and Purkinje Cells Begins with Positive Feedback Synapse Addition". Cell Reports. Vol. 29, Iss. 9, November 2019. Pgs. 2849-2861.e6. doi: 10.1016/j.celrep.2019.10.081 (link)
- S. Dorkenwald, N.L. Turner, T. Macrina, K. Lee, R. Lu, J. Wu, A.L. Bodor, A.A. Bleckert, D. Brittain, N. Kemnitz, W.M. Silversmith, D. Ih, J. Zung, A. Zlateski, I. Tartavull, S. Yu, S. Popovych, W. Wong, M. Castro, C.S. Jordan, A.M. Wilson, E. Froudarakis, J. Buchanan, M. Takeno, R. Torres, G. Mahalingam, F. Collman, C. Schneider-Mizell, D.J. Bumbarger, Y. Li, L. Becker, S. Suckow, J. Reimer, A.S. Tolias, N. Maçarico da Costa, R.C. Reid, H.S. Seung. "Binary and analog variation of synapses between cortical pyramidal neurons". bioRxiv. December 2019. doi: 10.1101/2019.12.29.890319 (link)
- N.L. Turner, T. Macrina, J.A. Bae, R. Yang, A.M. Wilson, C. Schneider-Mizell, K. Lee, R. Lu, J. Wu, A.L. Bodor, A.A. Bleckert, D. Brittain, E. Froudarakis, S. Dorkenwald, F. Collman, N. Kemnitz, D. Ih, W.M. Silversmith, J. Zung, A. Zlateski, I. Tartavull, S. Yu, S. Popovych, S. Mu, W. Wong, C.S. Jordan, M. Castro, J. Buchanan, D.J. Bumbarger, M. Takeno, R. Torres, G. Mahalingam, L. Elabbady, Y. Li, E. Cobos, P. Zhou, S. Suckow, L. Becker, L. Paninski, F. Polleux, J. Reimer, A.S. Tolias, R.C. Reid, N. Maçarico da Costa, H.S. Seung. "Multiscale and multimodal reconstruction of cortical structure and function". bioRxiv. October 2020. doi: 10.1101/2020.10.14.338681 (link)
- P.H. Li, L.F. Lindsey, M. Januszewski, Z. Zheng, A.S. Bates, I. Taisz, M. Tyka, M. Nichols, F. Li, E. Perlman, J. Maitin-Shepard, T. Blakely, L. Leavitt, G.S.X.E. Jefferis, D. Bock, V. Jain. "Automated Reconstruction of a Serial-Section EM Drosophila Brain with Flood-Filling Networks and Local Realignment". bioRxiv. October 2020. doi: 10.1101/605634 (link)
References
- M. Sato, I. Bitter, M.A. Bender, A.E. Kaufman, and M. Nakajima. "TEASAR: Tree-structure Extraction Algorithm for Accurate and Robust Skeletons". Proc. 8th Pacific Conf. on Computer Graphics and Applications. Oct. 2000. doi: 10.1109/PCCGA.2000.883951 (link)
- I. Bitter, A.E. Kaufman, and M. Sato. "Penalized-distance volumetric skeleton algorithm". IEEE Transactions on Visualization and Computer Graphics. Vol. 7, Iss. 3, Jul-Sep 2001. doi: 10.1109/2945.942688 (link)
- T. Zhao, S. Plaza. "Automatic Neuron Type Identification by Neurite Localization in the Drosophila Medulla". Sept. 2014. arXiv:1409.1892 [q-bio.NC] (link)
- A. Tagliasacchi, T. Delame, M. Spagnuolo, N. Amenta, A. Telea. "3D Skeletons: A State-of-the-Art Report". Computer Graphics Forum. Vol. 35, Iss. 2, May 2016. doi: 10.1111/cgf.12865 (link)
- P. Li, L. Lindsey, M. Januszewski, Z. Zheng, A. Bates, I. Taisz, M. Tyka, M. Nichols, F. Li, E. Perlman, J. Maitin-Shepard, T. Blakely, L. Leavitt, G. Jefferis, D. Bock, V. Jain. "Automated Reconstruction of a Serial-Section EM Drosophila Brain with Flood-Filling Networks and Local Realignment". bioRxiv. April 2019. doi: 10.1101/605634 (link)
- M.M. McKerns, L. Strand, T. Sullivan, A. Fang, M.A.G. Aivazis. "Building a framework for predictive science". Proceedings of the 10th Python in Science Conference. 2011. http://arxiv.org/pdf/1202.1056
- M. McKerns, M. Aivazis. "pathos: a framework for heterogeneous computing". 2010. http://trac.mystic.cacr.caltech.edu/project/pathos