standard_transforms
Orient and scale points in EM datasets consistently and easily.
When working with EM data, the orientation of the dataset often does not match the desired orientation in space. For example, in cortical data you might want "down" to correspond to the direction orthogonal to the pial surface. This package includes pre-baked affine transforms for two datasets, Minnie65 and V1dd, to convert from voxel or nanometer coordinates to a consistently oriented frame in microns.
Install via pip install standard-transform.
Usage
Transforms
At its simplest, we import the transform we want, initialize an object, and then are ready to rotate, scale, and translate away!
Let's start with going from nanometer space in Minnie to the oriented space.
We can make the pre-baked transform by importing one of the generating functions, in this case minnie_transform_nm.
from standard_transform import minnie_transform_nm
tform_nm = minnie_transform_nm()
There are three main functions: apply, apply_project, and apply_dataframe.
All functions transform an n x 3 array or pandas series with 3-element vectors to points in a new space, with the y-axis oriented along the axis from pia to white matter, units in microns, and the pial surface at approximately y=0.
Using apply alone returns another n x 3 array, while apply_project takes both an axis and points and returns just the values along that axis.
For example, if you have skeleton vertices in nm, you can produce transformed ones with:
new_vertices = tform_nm.apply(sk.vertices)
while if you just want the depth:
sk_depth = tform_nm.apply_project('y', sk.vertices)
These two functions can take either 3-element points, n x 3 arrays, or a column of a pandas dataframe with 3-element vectors in each row.
The third function is specifically for use with dataframes, but offers a bit more flexibility. Instead of passing the series, you pass the column name and the dataframe itself.
pts_out = tform_nm.apply_dataframe(column_name, df)
Why is this useful when you can just use tform.apply(df[column_name])?
It is often handy to work with dataframes with split position columns, where x, y, and z coordinates are in three separate columns.
If they are named {column_name}_x, {column_name}_y, and {column_name}_z, then the apply_dataframe function will detect this split-position format automatically and assemble points from the three columns.
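As a minimal sketch of the split-column case (the dataframe and its pt_position columns here are made up for illustration):

import pandas as pd
from standard_transform import minnie_transform_nm

tform_nm = minnie_transform_nm()

# Hypothetical dataframe with split position columns, in nm
df = pd.DataFrame({
    'pt_position_x': [250000, 260000],
    'pt_position_y': [400000, 410000],
    'pt_position_z': [800000, 805000],
})

# Assembles points from pt_position_x/_y/_z and transforms them to oriented microns
pts_um = tform_nm.apply_dataframe('pt_position', df)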
To get the projection functionality with the apply_dataframe method, pass the axis as an additional argument, e.g.
pts_out_x = tform_nm.apply_dataframe(column_name, df, projection='x')
Finally, if you have points in the transformed space and want to back them out to the original coordinates, you can invert the transform of any TransformSequence.
pts_orig = tform.invert(pts_transformed)
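As a quick sketch (the point below is an arbitrary nm coordinate), a transform followed by its inverse should recover the original points up to floating-point error:

import numpy as np
from standard_transform import minnie_transform_nm

tform_nm = minnie_transform_nm()
pts_nm = np.array([[250000, 400000, 800000]])   # arbitrary example point in nm

pts_um = tform_nm.apply(pts_nm)       # into the oriented micron space
pts_back = tform_nm.invert(pts_um)    # back out to the original nm coordinates
assert np.allclose(pts_nm, pts_back)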
Streamlines
Because mammalian cortex is strongly organized in a laminar manner, it's often useful to distinguish radial distance from distance along the depth axis. However, when comparing the radial distance of points at different depths, the situation is complicated because the orientation of the depth axis, which we call a "streamline," is curvilinear and changes with depth. The Streamline class helps with computing radial distances between arbitrary locations, as well as with storing particular streamlines in a consistent way.
Currently we only offer streamlines for the v1dd dataset, but a minnie65 one is forthcoming. Like transforms, you must specify whether the original data is in nm or voxels, and each streamline is associated with a particular transform in order to use its y-axis orientation.
from standard_transform import v1dd_streamline_nm, v1dd_streamline_vx
A Streamline object has a few key functions.
To get the x,z locations of a streamline passing through a core point xyz at a depth or depths y in the post-transform space:
streamline.streamline_at(xyz, y)
The argument return_as_point can be set to True to return an n x 3 array instead of the x and z components separately.
To translate the points defining the streamline to intersect with a point xyz in the pre-transform space:
streamline.streamline_points_tform(xyz)
To compute the radial distance in the post-transform space from two points in the pre-transform space:
streamline.radial_distance(xyz0, xyz1)
One can also get the curvilinear distance, or depth, along the streamline between a point in either the pre- or post-transform space and y=0 in the post-transform space:
streamline.depth_along(xyz, depth_from=0)
Similarly, you can put all of these together to get a point in the curvilinear space of the streamline, akin to cylindrical coordinates, where the x axis is radial distance and the y axis is depth along the streamline. For each point, an equivalent location in x,z space is found with the same radial distance and angle as the original x and z.
xyz_rad = streamline.radial_points(xyz0, pts)
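Putting these together, a minimal sketch with the v1dd nm streamline (the coordinates below are arbitrary example points in the pre-transform nm space, not real annotations):

import numpy as np
from standard_transform import v1dd_streamline_nm

streamline = v1dd_streamline_nm()

xyz0 = np.array([300000, 400000, 500000])   # e.g. a soma location, in nm
xyz1 = np.array([320000, 600000, 510000])   # e.g. a synapse location, in nm

# Streamline locations through xyz0 at several post-transform depths, as an n x 3 array
sl_pts = streamline.streamline_at(xyz0, np.array([0, 100, 200]), return_as_point=True)

# Radial distance between the two points, and depth of xyz1 along the streamline, in microns
r = streamline.radial_distance(xyz0, xyz1)
d = streamline.depth_along(xyz1)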
NOTE! While better than nothing, the Minnie65 streamline is much more accurate on the VISp side of the dataset (low x values) than on the HVA side (high x values). The right approach will involve creating a streamline that interpolates values as a function of x and z instead of being constant. To do this, we need to either create many more streamlines by hand (not too hard, but time-consuming) or automatically generate streamlines from skeletons. That will be the ideal solution, but for now, treat the streamline as a good approximation for the VISp side of the dataset.
Transforming skeletons
Standard transform can transform MeshParty skeletons, MeshWork objects, and MeshWork annotations using streamline or affine transformation objects. Both streamline and transform objects have similar functionality.
To transform the vertices of a skeleton or meshwork object with the affine transform:
# skeleton
skel_transformed = tform.apply_skeleton(skel)
# meshwork
mw_transformed = tform.apply_meshwork_vertices(mw)
These calls return a new object with the transformed vertices. Use the argument inplace=True to modify the original object.
For a streamline transform, you can straighten the skeleton along the streamline coordinates. Note that this will relocate the soma or other root location to (0,0,0).
# Streamline transform
skel_straightened = streamline.transform_skeleton_vertices(skel)
mw_straightened = streamline.transform_meshwork_vertices(mw)
Similarly, this will return a new object by default, but you can set inplace=True to modify the original object.
By default, this will use the root location of the skeleton as the anchor for the streamline. To use a different point, you can specify a root_loc. Other arguments follow the function streamline.radial_points.
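A minimal sketch, assuming skel is a MeshParty skeleton already loaded with vertices in nm and soma_pt_nm is a separately known anchor point (both are placeholders here):

from standard_transform import v1dd_streamline_nm

streamline = v1dd_streamline_nm()

# skel is assumed to be an existing MeshParty skeleton with vertices in nm
skel_straight = streamline.transform_skeleton_vertices(skel)

# Anchor at a different point and modify the skeleton in place instead
streamline.transform_skeleton_vertices(skel, root_loc=soma_pt_nm, inplace=True)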
Meshwork objects can have annotations stored in dataframes in addition to vertices. To transform these points, specify both the name of the annotation table and the columns to transform as a dictionary.
# Affine transform
mw_anno_transformed = tform.apply_meshwork_annotations(mw, anno_dict)
# Streamline transform
mw_anno_straightened = streamline.transform_meshwork_annotations(mw, anno_dict)
Here, anno_dict is a dictionary whose keys are table names and whose values are one or more dataframe columns, where each row of a column contains a 3-element point to transform. For example, to transform the ctr_pt_position column of the pre_syn table, you would pass anno_dict={'pre_syn': 'ctr_pt_position'}.
The same inplace argument is available as above, as well as the radial_points arguments for the streamline version.
Note that if you want to transform both annotations and morphology for meshwork files, you should call both the vertex transform and annotation transform functions, as in the sketch below.
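A minimal sketch, assuming mw is an existing MeshWork object in nm whose pre_syn table has a ctr_pt_position column:

from standard_transform import minnie_transform_nm

tform = minnie_transform_nm()

# Transform the morphology first, then the annotation points
mw_t = tform.apply_meshwork_vertices(mw)
mw_t = tform.apply_meshwork_annotations(mw_t, {'pre_syn': 'ctr_pt_position'})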
Datasets
Datasets are a convenient way to store a transform and streamline together, and are the easiest way to get started.
There are two datasets currently available, minnie65 and v1dd.
To get a dataset, simply import it and initialize it.
Datasets have a transform and streamline, and can be used to transform points or get streamline information.
For example, to transform a particular location in v1dd:
from standard_transform import minnie65_ds, v1dd_ds
v1dd_ds.transform_nm.apply(xyz)
Or to get the streamline at a particular point:
v1dd_ds.streamline_nm.radial_distance(xyz0, xyz1)
A particular resolution can be used by passing it as an argument to dataset.transform_res(voxel_resolution) or dataset.streamline_res(voxel_resolution).
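For example, a short sketch using a hypothetical [8, 8, 40] nm voxel resolution and a made-up xyz_vx array of points in those voxel units, following the usage shown above:

from standard_transform import minnie65_ds

# Get a transform that expects points at an 8x8x40 nm/voxel resolution
tform_8 = minnie65_ds.transform_res([8, 8, 40])
pts_um = tform_8.apply(xyz_vx)   # xyz_vx is a placeholder n x 3 array of voxel coordinates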
Minnie65
- minnie_transform_nm: Transform from nanometer units in the original Minnie65 space to microns in a space where the pial surface is flat in x and z along y=0.
- minnie_transform_vx: Transform from voxel units in the original Minnie65 space to microns in a space where the pial surface is flat in x and z along y=0. By default, minnie_transform_vx() assumes a voxel size of [4,4,40] nm/voxel, but specifying a voxel resolution (for example, with minnie_transform_vx(voxel_resolution=[8,8,40])) will use a different scale.

Both functions will also take dataframe columns, for example tform.apply(df['pt_position']).
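As a sketch of the default scaling (the voxel coordinate below is made up), the voxel and nanometer transforms should agree once the point is converted between units:

import numpy as np
from standard_transform import minnie_transform_nm, minnie_transform_vx

tform_nm = minnie_transform_nm()
tform_vx = minnie_transform_vx()            # default [4, 4, 40] nm/voxel

pt_vx = np.array([[60000, 40000, 20000]])   # example point in voxels
pt_nm = pt_vx * np.array([4, 4, 40])        # the same point in nm

# Both routes land at the same location in the oriented micron space
assert np.allclose(tform_vx.apply(pt_vx), tform_nm.apply(pt_nm))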
V1dd
- v1dd_transform_nm: Transform from nanometer units in the original V1dd space to microns in a space where the pial surface is flat in x and z along y=0.
- v1dd_transform_vx: Transform from voxel units in the original V1dd space to microns in a space where the pial surface is flat in x and z along y=0. By default, v1dd_transform_vx() assumes a voxel size of [9,9,45] nm/voxel, but specifying a voxel resolution (for example, with v1dd_transform_vx(voxel_resolution=[1000, 1000, 1000])) will use a different scale.
Identity
- identity_transform: This transform returns the input data unchanged (although perhaps axis-projected), but can be useful for compatibility purposes.
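As a small sketch (assuming identity_transform is called like the other generating functions, with no arguments), it lets code written against a transform object run unchanged on data that is already in the desired space:

import numpy as np
from standard_transform import identity_transform

tform = identity_transform()
pts = np.array([[1.0, 2.0, 3.0]])

out = tform.apply(pts)                  # points pass through unchanged
depths = tform.apply_project('y', pts)  # axis projection still works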