
A suite for postprocessing time-series extracted from videos of freely moving rodents using DeepLabCut

Reason this release was yanked: pre-release

Project description




You can use this package to either extract pre-defined motifs from the time series (such as time-in-zone, climbing, basic social interactions) or to embed your data into a sequence-aware latent space to extract meaningful motifs in an unsupervised way! Both of these can be used within the package, for example, to automatically compare user-defined experimental groups.

How do I start?

Installation:

Open a terminal (with Python > 3.6 installed) and type:

pip install deepof

Before we delve in:

To start, create a folder for your project with at least two subdirectories inside, called 'Videos' and 'Tables'. The former should contain the videos you're working with (either your original data or the labeled ones obtained from DLC); the latter should have all the tracking tables you got from DeepLabCut, in either .h5 or .csv format. If you don't want to use DLC yourself, don't worry: a compatible pre-trained model for mice will be released soon! The expected layout looks like this (a scaffolding sketch follows the tree):

   my_project
   ├── Videos -> all tagged videos
   ├── Tables -> all tracking tables (.h5 or .csv)
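
If you prefer to set this up from Python, here is a minimal sketch using only the standard library (the project path is an assumption; adjust it to wherever you keep your data):

from pathlib import Path

# Hypothetical project location; change this to your own setup
project_dir = Path("./my_project")

# Create the two subdirectories deepof expects, skipping any that already exist
for subdir in ("Videos", "Tables"):
    (project_dir / subdir).mkdir(parents=True, exist_ok=True)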

IMPORTANT: You should make sure that the tables and videos correspond to the same experiments. The file names of each video/table pair should match, which DLC handles by default; the sketch below shows one way to flag mismatches.
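
If you want to check the pairing yourself, something like this works, assuming the file stems match exactly (DLC-generated table names may carry a model suffix that you would need to strip first):

from pathlib import Path

# Collect the base names of all videos and all tracking tables
videos = {p.stem for p in Path("./my_project/Videos").iterdir() if p.is_file()}
tables = {p.stem for p in Path("./my_project/Tables").glob("*.h5")}
tables |= {p.stem for p in Path("./my_project/Tables").glob("*.csv")}

# Report any experiment that is missing its counterpart
print("Videos without tables:", videos - tables)
print("Tables without videos:", tables - videos)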

Basic usage:

The main module with which you'll interact is called deepof.data. Let's import it and create a project:

import deepof.data
my_project = deepof.data.Project(path="./my_project",
                                 arena_dims=380,        # diameter of the arena in millimeters
                                 arena_type="circular", # type of the filmed arena (optional). So far, only "circular" is valid
                                 smooth_alpha=0.99,     # smoothing coefficient (optional); see below
                                 frame_rate=25)         # frame rate of the videos in Hz (optional)

This command will create a deepof.data.Project object storing all the necessary information to start. The smooth_alpha parameter will control how much smoothing will be applied to your trajectories, using an exponentially weighted average. Values close to 0 apply a stronger smoothing, and values close to 1 a very light one. In practice, we recommend values between 0.95 and 0.99 if your trajectories are not too noisy. There are other things you can do here, but let's stick to the basics for now.
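
To build intuition for what smooth_alpha does, here is a standalone sketch of the same kind of exponentially weighted average using pandas (this only illustrates the weighting scheme, not deepof's internal code):

import numpy as np
import pandas as pd

# A noisy toy trajectory standing in for one tracked coordinate
rng = np.random.default_rng(0)
trajectory = pd.Series(np.sin(np.linspace(0, 10, 500)) + rng.normal(0, 0.3, 500))

# alpha close to 1 keeps the signal almost untouched (light smoothing);
# alpha close to 0 leans heavily on past values (strong smoothing)
light = trajectory.ewm(alpha=0.99).mean()
strong = trajectory.ewm(alpha=0.05).mean()

# The strongly smoothed series fluctuates far less than the light one
print(light.diff().abs().mean(), strong.diff().abs().mean())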

Once you have this, you can run your project using the .run() method, which will do quite a lot of computing under the hood (load your data, smooth your trajectories, and compute distances and angles). The returned object belongs to the deepof.data.Coordinates class.

my_project = my_project.run(verbose=True)

From here, you can do several things! But let's first explore how the results of those computations are stored. To extract trajectories, distances and/or angles, you can respectively type:

my_project_coords = my_project.get_coords(center=True, polar=False, speed=0, align="Nose", align_inplace=True)
my_project_dists  = my_project.get_distances(speed=0)
my_project_angles = my_project.get_angles(speed=0)

Here, the data are stored as deepof.data.TableDict instances. These are very similar to Python dictionaries, with experiment IDs as keys and pandas.DataFrame objects as values, plus a few extra methods for convenience.

As for the parameters in the code block above: center centers your data (it can be either a boolean or one of the body parts in your model, in which case the coordinate origin will be fixed to the position of that part); polar makes the .get_coords() method return polar instead of Cartesian coordinates; and speed indicates the derivation level to apply (0 is position-based, 1 speed, 2 acceleration, 3 jerk, etc.). Finally, align and align_inplace take care of aligning the animal position with the y Cartesian axis: if we center the data to "Center" and set align="Nose", align_inplace=True, all frames in the video will be aligned so that the Center-Nose axis stays fixed. This is useful to constrain the set of movements that our unsupervised methods can extract.
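
Since a TableDict behaves like a dictionary, you can iterate over it as you would any mapping. A minimal sketch, assuming plain dict-style access as described above:

# Each key is an experiment ID; each value is a pandas.DataFrame
for experiment_id, coords in my_project_coords.items():
    print(experiment_id, coords.shape)

# Individual experiments can be pulled out like dictionary entries
first_id = next(iter(my_project_coords.keys()))
first_table = my_project_coords[first_id]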

As mentioned above, the two main analyses you can run are supervised and unsupervised. They are executed by the .supervised_annotation() and .deep_unsupervised_embedding() methods of the deepof.data.Coordinates class, respectively.

supervised_annot = my_project.supervised_annotation()
gmvae_embedding  = my_project.deep_unsupervised_embedding()

The former returns a deepof.data.TableDict object, with one pandas.DataFrame per experiment containing a series of annotations. The latter is a bit more involved: it returns an array containing the encoding of the data per animal; another array with motif membership per time point (the probability that the animal is doing whatever each cluster represents at any given time); an abstract distribution (a multivariate Gaussian mixture) representing the extracted components; and a decoder you can use to generate samples from each of the extracted components (yeah, you get a generative model for free).
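
As a toy example of what you can do with the supervised output, a sketch like the following would give the fraction of time each behavior was detected in one experiment (this assumes the annotation columns are binary per-frame flags, which is an assumption rather than something stated above):

# Pick one experiment from the returned TableDict
exp_id, annotations = next(iter(supervised_annot.items()))

# If each column is a 0/1 flag per frame, the column mean is the
# fraction of frames in which that behavior was detected
time_budget = annotations.mean()
print(f"Behavior time budget for {exp_id}:")
print(time_budget.sort_values(ascending=False))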

That's it for this (very basic) introduction. More detailed documentation, tutorials and method explanation will follow, so stay tuned!


This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 813533



Download files

Download the file for your platform.

Source Distribution

deepof-0.1.75.tar.gz (1.3 MB)

Uploaded Source

Built Distribution


deepof-0.1.75-py3-none-any.whl (1.3 MB)

Uploaded Python 3

File details

Details for the file deepof-0.1.75.tar.gz.

File metadata

  • Download URL: deepof-0.1.75.tar.gz
  • Upload date:
  • Size: 1.3 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for deepof-0.1.75.tar.gz
Algorithm Hash digest
SHA256 e0152cba01e07cb49e24a18c135fb59c31d2d5b8c127539e4c006f0d998c1941
MD5 d55920f9a0405f359b3d3e2b3032f51b
BLAKE2b-256 550e8a945584e4c923ac66466e5b15a53dd5ed395d421a4a665bd4c91205c59f


File details

Details for the file deepof-0.1.75-py3-none-any.whl.

File metadata

  • Download URL: deepof-0.1.75-py3-none-any.whl
  • Upload date:
  • Size: 1.3 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.9.13

File hashes

Hashes for deepof-0.1.75-py3-none-any.whl
Algorithm Hash digest
SHA256 ea3bb922dfda6cbf795a460f323bf0a2f4b7eb4954b6f942d8b428031170ff2d
MD5 69cb844a38d707c20e384bef76da3ac5
BLAKE2b-256 fe369f65a8bf240d60477511b921e0c6be7664cbfe7621ccfb5673f5f5f47bba

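If you want to check a downloaded file against the digests above, here is a minimal sketch using Python's standard hashlib (the local file path is an assumption):

import hashlib

def sha256sum(path):
    # Stream the file in chunks so large archives don't need to fit in memory
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare against the published SHA256 for the wheel before installing it
expected = "ea3bb922dfda6cbf795a460f323bf0a2f4b7eb4954b6f942d8b428031170ff2d"
assert sha256sum("deepof-0.1.75-py3-none-any.whl") == expected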
