
Timeseries synchronisation toolkit for computational neuroscience


Theonerig

#hide
from theonerig.core import *
from theonerig.testdata import *

locals().update(load_vivo_2p("./files/vivo_2p"))
Output:
Importing the record master
Returning stim_d, S_matrix, A_matrix, proj_TP, proj_DATA, eye_TP, eye_DATA, treadm_DATA, len_records, rec_TP, reM

Install

So far there is no easy install: just clone the repository from GitHub and install it with pip from within the folder. We also recommend creating an environment with conda first:

conda create -n tor python=3.6
activate tor (Windows) or source activate tor (Linux/macOS)
pip install packaging
pip install -e .

Later we will publish it on PyPI so you can install it with pip install theonerig

How to use

Some example data are located in the “files” folder. We are going to use data from the subfolder “vivo_2p”, acquired by the Asari Lab @ EMBL Rome.

The main idea behind this library is to use a main timeseries to synchronize other data streams from various sources. Once synchronized, the data can be sliced easily and any processing can be applied to it.

Slicing made easy

An experiment is stored in a Record_Master object (here called reM). Each row shows a stream of data aligned to “main_tp”.

Data can be sparse, meaning that you don’t necessarily possess data in every row for the full duration of the record, and the data can come in multiple chunks.

reM.plot()

Now that we have such a dataset, we will use the second main feature of this package, the Data_Pipe. At its creation, we choose which rows we want to obtain. In this case, we take “checkerboard”, which is a matrix of the stimulus values, the “S_matrix”, which is the neuronal response extracted from calcium imaging, and “eye_tracking” to take the mouse’s eye position into account when computing the response.

pipe = Data_Pipe(reM, ["checkerboard", "S_matrix", "eye_tracking"])

Now that the pipe is defined, we can use arithmetic and logical operations to choose which parts of the record we want data from:

pipe += "checkerboard" #Add part of the data where checkerboard is present
reM.plot()
pipe.plot()

pipe[0]["S_matrix"].shape
(36000, 2)
#Select all cell responses where there is no stimulus

pipe += "S_matrix" 
pipe -= "stim" #use the fact that data are within a class [sync, data, stim, cell] to filter them all at the same time
reM.plot()
pipe.plot()

#Select all cell responses where there is a stimulus. Note that the darkness stimulus is longer
#than the corresponding S_matrix

pipe += "S_matrix" #Add all the chuncks of data where there is an S_matrix
pipe &= "stim" #use the fact that data are within a class [sync, data, stim, cell] to filter them all at the same time
reM.plot()
pipe.plot()

The pipe can then be iterated over, returning each separate chunk of data as a dictionary containing the selected rows:

print(pipe[0].keys())
for data_dict in pipe:
    print(data_dict["checkerboard"].shape, data_dict["S_matrix"].shape, data_dict["eye_tracking"].shape)
dict_keys(['checkerboard', 'S_matrix', 'eye_tracking'])
(23303, 15, 20) (23303, 2) (23303, 5)
(36000, 15, 20) (36000, 2) (36000, 5)
(36000, 15, 20) (36000, 2) (36000, 5)
(40800, 15, 20) (40800, 2) (40800, 5)
(10200, 15, 20) (10200, 2) (10200, 5)
(8680, 15, 20) (8680, 2) (8680, 5)
(18000, 15, 20) (18000, 2) (18000, 5)

Note the checkerboard here. We possess actual data for only one chunk, but because default values are set for each dataset, the pipe is able to return a dataset for every part of the record. This makes it easy to work around records with missing data without crashing.
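Because each chunk is simply a dictionary of aligned numpy arrays, downstream processing reduces to ordinary array operations. As a minimal sketch (assuming the pipe built above, and treating "S_matrix" as a (time, n_cells) numpy array as its shapes suggest), one could compute the mean calcium response of each cell per chunk:

import numpy as np

#Average the calcium responses over time for every chunk returned by the pipe.
#Assumes the pipe defined above; "S_matrix" is treated as a (time, n_cells) array.
for i, data_dict in enumerate(pipe):
    mean_response = np.mean(data_dict["S_matrix"], axis=0)
    print(f"chunk {i}: mean response per cell = {mean_response}")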

Export your synchronised records or import the records of your friends

Once the reM is in its final state, export it like so:

export_record("path/for/my/record/reM_coolname.h5", reM)

As you can see, the record is stored in HDF5 (.h5) format, so it can be explored and imported with any other HDF5 tool.
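For instance, here is a minimal sketch of inspecting the exported file with plain h5py (shown only for illustration; the exact group layout inside the file depends on your record):

import h5py

#List every group and dataset stored in the exported record.
#The path reuses the example name from the export call above.
with h5py.File("path/for/my/record/reM_coolname.h5", "r") as f:
    f.visit(print)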
But the best way is still to open it with theonerig!

import_record("path/for/my/record/reM_coolname.h5")
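Assuming import_record returns the reconstructed Record_Master (an assumption made here for illustration), you can pick up right where you left off:

#Hedged sketch: assumes import_record returns the reconstructed Record_Master
reM = import_record("path/for/my/record/reM_coolname.h5")
reM.plot() #same aligned data streams as before exporting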
