CEBRA: Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables
Project description
Welcome! 👋
CEBRA is a library for estimating Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables. It contains self-supervised learning algorithms implemented in PyTorch, and has support for a variety of different datasets common in biology and neuroscience.
To receive updates on code releases, please 👀 watch or ⭐️ star this repository!
CEBRA is a self-supervised method for non-linear clustering that allows for label-informed time series analysis.
It can jointly use behavioral and neural data, in either a hypothesis-driven or a discovery-driven manner, to produce consistent, high-performance latent spaces. While the method is not specific to neural and behavioral data, this is the first domain in which we applied it. In this setting, CEBRA yields consistent representations of the latent variables driving activity and behavior, improves decoding accuracy of behavioral variables over standard supervised learning, and produces embeddings that are robust to domain shifts.
Reference
- 📄 Preprint: Learnable latent embeddings for joint behavioral and neural analysis. Steffen Schneider*, Jin Hwa Lee* and Mackenzie Weygandt Mathis
License
- CEBRA is released for academic use only (please read the license file). If this license is not appropriate for your application, please contact Prof. Mackenzie W. Mathis (mackenzie@post.harvard.edu) and/or the TTO office at EPFL (tto@epfl.ch) for a commercial use license.
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distribution
Built Distribution
Hashes for cebra-0.2.0rc3-py2.py3-none-any.whl
| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 12e828c5c7ad83fa6ab5ad7e90a1363253b01ff89c15d0ad7e74c902239ed9de |
| MD5 | 051494281c40f3b9816bd1b3a660eea5 |
| BLAKE2b-256 | 822e1e45648362ce1d6672ae822c054865ddf04cdc22291df22e069a82c8cf23 |
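A downloaded wheel can be checked against the digests above by hashing it locally. A minimal sketch using Python's standard `hashlib`; the wheel path in the comment is hypothetical:

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Compare the result against the SHA256 value listed above, e.g.:
# sha256_of_file("cebra-0.2.0rc3-py2.py3-none-any.whl")
```

Reading in chunks keeps memory use constant regardless of file size. For automated installs, `pip` can enforce the same check via its hash-checking mode.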