Consistent Embeddings of high-dimensional Recordings using Auxiliary variables
Project description
Welcome! 👋
CEBRA is a library for estimating Consistent EmBeddings of high-dimensional Recordings using Auxiliary variables. It contains self-supervised learning algorithms implemented in PyTorch, and has support for a variety of different datasets common in biology and neuroscience.
To receive updates on code releases, please 👀 watch or ⭐️ star this repository!
CEBRA is a self-supervised method for non-linear clustering that allows for label-informed time series analysis.
It can jointly use behavioral and neural data in a hypothesis- or discovery-driven manner to produce consistent, high-performance latent spaces. While it is not specific to neural and behavioral data, this is the first domain we applied the tool to. In this setting, CEBRA obtains a consistent representation of the latent variables driving activity and behavior, improves decoding accuracy of behavioral variables over standard supervised learning, and yields embeddings that are robust to domain shifts.
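To illustrate the core idea of using an auxiliary (behavioral) variable to shape a contrastive embedding, here is a minimal NumPy sketch. This is not CEBRA's actual implementation (which uses PyTorch encoders and more sophisticated sampling schemes); all names, the toy data, and the linear encoder are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 "neural" samples (20 channels) driven by a 1-D latent,
# plus a behavioral variable correlated with that latent.
latent = rng.uniform(-1.0, 1.0, size=200)
neural = np.outer(latent, rng.normal(size=20)) + 0.1 * rng.normal(size=(200, 20))
behavior = latent + 0.05 * rng.normal(size=200)

def sample_contrastive_batch(neural, behavior, batch_size=32, num_neg=8, pos_window=0.1):
    """Label-informed sampling: positives share a similar behavior value
    with the anchor; negatives are drawn uniformly at random."""
    n = len(behavior)
    anchors = rng.integers(0, n, size=batch_size)
    positives = np.empty(batch_size, dtype=int)
    for k, i in enumerate(anchors):
        close = np.flatnonzero(np.abs(behavior - behavior[i]) < pos_window)
        positives[k] = rng.choice(close)  # always non-empty: includes i itself
    negatives = rng.integers(0, n, size=(batch_size, num_neg))
    return neural[anchors], neural[positives], neural[negatives]

def infonce_loss(anchor, positive, negatives, W):
    """InfoNCE-style loss with a linear encoder W and cosine similarity."""
    def embed(x):
        z = x @ W
        return z / np.linalg.norm(z, axis=-1, keepdims=True)
    za, zp, zn = embed(anchor), embed(positive), embed(negatives)
    pos_sim = np.sum(za * zp, axis=-1)         # (batch,)
    neg_sim = np.einsum('bd,bkd->bk', za, zn)  # (batch, num_neg)
    logits = np.concatenate([pos_sim[:, None], neg_sim], axis=1)
    # Cross-entropy with the positive pair in slot 0.
    return np.mean(np.log(np.exp(logits).sum(axis=1)) - pos_sim)

W = rng.normal(size=(20, 3))  # hypothetical 3-D embedding
a, p, ns = sample_contrastive_batch(neural, behavior)
loss = infonce_loss(a, p, ns, W)
```

Minimizing such a loss over the encoder pulls samples with similar auxiliary labels together in the embedding, which is the intuition behind hypothesis-driven training; discovery-driven training instead uses time as the auxiliary variable.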
Reference
- 📄 Preprint: Learnable latent embeddings for joint behavioral and neural analysis. Steffen Schneider*, Jin Hwa Lee* and Mackenzie Weygandt Mathis
License
- CEBRA is released for academic use only (please read the license file). If this license is not appropriate for your application, please contact Prof. Mackenzie W. Mathis (mackenzie@post.harvard.edu) and/or the TTO office at EPFL (tto@epfl.ch) for a commercial use license.
Hashes for cebra-0.1.0-py2.py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | a09a7ad17420dc7c0acbd529618ad12d80f29f81d8b862d7d96ae8fb963869e8 |
| MD5 | a87c26bd55f0cdca559f54187773e28f |
| BLAKE2b-256 | 4c014c6c76a75af11beae48d099e2ce93884e48891e085751ba42fa216b1b7d9 |