
Unsupervised clustering algorithm for 2D data (neurons by time)

Project description

Rastermap


Rastermap is a discovery algorithm for neural data. The algorithm was written by Carsen Stringer and Marius Pachitariu. For support, please open an issue. Please see install instructions below. If you use Rastermap in your work, please cite the paper:

Stringer C., Zhong L., Syeda A., Du F., & Pachitariu M. (2023). Rastermap: a discovery method for neural population recordings. bioRxiv 2023.07.25.550571; doi: https://doi.org/10.1101/2023.07.25.550571

Rastermap runs in Python 3.8+ and has a graphical user interface (GUI) for running it easily. Rastermap can also be run in a Jupyter notebook locally or on Google Colab; see these demos:

All demo data available here.

Here is what the output looks like for a segment of a mesoscope recording in a mouse during spontaneous activity (3.2 Hz sampling rate), compared to random neural sorting:

random sorting and rastermap sorting of spontaneous activity

Here is what the output looks like for a recording of wholebrain neural activity in a larval zebrafish from Chen, Mu, Hu, Kuan et al 2018 (dataset here). The plot on the left shows the sorted activity, and the right plot is the 2D positions of the neurons in the tissue, divided into 18 clusters according to their 1D position in the Rastermap embedding:

wholebrain neural activity from a zebrafish sorted by rastermap

Installation

Local installation (< 2 minutes)

System requirements

Linux, Windows, and Mac OS are supported for running the code. For the graphical interface on Mac you will need a Mac OS version later than Yosemite. At least 8GB of RAM is recommended to run the software; 16GB-32GB may be required for larger datasets. The software has been heavily tested on Windows 10 and Ubuntu 20.04, and less well tested on Mac OS. Please open an issue if you have problems with installation.

Instructions

We recommend installing an Anaconda distribution of Python -- choose Python 3.x and your operating system. Note that you might need to use an Anaconda prompt (Windows) if you did not add Anaconda to the path. Open an Anaconda prompt / command prompt with Python 3 in the path, then:

pip install rastermap

For the GUI

pip install rastermap[gui]

Rastermap has only a few dependencies so you may not need to make a special environment for it (e.g. it should work in a suite2p or facemap environment), but if the pip install above does not work, please follow these instructions:

  1. Open an anaconda prompt / command prompt with conda for python 3 in the path.
  2. Create a new environment with conda create --name rastermap python=3.8. Python 3.9 and 3.10 will likely work fine as well.
  3. To activate this new environment, run conda activate rastermap
  4. To install the minimal version of rastermap, run pip install rastermap.
  5. To install rastermap and the GUI, run pip install rastermap[gui]. If you're on a zsh server, you may need to use ' ' around the rastermap[gui] call: pip install 'rastermap[gui]'.

To upgrade rastermap (package here), run the following in the environment:

pip install rastermap --upgrade

If you have an older rastermap environment you can remove it with conda env remove -n rastermap before creating a new one.

Note you will always have to run conda activate rastermap before you run rastermap. If you want to run jupyter notebooks in this environment, then also pip install notebook.

Dependencies

This package relies on the awesomeness of numpy, scipy, numba, scikit-learn, PyQt6, PyQt6.sip and pyqtgraph. You can pip install or conda install all of these packages. If you have issues with PyQt6, make an Anaconda environment and try conda install pyqt within it. On Ubuntu you may need sudo apt-get install libegl1 to support PyQt6. Alternatively, you can use PyQt5 by running pip uninstall PyQt6 and pip install PyQt5. If you already have a PyQt version installed, Rastermap will not install a new one.

Using rastermap

GUI

The quickest way to start is to open the GUI from a command line terminal. You might need to open an Anaconda prompt if you did not add Anaconda to the path. Then run:

python -m rastermap

To start using the GUI, save your data into an npy file containing a matrix that is neurons x timepoints. Then "File > Load data matrix" and choose this file (or drag and drop your file). Next click "Run > Run rastermap" and click run. See the parameters section to learn about the parameters.
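The expected input file can be created with NumPy. A minimal sketch with toy data (the array sizes and the filename "data.npy" are illustrative):

```python
import numpy as np

# Toy neurons-by-timepoints matrix; real data would come from your recording.
n_neurons, n_timepoints = 500, 2000
rng = np.random.default_rng(0)
spks = rng.poisson(0.1, size=(n_neurons, n_timepoints)).astype("float32")

# Save as .npy, then use "File > Load data matrix" in the GUI.
np.save("data.npy", spks)
```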

The GUI will start with a highlighted region that you can drag to visualize the average activity of neurons in a given part of the plot. To draw more regions, you right-click to start a region, then right-click to end it. The neurons' activity traces then show up on the bottom of the GUI, and if the neuron positions are loaded, you will see them colored by the region color. You can delete a region by holding CTRL and clicking on it. You can save the ROIs you've drawn with the "Save > Save processed data" button. They will save along with the embedding so you can reload the file with the "Load processed data" option.

NOTE: If you are using suite2p "spks.npy", then the GUI will automatically use the "iscell.npy" file in the same folder to subsample your recording with the chosen neurons, and will automatically load the neuron positions from the "stat.npy" file.

GUI examples:

zebrafish:

wholebrain neural activity from a zebrafish sorted by rastermap

mouse sensorimotor activity:

sensorimotor neural activity from a mouse sorted by rastermap

rat hippocampus:

hippocampal neural activity from a rat sorted by rastermap

mouse widefield:

widefield neural activity from a mouse sorted by rastermap

In a notebook

For this, run pip install notebook and pip install matplotlib. See the example notebooks for full examples.

Short example code snippet for running rastermap:

import numpy as np
import matplotlib.pyplot as plt
from rastermap import Rastermap, utils
from scipy.stats import zscore

# spks is neurons by time
spks = np.load("spks.npy").astype("float32")
spks = zscore(spks, axis=1)

# fit rastermap
model = Rastermap(n_PCs=200, n_clusters=100, 
                  locality=0.75, time_lag_window=5).fit(spks)
y = model.embedding # neurons x 1
isort = model.isort

# bin over neurons
X_embedding = zscore(utils.bin1d(spks, bin_size=25, axis=0), axis=1)

# plot
fig = plt.figure(figsize=(12,5))
ax = fig.add_subplot(111)
ax.imshow(X_embedding, vmin=0, vmax=1.5, cmap="gray_r", aspect="auto")

If you are using Google Colab, you can mount your Google Drive and use your data from there with the following commands; you will then see your files in the left bar under drive:

from google.colab import drive
drive.mount('/content/drive')

From the command line

Save an "ops.npy" file with the parameters and a "spks.npy" file with a matrix of neurons by time, and run

python -m rastermap --S spks.npy --ops ops.npy
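The "ops.npy" file can be written with NumPy as a saved dictionary. A sketch (the keys mirror the settings documented below; exactly which keys the command-line interface reads is an assumption here):

```python
import numpy as np

# Illustrative parameter dictionary; values follow the documented defaults.
ops = {"n_clusters": 100, "n_PCs": 200, "locality": 0.0, "time_lag_window": 0}
np.save("ops.npy", ops)
```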

Inputs

Most of the time you will input to Rastermap().fit a matrix of neurons by time. For more details, these are all the inputs to the function:

  • data : array, shape (n_samples, n_features) (optional, default None) this matrix is usually neurons/voxels by time, or None if using decomposition, e.g. as in widefield imaging
  • Usv : array, shape (n_samples, n_PCs) (optional, default None) singular vectors U times singular values sv
  • Vsv : array, shape (n_features, n_PCs) (optional, default None) singular vectors V times singular values sv
  • U_nodes : array, shape (n_clusters, n_PCs) (optional, default None) cluster centers in PC space, if you have precomputed them
  • itrain : array, shape (n_features,) (optional, default None) fit embedding on timepoints itrain only
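For example, Usv and Vsv can be built from a truncated SVD of the data with NumPy. A sketch with random data, where only the array shapes matter (passing the factors to fit instead of data is an option per the inputs above, but requires rastermap installed):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal((300, 1000)).astype("float32")  # neurons x time

# Truncated SVD: keep the top n_PCs components.
U, sv, Vt = np.linalg.svd(data, full_matrices=False)
n_PCs = 64
Usv = U[:, :n_PCs] * sv[:n_PCs]    # shape (n_samples, n_PCs)
Vsv = Vt[:n_PCs].T * sv[:n_PCs]    # shape (n_features, n_PCs)

# model = Rastermap(n_PCs=n_PCs).fit(Usv=Usv, Vsv=Vsv)  # needs rastermap installed
```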

Settings

These are inputs to the Rastermap class initialization, the settings are sorted in order of importance (you will probably never need to change any other than the first few):

  • n_clusters : int, optional (default: 100) number of clusters created from data before upsampling and creating embedding (any number above 150 will be slow due to NP-hard sorting problem, max is 200)
  • n_PCs : int, optional (default: 200) number of PCs to use during optimization
  • time_lag_window : int, optional (default: 0) number of time points into the future to compute cross-correlation, useful to set to several timepoints for sequence finding
  • locality : float, optional (default: 0.0) how local should the algorithm be -- set to 1.0 for highly local + sequence finding, and 0.0 for global sorting
  • grid_upsample : int, optional (default: 10) how much to upsample clusters; if set to 0, no upsampling
  • time_bin : int, optional (default: 0) binning of data in time before PCA is computed, if set to 0 or 1 no binning occurs
  • mean_time : bool, optional (default: True) whether to project out the mean over data samples at each timepoint, usually good to keep on to find structure
  • n_splits : int, optional (default: 0) split, recluster and sort n_splits times (increases local neighborhood preservation for high-dim data); results in (n_clusters * 2**n_splits) clusters
  • run_scaled_kmeans : bool, optional (default: True) run scaled_kmeans as clustering algorithm; if False, run kmeans
  • verbose : bool (default: True) whether to output progress during optimization
  • verbose_sorting : bool (default: False) output progress in travelling salesman
  • keep_norm_X : bool, optional (default: True) keep normalized version of X saved as member of class
  • bin_size : int, optional (default: 0) binning of data across n_samples to return embedding figure, X_embedding; if 0, then binning based on data size, if 1 then no binning
  • symmetric : bool, optional (default: False) if False, use only positive time lag cross-correlations for sorting (only makes a difference if time_lag_window > 0); recommended to keep False for sequence finding
  • sticky : bool, optional (default: True) if n_splits>0, sticky=True keeps neurons in same place as initial sorting before splitting; otherwise neurons can move each split (which generally does not work as well)
  • nc_splits : int, optional (default: None) if n_splits > 0, size to split n_clusters into; if None, nc_splits = min(50, n_clusters // 4)
  • smoothness : int, optional (default: 1) how much to smooth over clusters when upsampling, number from 1 to number of clusters (recommended to not change, instead use locality to change sorting)
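As a rough guide, the two regimes described above (global sorting vs. sequence finding) differ mainly in locality and time_lag_window. The presets below are illustrative starting points drawn from the defaults and the earlier snippet, not official recommendations:

```python
# Illustrative settings presets; pass as Rastermap(**preset) if rastermap is installed.
global_sorting   = {"n_clusters": 100, "n_PCs": 200, "locality": 0.0,  "time_lag_window": 0}
sequence_finding = {"n_clusters": 100, "n_PCs": 200, "locality": 0.75, "time_lag_window": 5}
```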

Outputs

The main output you want is the sorting, isort, which is assigned to the Rastermap class, e.g.

model = Rastermap().fit(spks)
isort = model.isort

You may also want to color the neurons by their positions which are in embedding, e.g.

y = model.embedding[:,0]
plt.scatter(xpos, ypos, cmap="gist_rainbow", c=y, s=1)

Here is the list of all variables assigned from fit:

  • embedding : array, shape (n_samples, 1) embedding of each neuron / voxel
  • isort : array, shape (n_samples,) sorting along first dimension of input matrix - use this to get neuron / voxel sorting
  • igood : array, shape (n_samples, 1) neurons/voxels which had non-zero activity and were used for sorting
  • Usv : array, shape (n_samples, n_PCs) singular vectors U times singular values sv
  • Vsv : array, shape (n_features, n_PCs) singular vectors V times singular values sv
  • U_nodes : array, shape (n_clusters, n_PCs) cluster centers in PC space
  • Y_nodes : array, shape (n_clusters, 1) np.arange(0, n_clusters)
  • X_nodes : array, shape (n_clusters, n_features) cluster activity traces in time
  • cc : array, shape (n_clusters, n_clusters) sorted asymmetric similarity matrix
  • embedding_clust : array, shape (n_samples, 1) assignment of each neuron/voxel to each cluster (before upsampling)
  • X : array, shape (n_samples, n_features) normalized data stored (if keep_norm_X is True)
  • X_embedding : array, shape (n_samples//bin_size, n_features) normalized data binned across samples (if compute_X_embedding is True)

The output from the GUI and the command line is a file that ends with _embedding.npy. This file contains:

  • filename: str, path to file that rastermap was run on
  • save_path: str, folder with filename
  • embedding : array, shape (n_samples, 1) embedding of each neuron / voxel
  • isort : array, shape (n_samples,) sorting along first dimension of input matrix - use this to get neuron / voxel sorting
  • user_clusters: list, list of user drawn clusters in GUI
  • ops: dict, dictionary of options used to run rastermap
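A sketch of reading such a file back in Python (here we first write a minimal stand-in file with the documented keys, since dictionaries saved as .npy require allow_pickle=True to load; all values are illustrative):

```python
import numpy as np

# Stand-in "_embedding.npy" file with the documented keys.
np.save("demo_embedding.npy",
        {"filename": "spks.npy", "save_path": ".",
         "embedding": np.zeros((10, 1)), "isort": np.arange(10),
         "user_clusters": [], "ops": {}})

out = np.load("demo_embedding.npy", allow_pickle=True).item()
isort = out["isort"]  # use this to sort your matrix, e.g. spks[isort]
```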

License

Copyright (C) 2023 Howard Hughes Medical Institute Janelia Research Campus, the labs of Carsen Stringer and Marius Pachitariu.

This code is licensed under GPL v3 (no redistribution without credit, and no redistribution in private repos, see the license for more details).

Download files

Download the file for your platform.

Source Distribution

rastermap-0.9.3.tar.gz (144.1 kB)

Uploaded Source

Built Distribution

rastermap-0.9.3-py3-none-any.whl (88.0 kB)

Uploaded Python 3

