Pose estimation and processing SVDs of videos

Project description

Facemap is a framework for predicting neural activity from mouse orofacial movements. It includes a pose estimation model for tracking distinct keypoints on the mouse face, a neural network model for predicting neural activity from the pose estimates, and can also be used to compute the singular value decomposition (SVD) of behavioral videos.

To learn about Facemap, read the paper or check out the tweet thread. For support, please open an issue.

  • The latest released version on PyPI includes SVD processing only: run pip install facemap for the headless version, or pip install facemap[gui] to use the GUI. Note: the latest tracker and neural model are not yet available via pip install facemap; instead install them with pip install git+https://github.com/mouseland/facemap.git

CITATION

If you use Facemap, please cite the Facemap paper:
Syeda, A., Zhong, L., Tung, R., Long, W., Pachitariu, M.*, & Stringer, C.* (2022). Facemap: a framework for modeling neural activity based on orofacial tracking. bioRxiv. [bibtex]

If you use the SVD computation or pupil tracking components, please also cite our previous paper:
Stringer, C.*, Pachitariu, M.*, Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2019). Spontaneous behaviors drive multidimensional, brainwide activity. Science, 364(6437), eaav7893. [bibtex]

Installation

If you have an older facemap environment you can remove it with conda env remove -n facemap before creating a new one.

If you are using a GPU, make sure its drivers and the CUDA libraries are correctly installed.

  1. Install an Anaconda distribution of Python. Note you might need to use an Anaconda prompt if you did not add Anaconda to the path.
  2. Open an Anaconda prompt / command prompt which has conda for python 3 in the path.
  3. Create a new environment with conda create --name facemap python=3.8. We recommend python 3.8, but python 3.9 and 3.10 will likely work as well.
  4. To activate this new environment, run conda activate facemap
  5. To install the minimal version of facemap, run python -m pip install facemap.
  6. To install facemap and the GUI, run python -m pip install facemap[gui]. If your shell is zsh, you may need to put quotes around facemap[gui]: python -m pip install 'facemap[gui]'

To upgrade facemap, run the following in the environment:

python -m pip install facemap --upgrade

Note you will always have to run conda activate facemap before you run facemap. If you want to run jupyter notebooks in this environment, also run python -m pip install notebook matplotlib.

You can also try to install facemap and the GUI dependencies from your base environment using the command

python -m pip install facemap[gui]

If you have issues with installation, see the docs for more details. You can also use the facemap environment file included in the repository and create a facemap environment with conda env create -f environment.yml which may solve certain dependency issues.

If these suggestions fail, open an issue.

GPU version (CUDA) on Windows or Linux

If you plan on processing many videos, you may want to install a GPU version of torch (if it isn't already installed).

Before installing the GPU version, remove the CPU version:

pip uninstall torch

Follow the instructions here to determine what version to install. The Anaconda install is strongly recommended; choose the CUDA version that is supported by your GPU (newer GPUs may need newer CUDA versions > 10.2). For instance, this command will install the CUDA 11.3 version on Linux and Windows (note that torchvision and torchaudio are omitted because facemap doesn't require them):

conda install pytorch==1.12.1 cudatoolkit=11.3 -c pytorch

and this will install the CUDA 11.7 version:

conda install pytorch pytorch-cuda=11.7 -c pytorch

Supported videos

Facemap supports grayscale and RGB movies. The software can process multi-camera videos for pose tracking and SVD analysis. Please see example movies for testing the GUI. Movie file extensions supported include:

'.mj2','.mp4','.mkv','.avi','.mpeg','.mpg','.asf'

For more details, please refer to the data acquisition page.
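A quick way to pre-filter a directory for files Facemap can read is to match filenames against the extension list above. A minimal sketch (the filenames below are hypothetical):

```python
from pathlib import Path

# Extensions listed as supported by Facemap (from the list above).
SUPPORTED_EXTS = {'.mj2', '.mp4', '.mkv', '.avi', '.mpeg', '.mpg', '.asf'}

def find_supported_videos(filenames):
    """Return the filenames whose extension Facemap can read."""
    return [f for f in filenames if Path(f).suffix.lower() in SUPPORTED_EXTS]

# Example with hypothetical filenames:
files = ['cam1.mp4', 'cam2.avi', 'notes.txt', 'session.mkv']
print(find_supported_videos(files))  # ['cam1.mp4', 'cam2.avi', 'session.mkv']
```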

Support

For any issues or questions about Facemap, please open an issue.

I. Pose tracking

Facemap provides a trained network for tracking distinct keypoints on the mouse face from different camera views. The process for tracking keypoints is as follows:

  1. Load video. (Optional) Use the file menu to set output folder.
  2. Click process (Note: check keypoints for this step).
  3. Select a bounding box that focuses on the face.
  4. The processed keypoints *.h5 file will be saved in the output folder along with the corresponding metadata file *.pkl.

Keypoints will be predicted in the selected bounding box region so please ensure the bounding box focuses on the face. See example frames here.
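The saved *.pkl metadata file is a standard Python pickle and can be inspected with the stdlib pickle module (the *.h5 keypoints file can similarly be opened with h5py or pandas). The sketch below writes and reads back a stand-in pickle so it is self-contained; the field names shown are hypothetical, so inspect the keys of your own output file:

```python
import os
import pickle
import tempfile

# Stand-in metadata written here only for demonstration;
# real files are produced by Facemap, and their fields may differ.
meta = {'batch_size': 1, 'total_frames': 1000}
path = os.path.join(tempfile.gettempdir(), 'video_FacemapPose_metadata.pkl')
with open(path, 'wb') as f:
    pickle.dump(meta, f)

# Load it back the same way you would load Facemap's metadata output:
with open(path, 'rb') as f:
    loaded = pickle.load(f)
print(sorted(loaded.keys()))  # ['batch_size', 'total_frames']
```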

For more details on using the tracker, please refer to the GUI instructions and the command line interface (CLI) instructions; for more examples, please see the tutorial notebooks.

User contributions

Facemap aims to provide a simple and easy-to-use tool for tracking mouse orofacial movements. The tracker's performance on new datasets can be further improved by expanding our training set. You can contribute to the model by sharing videos/frames at the following email address(es): asyeda1[at]jh.edu or stringerc[at]janelia.hhmi.org.

II. Neural activity prediction

Facemap includes a deep neural network encoding model for predicting neural activity or principal components of neural activity from mouse orofacial pose estimates extracted using the tracker or SVDs.

The encoding model used for prediction is described in the Facemap paper.

Please see neural activity prediction tutorial for more details.
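Facemap's actual encoder is a deep network described in the paper; purely as an illustration of the idea of predicting neural principal components from behavioral features, here is a minimal ridge-regression baseline on synthetic data. Every name and number here is made up; this is not Facemap's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: T frames, K behavioral features (e.g. keypoints), N neural PCs.
T, K, N = 500, 20, 10
X = rng.standard_normal((T, K))                       # behavioral features
W_true = rng.standard_normal((K, N))
Y = X @ W_true + 0.1 * rng.standard_normal((T, N))    # "neural PCs" to predict

# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(K), X.T @ Y)
Y_hat = X @ W

# Fraction of variance explained on the training data.
ve = 1 - ((Y - Y_hat) ** 2).sum() / ((Y - Y.mean(axis=0)) ** 2).sum()
print(f"variance explained: {ve:.3f}")
```

A real analysis would hold out test timepoints and tune the regularization; a linear model like this is a useful floor against which the deep encoder's gains can be measured.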

III. SVD processing

Facemap provides options for singular value decomposition (SVD) of single- and multi-camera videos. SVD analysis can be performed on the raw movie frames, called movie SVD (movSVD), or on the absolute difference between consecutive frames, called motion SVD (motSVD). The first 500 principal components from the SVD analysis are saved as output along with other variables. For more details, see the python tutorial. The process for SVD analysis is as follows:

  1. Load video. (Optional) Use the file menu to set output folder.
  2. Click process (Note: check motSVD or movSVD for this step).
  3. The processed SVD *_proc.npy (and *_proc.mat) file will be saved in the output folder selected.
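The motSVD computation described above can be sketched in plain NumPy: take absolute differences between consecutive frames, flatten the pixels, and run an SVD, keeping up to 500 components. This is a simplified illustration on synthetic data, not Facemap's optimized implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic grayscale movie: T frames of H x W pixels.
T, H, W = 100, 16, 16
frames = rng.random((T, H, W)).astype(np.float32)

# motSVD operates on the motion energy: absolute difference of consecutive frames.
motion = np.abs(np.diff(frames, axis=0))        # (T-1, H, W)
M = motion.reshape(motion.shape[0], -1)         # flatten pixels: (T-1, H*W)

# SVD of the mean-subtracted motion; Facemap keeps up to 500 components,
# here the small synthetic movie limits us to fewer.
U, S, Vt = np.linalg.svd(M - M.mean(axis=0), full_matrices=False)
n_comp = min(500, S.size)
spatial_masks = Vt[:n_comp].reshape(n_comp, H, W)   # spatial components
traces = U[:, :n_comp] * S[:n_comp]                 # per-frame projections

print(spatial_masks.shape, traces.shape)
```

Running movSVD instead would skip the differencing step and decompose the (mean-subtracted) raw frames directly.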

HOW TO GUI (Python)

Run the following command in a terminal

python -m facemap

The default starting folder is set to wherever you run python -m facemap

HOW TO GUI (MATLAB)

To start the GUI, run the command MovieGUI in this folder. After you click an ROI button and draw an area, you have to double-click inside the drawn box to confirm it. To compute the SVD across multiple simultaneously acquired videos, use the "multivideo SVD" options to draw ROIs on each video one at a time.
