
Face-Rhythm

A pipeline for analysis of facial behavior using optical flow

Learn more at https://face-rhythm.readthedocs.io/




Installation

0. Requirements

  • Anaconda or Miniconda
  • GCC >= 5.4.0, ideally == 9.2.0. On Unix/Linux, check your version with gcc --version; search for installation instructions for your operating system.
  • For GPU support, you need a CUDA-compatible NVIDIA GPU and the appropriate drivers. There is no need to install CUDA or cuDNN yourself; PyTorch takes care of this during installation. A GPU is not required, but can increase speeds 2-20x depending on the GPU and your data. See https://developer.nvidia.com/cuda-gpus for a list of compatible GPUs.
  • On some Linux servers (like Harvard's O2 server), you may need to load modules instead of installing. To load conda and gcc, try: module load conda3/latest gcc/9.2.0 or similar.
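Once the environment is set up, a quick way to confirm that PyTorch can see your GPU is a minimal check like the one below (a sketch; it assumes the face-rhythm conda environment is active, and falls back if torch is not installed):

```python
# Check whether PyTorch detects a CUDA-capable GPU.
# Degrades gracefully if torch is not installed in this environment.
try:
    import torch
    has_gpu = torch.cuda.is_available()
    device_name = torch.cuda.get_device_name(0) if has_gpu else "none"
except ImportError:
    has_gpu, device_name = False, "torch not installed"

print(f"GPU available: {has_gpu} ({device_name})")
```

If this prints False on a machine with an NVIDIA GPU, check your driver installation and that you used the GPU environment file.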

1. Clone this repo

git clone https://github.com/RichieHakim/face-rhythm/
cd face-rhythm

2. Create a conda environment

2A. Install dependencies with GPU support (recommended)

conda env create --file environment_GPU.yml

2B. Install dependencies with CPU support only

conda env create --file environment_CPU_only.yml

3. Run the setup script

pip install -e .
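After the editable install, you can sanity-check that the package is importable from Python (a minimal sketch; the importable package name face_rhythm is assumed here):

```python
# Verify that `pip install -e .` made the package importable,
# without actually importing it (find_spec only locates the module).
import importlib.util

spec = importlib.util.find_spec("face_rhythm")
print("face_rhythm importable:", spec is not None)
```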



Usage

1. Create a "project directory" where we will save intermediate files, videos, and config files.

This project directory should ideally be outside of the repo, and you'll create a new one each time you analyze a new dataset. You may want to save a copy of the .ipynb file you use for the run there.

cd directory/where/you/want/to/save/your/project
mkdir face_rhythm_run
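The same setup can be scripted from Python. This sketch uses a temporary directory as a stand-in for wherever you want the project to live:

```python
# Create a fresh project directory for a new analysis run.
# tempfile is used here only as a stand-in for your chosen location
# (ideally a directory outside the repo).
import tempfile
from pathlib import Path

base_dir = Path(tempfile.mkdtemp())
project_dir = base_dir / "face_rhythm_run"
project_dir.mkdir(parents=True, exist_ok=True)

print("Project directory ready:", project_dir)
```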

2. Open Jupyter Notebook. The plots display better in Jupyter Notebook than in Jupyter Lab or VSCode.

jupyter notebook

If you run into a kernel error at this stage and are a Windows user, check out: https://jupyter-notebook.readthedocs.io/en/stable/troubleshooting.html#pywin32-issues

3. Open up a demo notebook and run it!

  • basic_face_rhythm_notebook.ipynb is a basic demo notebook that runs through the entire pipeline.
  • demo_align_temporal_factors.ipynb is a demo notebook that shows how to align the temporal factors that are output from the basic pipeline.


Repository Organization

face-rhythm
├── notebooks  <- Jupyter notebooks containing the main pipeline and some demos.
│   ├── basic_face_rhythm_notebook.ipynb  <- Main pipeline notebook.
│   └── demo_align_temporal_factors.ipynb <- Demo notebook for aligning temporal factors.
│
├── face_rhythm  <- Source code for use in this project.
│   ├── project.py           <- Contains methods for project directory organization and preparation
│   ├── data_importing.py    <- Contains classes for importing data (like videos)
│   ├── rois.py              <- Contains classes for defining regions of interest (ROIs) to analyze
│   ├── point_tracking.py    <- Contains classes for tracking points in videos
│   ├── spectral_analysis.py <- Contains classes for spectral decomposition
│   ├── decomposition.py     <- Contains classes for TCA decomposition
│   ├── utils.py             <- Contains utility functions for face-rhythm
│   ├── visualization.py     <- Contains classes for visualizing data
│   ├── helpers.py           <- Contains general helper functions (non-face-rhythm specific)
│   ├── h5_handling.py       <- Contains classes for handling h5 files
│   └── __init__.py          <- Makes face_rhythm a Python module
│
├── setup.py   <- Makes the project pip-installable (pip install -e .) so face_rhythm can be imported
├── LICENSE    <- License file
├── Makefile   <- Makefile with commands like `make data` or `make train`
├── README.md  <- The top-level README for developers using this project.
├── docs       <- A default Sphinx project; see sphinx-doc.org for details
└── tox.ini    <- tox file with settings for running tox; see tox.readthedocs.io


Project Directory Organization

Project Directory
├── config.yaml           <- Configuration parameters to run each module in the pipeline. Dictionary.
├── run_info.json         <- Output information from each module. Dictionary.
│
├── run_data              <- Output data from each module.
│   ├── Dataset_videos.h5 <- Output data from Dataset_videos class. Contains metadata about the videos.
│   ├── ROIs.h5           <- Output data from ROIs class. Contains ROI masks.
│   ├── PointTracker.h5   <- Output data from PointTracker class. Contains point tracking data.
│   ├── VQT_Analyzer.h5   <- Output data from VQT_Analyzer class. Contains spectral decomposition data.
│   └── TCA.h5            <- Output data from TCA class. Contains TCA decomposition data.
│
└── visualizations        <- Output visualizations.
    ├── factors_rearranged_[frequency].png  <- Example of a rearranged factor plot.
    └── point_tracking_demo.avi             <- Example video.
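Since config.yaml and run_info.json are plain dictionaries on disk, they can be inspected with standard tools. Below is a minimal sketch built around run_info.json; the module names and fields are illustrative examples, not face-rhythm's actual schema:

```python
# Write and read back a run_info.json-style dictionary.
# Contents are hypothetical examples, not real face-rhythm output.
import json
import tempfile
from pathlib import Path

project_dir = Path(tempfile.mkdtemp())  # stand-in for your project directory

example_info = {
    "Dataset_videos": {"num_videos": 2},
    "PointTracker": {"num_points": 1000},
}
(project_dir / "run_info.json").write_text(json.dumps(example_info, indent=2))

# Reading it back is a plain JSON load:
run_info = json.loads((project_dir / "run_info.json").read_text())
print("Modules with recorded output:", list(run_info))
```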

