Face-Rhythm
A pipeline for analysis of facial behavior using optical flow.
Learn more at https://face-rhythm.readthedocs.io/
Installation
0. Requirements
- Operating system:
- Ubuntu >= 18.04 (other Linux distributions usually work but are not actively maintained)
- Windows >= 10
- macOS >= 12
- Anaconda or Miniconda.
- If using Linux/Unix: GCC >= 5.4.0, ideally == 9.2.0. Search for installation instructions for your operating system. Check your version with:
gcc --version
- Optional: a CUDA-compatible NVIDIA GPU and drivers. A GPU can speed up the TCA step, but is not necessary.
- The below commands should be run in the terminal (Mac/Linux) or Anaconda Prompt (Windows).
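To check the optional GPU requirement, a small Python sketch can report whether a CUDA device is visible. This assumes the environment uses PyTorch for GPU acceleration of the TCA step; the pipeline still runs on the CPU if no GPU is found:

```python
# Report whether a CUDA-capable GPU is visible to PyTorch.
# If PyTorch is not installed yet, or no GPU is found, the
# TCA step simply runs on the CPU instead.
try:
    import torch
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch not installed; TCA will run on CPU.")
```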
1. Clone this repo
git clone https://github.com/RichieHakim/face-rhythm/
cd face-rhythm
2. Create a conda environment
conda env create --file environment.yml
This step will create a conda environment named face_rhythm. Activate it:
conda activate face_rhythm
3. Run the setup script
pip install -e .
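As a quick sanity check of the editable install, you can ask Python whether the package is importable. A minimal sketch, assuming the package installs under the name face_rhythm (matching the conda environment name):

```python
import importlib.util

def is_installed(module_name: str) -> bool:
    """Return True if the named module can be found by Python's importer."""
    return importlib.util.find_spec(module_name) is not None

# After `pip install -e .`, this should print True
# (the import name "face_rhythm" is an assumption here):
print(is_installed("face_rhythm"))
```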
Usage
1. Create a "project directory" where we will save intermediate files, videos, and config files.
This project directory should ideally be outside of the repo, and you'll create a new one each time
you analyze a new dataset. You may want to save a copy of the .ipynb file you use for the run there.
cd directory/where/you/want/to/save/your/project
mkdir face_rhythm_run
2. Copy the interactive notebook to your project directory
We recommend copying the interactive notebook from your face-rhythm repository to your project folder each time you make a new project. This will allow you to have one notebook per project, which will keep your analyses from potentially conflicting if you run different datasets through the same notebooks.
cp /path/to/face-rhythm/repo/face-rhythm/notebooks/interactive_pipeline_basic.ipynb /path/to/project/face_rhythm_run/
interactive_pipeline_basic.ipynb is a basic demo notebook that runs through the entire pipeline.
See the notebooks/other folder for notebooks demonstrating other kinds of analyses. These are more experimental and are subject to change as we develop new analyses.
3. Open up jupyter notebook! The plots display better using Jupyter Notebook than Jupyter Lab or VSCode.
jupyter notebook
If you run into a kernel error at this stage and are a Windows user, check out:
https://jupyter-notebook.readthedocs.io/en/stable/troubleshooting.html#pywin32-issues
Navigate to your folder containing your interactive notebook and launch it by clicking on it!
Repository Organization
face-rhythm
├── notebooks               <- Jupyter notebooks containing the main pipeline and some demos.
│   ├── basic_face_rhythm_notebook.ipynb  <- Main pipeline notebook.
│   └── demo_align_temporal_factors.ipynb <- Demo notebook for aligning temporal factors.
│
├── face-rhythm             <- Source code for use in this project.
│   ├── project.py          <- Contains methods for project directory organization and preparation
│   ├── data_importing.py   <- Contains classes for importing data (like videos)
│   ├── rois.py             <- Contains classes for defining regions of interest (ROIs) to analyze
│   ├── point_tracking.py   <- Contains classes for tracking points in videos
│   ├── spectral_analysis.py <- Contains classes for spectral decomposition
│   ├── decomposition.py    <- Contains classes for TCA decomposition
│   ├── utils.py            <- Contains utility functions for face-rhythm
│   ├── visualization.py    <- Contains classes for visualizing data
│   ├── helpers.py          <- Contains general helper functions (non-face-rhythm specific)
│   ├── h5_handling.py      <- Contains classes for handling h5 files
│   └── __init__.py         <- Makes the source directory a Python module
│
├── setup.py                <- Makes the project pip-installable (pip install -e .) so the package can be imported
├── LICENSE                 <- License file
├── Makefile                <- Makefile with commands like `make data` or `make train`
├── README.md               <- The top-level README for developers using this project.
├── docs                    <- A default Sphinx project; see sphinx-doc.org for details
└── tox.ini                 <- tox file with settings for running tox; see tox.readthedocs.io
Project Directory Organization
Project Directory
├── config.yaml             <- Configuration parameters to run each module in the pipeline. Dictionary.
├── run_info.json           <- Output information from each module. Dictionary.
│
├── run_data                <- Output data from each module.
│   ├── Dataset_videos.h5   <- Output data from the Dataset_videos class. Contains metadata about the videos.
│   ├── ROIs.h5             <- Output data from the ROIs class. Contains ROI masks.
│   ├── PointTracker.h5     <- Output data from the PointTracker class. Contains point tracking data.
│   ├── VQT_Analyzer.h5     <- Output data from the VQT_Analyzer class. Contains spectral decomposition data.
│   └── TCA.h5              <- Output data from the TCA class. Contains TCA decomposition data.
│
└── visualizations          <- Output visualizations.
    ├── factors_rearranged_[frequency].png <- Example of a rearranged factor plot.
    └── point_tracking_demo.avi            <- Example video.
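The two bookkeeping files at the top of the project directory can be read back with standard tools. A minimal sketch using PyYAML and the standard library; the file names come from the layout above, but the helper function (and any key names inside the dictionaries) are illustrative assumptions:

```python
import json
from pathlib import Path

import yaml  # PyYAML

def load_project_state(project_dir):
    """Load the config (YAML) and run-info (JSON) dictionaries
    kept at the top level of a face-rhythm project directory."""
    project_dir = Path(project_dir)
    config = yaml.safe_load((project_dir / "config.yaml").read_text())
    run_info = json.loads((project_dir / "run_info.json").read_text())
    return config, run_info

# Example usage: see which pipeline modules have configuration
# entries and which have recorded outputs so far.
# config, run_info = load_project_state("face_rhythm_run")
# print(sorted(config), sorted(run_info))
```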