
A Python package for extracting and decomposing rhythmic facial movements from video.


Welcome to face-rhythm


What is face-rhythm?

A Python package that turns videos of facial or other behavior into a small set of interpretable behavioral components.

Why use face-rhythm?

  • Unsupervised. No labels, no model zoo.
  • Interpretable. Each component is a (space × frequency × time) factor you can plot and read off directly.

How to use it

Interactive notebooks: included in the repository (see Installation, step 4 below).

Command line for batch runs across many sessions:

python scripts/run_pipeline_basic.py --path_params params.json --directory_save /path/to/project/

scripts/params_pipeline_basic.json is a ready-to-edit template.
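For batch runs, the per-session paths in the template can also be filled programmatically. A minimal sketch using only the standard library; the key names follow scripts/params_pipeline_basic.json, and make_session_params is a hypothetical helper, not part of face-rhythm:

```python
import json

def make_session_params(template_path, directory_project, directory_videos, path_rois):
    """Load the params template and fill in the three per-session paths."""
    with open(template_path) as f:
        params = json.load(f)
    params["project"]["directory_project"] = directory_project
    params["paths_videos"]["directory_videos"] = directory_videos
    params["ROIs"]["initialize"]["path_file"] = path_rois
    return params
```

The filled dict can be written back out with json.dump and passed to run_pipeline_basic.py via --path_params, or handed directly to fr.pipelines.pipeline_basic (see Quick start below).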

Python API: see Quick start below, or the full API reference.

Installation

0. Requirements

1. Create a conda environment

conda create -n face_rhythm python=3.12
conda activate face_rhythm
python -m pip install --upgrade pip

Activate the env (conda activate face_rhythm) each time you use face-rhythm.

2. Install video packages

Linux:

conda install -c conda-forge 'torchcodec=*=cpu*' ffmpeg libstdcxx-ng

macOS:

conda install -c conda-forge 'torchcodec=*=cpu*' ffmpeg

Windows: skip this step. torchcodec does not officially support Windows; installing it often works, but is not guaranteed. Unless you need very fast GPU decoding, use the 'decord' backend instead.

3. Install face-rhythm

pip install face-rhythm

For headless servers, GPU acceleration, and installation troubleshooting, see the installation docs.

4. Clone the repo to get the notebooks

git clone https://github.com/RichieHakim/face-rhythm.git

Quick start

import json
import face_rhythm as fr

with open("params_pipeline_basic.json", "r") as f:
    params = json.load(f)

params["project"]["directory_project"] = "/path/to/new/project/"
params["paths_videos"]["directory_videos"] = "/path/to/videos/"
params["ROIs"]["initialize"]["path_file"] = "/path/to/ROIs.h5"

results = fr.pipelines.pipeline_basic(params)

Copy scripts/params_pipeline_basic.json as a template, edit the three paths, and run. Results land in the project directory as HDF5 files plus summary plots.

Upgrading

pip install --upgrade face-rhythm

To update the cloned notebooks/scripts: cd face-rhythm && git pull.

Pipeline at a glance

  1. Read the video frames (face_rhythm.helpers.BufferedVideoReader).
  2. Draw ROIs that pick (a) where to track and (b) what region to crop (face_rhythm.rois).
  3. Track a dense grid of points via optical flow (face_rhythm.point_tracking).
  4. Compute a spectrogram for each point's trajectory (face_rhythm.spectral_analysis).
  5. Factorize the (points × frequency × time) tensor with non-negative TCA (face_rhythm.decomposition).
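Steps 4 and 5 operate on a (points × frequency × time) tensor. A minimal NumPy sketch of how such a tensor can be built from tracked point trajectories with a sliding-window FFT; the window length, hop, and Hann taper are illustrative choices, not face-rhythm's actual parameters:

```python
import numpy as np

def trajectories_to_spectrogram_tensor(traj, win=64, hop=16):
    """Sliding-window FFT magnitude for each point's 1-D trajectory.

    traj: array of shape (n_points, n_frames)
    returns: non-negative tensor of shape (n_points, win // 2 + 1, n_windows)
    """
    n_points, n_frames = traj.shape
    window = np.hanning(win)  # taper each window to reduce spectral leakage
    starts = range(0, n_frames - win + 1, hop)
    spec = np.stack(
        [np.abs(np.fft.rfft(traj[:, s:s + win] * window, axis=1)) for s in starts],
        axis=2,  # stack successive windows along the time axis
    )
    return spec  # (points, frequency, time)

# Example: 10 points tracked over 256 frames
tensor = trajectories_to_spectrogram_tensor(np.random.randn(10, 256))
print(tensor.shape)  # (10, 33, 13)
```

Because the magnitudes are non-negative, the resulting tensor is directly suitable for non-negative tensor decomposition as in step 5.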

GPU acceleration (optional)

face-rhythm runs on CPU by default. Install the CPU setup above first.

PyTorch compute: set project.use_GPU: true in your params. Check CUDA with:

python -c "import torch; print(torch.cuda.is_available())"

OpenCV CUDA: build OpenCV plus opencv_contrib with CUDA enabled, then make sure that build is the cv2 imported in this env. Useful links: OpenCV CUDA build options and opencv_contrib.

NVDEC video decoding (experimental): on Linux/NVIDIA systems, install a CUDA build of torchcodec, then pass device='cuda' when constructing video readers:

conda install -c conda-forge 'torchcodec=*=cuda130*' ffmpeg libstdcxx-ng

Use cuda126*, cuda129*, or cuda130* to match your driver. Useful links: TorchCodec CUDA decoding and NVIDIA Video Codec SDK.

Citation

If you use face-rhythm in your research, please cite our preprint:

Hakim et al. (2025). Spectral envelopes of facial movements predict intention, cortical representations, and neural prosthetic control. bioRxiv. https://doi.org/10.1101/2025.09.10.675423

BibTeX and a machine-readable CITATION.cff are at the root of the repo.

Contributing

Bug reports, feature requests, and pull requests are welcome. Please open an issue before submitting substantial changes.

License

MIT — see LICENSE.


