FERAL: Feature Extraction for Recognition of Animal Locomotion
Direct video-to-behavior segmentation — no tracking, no pose estimation.
FERAL (Feature Extraction for Recognition of Animal Locomotion) is an open-source video-understanding toolkit that automatically segments animal behavior directly from raw video, without pose tracking or object detection.
FERAL leverages a foundation video model (V-JEPA2) fine-tuned with an attention-based pooling head to produce frame-level behavioral labels and interpretable ethograms across species, experimental setups, and recording modalities.
Overview
The pipeline converts: Raw videos → Spatiotemporal features → Frame-resolved behavioral categories
FERAL is designed for ease of use and reproducibility:
- Fully self-contained Google Colab notebook (no installation)
- Modular command-line and Python API
Quick Start (Google Colab)
The easiest way to run FERAL is directly in your browser.
Launch FERAL on Google Colab
Recommended: Colab Pro with an A100 or L4 GPU (free for academics with institutional email)
This notebook provides end-to-end execution:
- Video re-encoding, annotation conversion, and dataset validation
- Training on labeled videos
- Inference and ethogram visualization
- Export of predictions as .json
No installation, driver setup, or local environment configuration required.
Manual Installation
If you prefer to run locally or on your own cluster:
```shell
git clone https://github.com/Skovorp/feral.git
cd feral
pip install -r requirements.txt
```
Requirements
- Linux, macOS, or Windows
- Python ≥ 3.10
- PyTorch 2.4 + CUDA 12.4
- NVIDIA GPU with Ampere architecture or newer (compute capability ≥ 8.0, ≥ 24 GB VRAM recommended). FERAL uses bfloat16 and flash attention, which require Ampere+. Older GPUs like V100 (Volta) and T4 (Turing) — including free Google Colab T4 instances — will not work. Supported GPUs include: A100, A10, L4, L40, RTX 3000/4000/5000 series, and newer.
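The compute-capability requirement can be checked programmatically before launching a run. A minimal sketch (the helper name is ours, not part of FERAL; on a real machine you would pass the tuple returned by `torch.cuda.get_device_capability()`):

```python
def supports_feral(compute_capability):
    """Return True if a GPU's (major, minor) compute capability meets
    FERAL's Ampere+ requirement (>= 8.0), which is needed for bfloat16
    and flash attention."""
    return tuple(compute_capability) >= (8, 0)

# A100 reports (8, 0) -> supported; T4 reports (7, 5) -> too old
```

This is why free Colab T4 instances fail: tuple comparison puts `(7, 5)` below the `(8, 0)` Ampere cutoff.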
Windows notes
- `triton-windows` is installed automatically via `requirements.txt` as a drop-in replacement for the official `triton` package, which has no Windows build.
- PyTorch 2.8–2.9 has a known bug on Windows where `torch.compile` crashes with `OverflowError: Python int too large to convert to C long` (pytorch#162430). Use PyTorch 2.7 or ≥ 2.10 to avoid this. See issue #11 for details.
Dataset Preparation
FERAL expects:
- A folder of re-encoded videos (.mp4, 256 × 256 px)
- A single annotation JSON mapping each frame to a behavioral category
Place both in the same directory.
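For a quick local sanity check before uploading, the expected layout can be verified with a short script. This helper and the annotation schema it assumes (video name → frame index → label) are illustrative, not FERAL's authoritative format; use the Dataset Validator on getferal.ai for the real check:

```python
import json
from pathlib import Path

def check_dataset(data_dir):
    """Lightweight check of the layout FERAL expects: a folder of
    re-encoded .mp4 clips plus exactly one annotation JSON in the
    same directory.  Schema details here are assumptions."""
    data_dir = Path(data_dir)
    videos = sorted(data_dir.glob("*.mp4"))
    labels = sorted(data_dir.glob("*.json"))
    if not videos:
        raise FileNotFoundError(f"no .mp4 files found in {data_dir}")
    if len(labels) != 1:
        raise FileNotFoundError("expected exactly one annotation JSON")
    return videos, json.loads(labels[0].read_text())
```

Running this before training catches the two most common mistakes: forgetting to re-encode videos into the directory, or leaving multiple JSON files alongside them.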
You can validate dataset structure using our built-in Dataset Validator on getferal.ai.
Training and Inference
Run training
```shell
python run.py path_to_videos path_to_labels.json
```
Monitor metrics in W&B
FERAL automatically logs:
- Validation raster plots (val_raster_plot, ema_val_raster_plot)
- Mean Average Precision (mAP) per class
- EMA vs. non-EMA metrics
- Frame-level ethograms
Outputs
After training, predictions are saved in:
`answers/_inference_{run_name}_{timestamp}.json`
Each file contains frame-level predicted behaviors suitable for ethogram plotting or downstream analysis.
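For ethogram plotting, the per-frame predictions are usually collapsed into bouts (contiguous runs of the same behavior). A sketch that assumes the predictions decode to a plain list of frame labels (FERAL's actual JSON schema may differ):

```python
def frames_to_bouts(frame_labels):
    """Collapse a per-frame label sequence into (behavior, start, end)
    bouts -- the run-length form used for ethogram raster plots.
    Frame indices are 0-based and inclusive."""
    bouts = []
    for i, label in enumerate(frame_labels):
        if bouts and bouts[-1][0] == label:
            # Extend the current bout to this frame
            bouts[-1] = (label, bouts[-1][1], i)
        else:
            # A new behavior starts here
            bouts.append((label, i, i))
    return bouts

# frames_to_bouts(["walk", "walk", "groom"])
# -> [("walk", 0, 1), ("groom", 2, 2)]
```

Bout lists like this plug directly into downstream analyses such as bout-duration histograms or transition matrices.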
Example Datasets
FERAL has been validated on multiple datasets:
- CalMS21 – mouse social interactions
- MaBE – multi-species benchmark (mice, beetles, ants, flies)
- C. elegans – locomotor states (forward/reverse/turn/pause)
- Ooceraea biroi – self vs. allogrooming and collective raids
Access details and converters are documented at getferal.ai.
Deployment Options
- Google Colab (recommended) — fastest setup for single-GPU runs
- Local training — custom datasets, full control
- RunPod / Cluster deploy — scalable multi-GPU fine-tuning (see website guide)
Step-by-step deployment guides for each environment are available in the documentation.
Citation
Please contact us at jacopo.razza@gmail.com or peter.skovorodnikov@gmail.com for instructions on how to cite our work.
Authors
Peter Skovorodnikov† (Rockefeller University) and Jacopo Razzauti† (Vosshall Lab, Rockefeller University; Price Family Center for the Social Brain)
† Equal contribution
Contact: jacopo.razza@gmail.com | peter.skovorodnikov@gmail.com