
LabGym: quantifying user-defined behaviors


 


 

Identifies social behaviors in multi-individual interactions

 

Distinguishing different social roles of multiple similar-looking interacting individuals


 

Distinguishing different interactive behaviors among multiple animal-object interactions


 

Distinguishing different social roles of animals in the field with unstable recording environments


 

Identifies non-social behaviors

 

Identifying behaviors in diverse species in various recording environments


 

Identifying behaviors that involve no posture changes, such as cells 'changing color' and neurons 'firing'

alt text alt text

 

Quantifies each user-defined behavior

Computes a range of motion and kinematics parameters for each behavior. The parameters include count, duration, and latency of behavioral incidents, as well as speed, acceleration, distance traveled, and the intensity and vigor of motions during the behaviors. These parameters are output in spreadsheets.
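To make these parameters concrete, here is a minimal sketch of how speed, acceleration, and distance traveled can be derived from per-frame centroid coordinates. The array values, units, and formulas are assumptions for demonstration; LabGym computes its parameters internally and may define them differently.

```python
# Illustrative sketch only -- not LabGym's internal implementation.
import numpy as np

fps = 30                                   # assumed video frame rate
xy = np.array([[10.0, 5.0],                # hypothetical centroid (x, y) per frame, in mm
               [12.0, 6.0],
               [15.0, 8.0],
               [15.5, 8.2]])

step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # displacement between frames
distance_traveled = step.sum()                      # total path length (mm)
speed = step * fps                                  # instantaneous speed (mm/s)
acceleration = np.diff(speed) * fps                 # change in speed (mm/s^2)

print(distance_traveled, speed.max(), acceleration.max())
```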

Also provides visualizations of the analysis results, including annotated videos/images that visually mark each behavioral event, and temporal raster plots that show every behavioral event of every individual over time.
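To give a sense of what a temporal raster plot conveys, the sketch below draws one with matplotlib from made-up event times; it only mimics the style of LabGym's output.

```python
# Illustrative sketch only -- event times are invented for demonstration.
import matplotlib.pyplot as plt

# onset times (seconds) of one behavior, one row per individual
events = [[1.2, 3.5, 7.8],
          [0.9, 4.1],
          [2.0, 2.6, 6.3, 9.0]]

fig, ax = plt.subplots()
ax.eventplot(events, linelengths=0.8)      # one row of tick marks per individual
ax.set_xlabel('time (s)')
ax.set_ylabel('individual')
plt.show()
```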


 

A tutorial video for a high-level understanding of what LabGym can do and how it works:

Watch the video

Cite LabGym: LabGym 1.x & LabGym 2.x

 

How to use LabGym?

Overview

You can use LabGym via its user interface (no coding knowledge needed) or via the command prompt. See the Extended User Guide for details.

If the Extended User Guide is difficult to follow, see this Practical "How To" Guide, which uses plain language and examples.

 

Hover your mouse cursor over each button in the user interface to see a detailed description of it.


 

LabGym comprises three modules, each tailored to streamline the analysis process. Together, these modules create a cohesive workflow, enabling users to prepare, train, and analyze their behavioral data with accuracy and ease.

  1. 'Preprocessing Module': This module optimizes video footage for analysis. It can trim videos to focus solely on the necessary time windows, crop frames to remove irrelevant regions, enhance video contrast to make relevant details more discernible, reduce the video frame rate to speed up processing, or draw colored markers in videos to label specific locations. (A sketch of the kinds of transformations involved follows this list.)

  2. 'Training Module': Here, you can customize LabGym to your specific research needs. You can train a Detector in this module to detect the animals or objects of interest in videos/images, and train a Categorizer to recognize the specific behaviors that you define.

  3. 'Analysis Module': After customizing LabGym to your needs, you can use this module for automated behavioral analysis of videos/images. It not only outputs comprehensive analysis results but also mines those results for significant findings.
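Purely to illustrate the kinds of transformations the 'Preprocessing Module' applies (it does all of this through the user interface), here is a hypothetical OpenCV sketch that trims a video to a time window, crops each frame, enhances contrast, and halves the frame rate. The file names, coordinates, and parameter values are assumptions for demonstration, not LabGym's actual settings.

```python
# Illustrative sketch only -- LabGym's Preprocessing Module does this via its GUI.
import cv2

cap = cv2.VideoCapture('raw.avi')                    # hypothetical input video
fps = cap.get(cv2.CAP_PROP_FPS)
out = cv2.VideoWriter('preprocessed.avi',
                      cv2.VideoWriter_fourcc(*'MJPG'),
                      fps / 2,                       # reduced fps: faster analysis
                      (400, 300))

start, end = int(10 * fps), int(40 * fps)            # trim: keep seconds 10-40
frame_idx = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    if start <= frame_idx < end and frame_idx % 2 == 0:   # drop every other frame
        roi = frame[50:350, 100:500]                 # crop out irrelevant regions
        roi = cv2.convertScaleAbs(roi, alpha=1.3)    # simple contrast enhancement
        out.write(cv2.resize(roi, (400, 300)))
    frame_idx += 1

cap.release()
out.release()
```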

 

Usage Step 1: detect animals/objects

LabGym employs two distinct methods for detecting animals or objects in different scenarios.

 

1. Subtract background

This method is fast and accurate but requires stable illumination and a static background in the videos to be analyzed. It does not require training neural networks, but you need to define a time window during which the animals are in motion so that the background can be extracted effectively. A shorter time window leads to quicker processing; typically, a duration of 10 to 30 seconds is adequate.

How to select an appropriate time window for background extraction?

To determine the optimal time window for background extraction, consider the animal's movement throughout the video. In the 60-second example below, selecting a 20-second window in which the mouse moves frequently and covers different areas is ideal. The following three images are backgrounds extracted using the first, second, and last 20 seconds, respectively. During the first and last 20 seconds, the mouse mostly stays on one side and moves little, so the extracted backgrounds contain traces of the animal, which is not ideal. During the second 20 seconds, the mouse moves around frequently, and the extracted background is clean:

[Images: the example video, and the backgrounds extracted using the first, second, and last 20-second windows]
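A common way to implement this kind of background extraction is a per-pixel temporal median over the frames in the chosen window: a moving animal occupies any given pixel in only a minority of frames, so the median recovers the background without animal traces. Whether LabGym uses exactly this method is not stated here, so treat the sketch below (with a hypothetical file name and the 20-40 s window from the example) as illustrative.

```python
# Illustrative sketch only -- a median-based background extraction.
import cv2
import numpy as np

cap = cv2.VideoCapture('mouse.avi')            # hypothetical 60-second video
fps = cap.get(cv2.CAP_PROP_FPS)
start, end = int(20 * fps), int(40 * fps)      # the 'second 20 seconds' window

frames = []
cap.set(cv2.CAP_PROP_POS_FRAMES, start)
for _ in range(end - start):
    ret, frame = cap.read()
    if not ret:
        break
    frames.append(frame)
cap.release()

# per-pixel temporal median across the window
background = np.median(np.stack(frames), axis=0).astype(np.uint8)
cv2.imwrite('background.png', background)
```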

 

2. Use trained Detectors

This method incorporates Detectron2, offering more versatility than the 'Subtract background' method but at a slower processing speed. It excels at differentiating individual animals or objects even when they collide, which is particularly beneficial for the 'Interactive advanced' mode. To increase processing speed, use a GPU or reduce the frame size during analysis. To train a Detector in the 'Training Module':

  1. Click the ‘Generate Image Examples’ button to extract image frames from videos.
  2. Use a free online annotation tool such as Roboflow to annotate the outlines of the animals or objects in these images. Choose 'Instance Segmentation' as the annotation type and export the annotations in 'COCO instance segmentation' format, which generates a '*.json' file. Importantly, when generating a version of the dataset, do NOT apply preprocessing steps such as 'auto orient' or 'resize (stretch)'; instead, apply only augmentations that reflect variations likely to occur in your actual recordings.
  3. Use the 'Train Detectors' button to input the annotated images and begin training your Detectors. (A sketch of the underlying training workflow follows this list.)
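The 'Train Detectors' button handles all of this internally; for readers curious about what training involves, here is a hypothetical sketch of a typical Detectron2 instance-segmentation run on a COCO-format annotation file. The dataset name, file paths, class count, and hyperparameters are assumptions, not LabGym's actual settings.

```python
# Illustrative sketch only -- a standard Detectron2 training workflow.
import os
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# register the annotated images ('*.json' exported from e.g. Roboflow)
register_coco_instances('my_animals', {}, 'annotations.json', 'images/')

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml'))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml')  # fine-tune from COCO
cfg.DATASETS.TRAIN = ('my_animals',)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1     # e.g. one class: 'mouse'
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.MAX_ITER = 2000
cfg.OUTPUT_DIR = './detector_output'

os.makedirs(cfg.OUTPUT_DIR, exist_ok=True)
trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```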

 

Usage Step 2: identify and quantify behaviors

LabGym is equipped with four distinct modes of behavior identification to suit different scenarios.

 

1. Interactive advanced

This mode is for analyzing the behavior of every individual in a group of animals or objects, such as a finger 'holding' or 'offering' a peanut, a chipmunk 'taking' or 'loading' a peanut, and a peanut 'being held', 'being taken', or 'being loaded'.


To train a Categorizer for this mode, sort the behavior examples (Animations and Pattern Images) according to the behavior/social role of the 'main character', which is highlighted by a magenta-color-coded 'spotlight'. In the four pairs of behavior examples below, the behaviors are 'taking the offer', 'being taken', 'being held', and 'offering peanut', respectively.

[Four pairs of behavior examples: 'taking the offer', 'being taken', 'being held', and 'offering peanut']

 

2. Interactive basic

Optimized for speed, this mode treats the entire interactive group (two or more individuals) as a single entity, which shortens processing time compared to the 'Interactive advanced' mode. It is ideal for scenarios in which behavior within the group is uniform, or in which the actions of each individual member are not the primary focus of the study, such as 'licking' and 'attempted copulation' (where only the behaviors of the male fly need to be identified).


To train a Categorizer for this mode, sort the behavior examples (Animations and Pattern Images) according to the behaviors of the entire interacting group or of the individual of primary interest. In the three pairs of behavior examples below, the behaviors are 'orientating', 'singing while licking', and 'attempted copulation', respectively.

[Three pairs of behavior examples: 'orientating', 'singing while licking', and 'attempted copulation']

 

3. Non-interactive

This mode is for identifying the solitary behaviors of individuals that are not engaged in interactive activities.

alt text

To train a Categorizer for this mode, sort the behavior examples (Animations and Pattern Images) according to the behaviors of individual animals.


 

4. Static image

This mode is for identifying solitary behaviors of individuals in static images.

 

Installation

LabGym Zoo (trained models and training examples)

Reporting Issues

Changelog

Contributing
