Streamlined, modular, multi-source YOLO training & detection pipeline for research.
AVIAN
Automated Visual Inference & Activity Network
An open-source, automated animal-behavior detection pipeline.
Overview
AVIAN (1.1.17) is a research-oriented, Ultralytics-based pipeline designed to make custom deep-learning model training & behavioral detection accessible to field & laboratory researchers.
Supports:
- Multi-source real-time inference (video & live camera feeds).
- Structured logging of detections, interactions, & per-frame aggregate statistics.
- Automatic metadata extraction for precise timestamping of video & camera sources.
- Full configurability & modular design for research reproducibility.
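As a rough illustration of metadata-aware timestamping (a hypothetical sketch, not AVIAN's actual implementation), a per-frame wall-clock timestamp can be derived from a source's start time & frame rate:

```python
from datetime import datetime, timedelta

def frame_timestamp(start: datetime, frame_idx: int, fps: float) -> datetime:
    """Derive the wall-clock time of a frame from the source's start time.

    `start` would come from video/camera metadata; `fps` from the stream.
    """
    return start + timedelta(seconds=frame_idx / fps)

start = datetime(2024, 5, 1, 12, 0, 0)
print(frame_timestamp(start, 900, 30.0))  # frame 900 at 30 fps = 30 s in
```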
This project remains open-source & under active development as part of an undergraduate research initiative. Contributions & feedback are always welcome!
Features
Model Training
- Supports transfer learning from an existing model or training from scratch.
- Automatically exports training metrics to:
  - Weights & Biases (W&B)
  - `quick-summary.txt` (local lightweight summary)
- Supports aggressive data augmentation & auto-detection of new data for retraining.
Detection Pipeline
- Multi-threaded inference across multiple sources (camera feeds & videos).
- Metadata-aware timestamping for accurate temporally-aligned measurements.
- Centralized message handling using a unified `DetectUI` for all info, warnings, errors, & save confirmations.
- Robust exception handling for model initialization, frame errors, & I/O failures.
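The multi-source, multi-threaded design can be pictured as one worker thread per source sharing a stop signal & a results queue. This is a simplified stdlib sketch under assumed names (`run_source`, placeholder frame lists); AVIAN's real workers run model inference where the comment indicates:

```python
import threading
import queue

def run_source(name: str, frames, results: queue.Queue, stop: threading.Event):
    """Worker: process frames from one source until exhausted or stopped."""
    for idx, frame in enumerate(frames):
        if stop.is_set():
            break
        # A real worker would run model inference on `frame` here.
        results.put((name, idx, frame))

results: queue.Queue = queue.Queue()
stop = threading.Event()
sources = {"usb0": ["f0", "f1"], "video1": ["g0"]}  # stand-in frame lists
threads = [
    threading.Thread(target=run_source, args=(n, fr, results, stop))
    for n, fr in sources.items()
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results.qsize())  # 3 detections queued across both sources
```

Setting the shared `stop` event (e.g., on Ctrl+C) lets every source thread wind down cleanly before CSVs are saved.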
Classes & Configuration
- The pipeline uses user-defined class configurations:
  - `FOCUS_CLASSES`: primary subjects (e.g., animal species)
  - `CONTEXT_CLASSES`: contextual or environmental elements (e.g., feeders, water trays, etc.)
- Class lists are stored in & managed through `classes_config.yaml` within the config folder, allowing for easy modification without editing code.
Here is an example of a default classes config YAML file:
```yaml
FOCUS_CLASSES:
  - F
  - M
  - Feeder
  - Main_Perch
  - Nesting_Box
  - Sky_Perch
  - Wooden_Perch
CONTEXT_CLASSES: []
```
A model's class list is extracted directly from its model.pt file to prepare a `classes_config.yaml` file, which will be located under the `/configs/<model_name>` path.
This allows a class list to be divided between focus & context classes, which simplifies output statistics & terminal logs.
This setup designates context classes as objects that focus classes interact with, giving context to those interactions.
Please ensure that the `[]` is removed if defining context classes!
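For instance, moving the perch & feeder classes into `CONTEXT_CLASSES` (note the `[]` placeholder is removed once the list is populated) might look like this; the split shown here is illustrative, not a shipped default:

```yaml
FOCUS_CLASSES:
  - F
  - M
CONTEXT_CLASSES:
  - Feeder
  - Main_Perch
  - Nesting_Box
  - Sky_Perch
  - Wooden_Perch
```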
Measurement System
- Data collection is centralized in a single helper utility that handles:
  - Passive presence: current class counts logged on a user-defined time interval.
  - Interval-level aggregation: raw detection tracks logged within a user-defined time interval.
  - Session summaries.
  - Interaction tracking (focus vs. context classes).
  - Motion.
- Exports structured `.csv` summaries: `counts.csv`, `average_counts.csv`, `interval_results.csv`, `session_summary.csv`, & `interactions.csv`.
- Supports automatic calculation of ratios (e.g., M:F) & normalized detection rates.
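As an illustration of the ratio calculation, here is a hypothetical sketch that assumes `counts.csv` has one column per class (the real column layout may differ):

```python
import csv
import io

# Stand-in for a counts.csv with per-interval class counts (assumed layout).
COUNTS_CSV = """timestamp,M,F,Feeder
12:00:00,2,1,1
12:00:05,3,2,1
"""

def mf_ratio(csv_text: str) -> float:
    """Total male count divided by total female count across all intervals."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    males = sum(int(r["M"]) for r in rows)
    females = sum(int(r["F"]) for r in rows)
    return males / females

print(mf_ratio(COUNTS_CSV))  # 5 males / 3 females
```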
Directory and Output Structure
Integrates a clean, timestamped log structure for both camera feeds & videos:
Camera sources:
```
/AVIAN/logs/<model_name>/measurements/camera-feed/<usb>/<system_timestamp>/measurements/
├── recordings/
│   └── <usb>.mp4
└── scores/
    ├── <usb>_metadata.json
    ├── counts/
    │   ├── counts.csv
    │   ├── average_counts.csv
    │   ├── frame_counts.csv
    │   └── session_summary.csv
    ├── interactions/
    │   └── interactions.csv
    └── motion/
        ├── motion_counts.csv
        ├── motion_intensity.csv
        └── motion_prevalence.csv
```
Video sources:
```
/AVIAN/logs/<model_name>/measurements/video-in/<video>/<video_timestamp>/measurements/
├── recordings/
│   └── <video>.mp4
└── scores/
    ├── source_metadata.json
    ├── frame-data/
    │   ├── interval_results.csv
    │   └── session_summary.csv
    ├── counts/
    │   ├── counts.csv
    │   └── average_counts.csv
    └── interactions/
        └── interactions.csv
```
- Folder names are automatically sanitized to avoid filesystem errors.
- Each source has its own isolated measurement subdirectory.
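Folder-name sanitization along these lines can be done by replacing characters that are unsafe on common filesystems. This is a hypothetical sketch of the idea (`sanitize_folder_name` is an assumed name, not AVIAN's exact rule set):

```python
import re

def sanitize_folder_name(name: str) -> str:
    """Replace characters that are invalid in Windows/POSIX folder names."""
    cleaned = re.sub(r'[<>:"/\\|?*]', "_", name)
    # Trailing dots/spaces are also invalid on Windows; guard empty results.
    return cleaned.strip(" .") or "unnamed"

print(sanitize_folder_name('video: "take 1"?'))  # video_ _take 1__
```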
Terminal UI
Note that AVIAN is a headless detection pipeline, meaning that live display windows will not appear while running inference. Instead, the terminal logs & tracks initiation, FPS, & basic statistics.
Here is an example of what to expect from the terminal:
```
Detection
----------------
model: <model>
<video>: Frames:-- | FPS:-- | Time:-- | ETA:--
class1:-
class2:-
OBJECTS:-
<usb>: Frames:-- | FPS:-- | Time:--
class1:-
class2:-
OBJECTS:-
------------------------------------------------------------------------------------------------
model: <model>
info: Loaded 6 classes: ['class1', 'class2', 'class3', 'class4', 'class5', 'class6']
info: Recording initialized at mm/dd/yyyy hh:mm:ss
info: Source '<video>' completed.
------------------------------------------------------------------------------------------------
exit: Stop signal received. Terminating pipeline...
exit: Saving CSV spreadsheets...
------------------------------------------------------------------------------------------------
model: <model>
save: Measurements for <video>
save: Measurements saved to: "measurements/video-in/<video>/<video_timestamp>/scores"
- <video>.mp4
- <video>_metadata.json
- counts.csv
- average_counts.csv
- interval_results.csv
- session_summary.csv
- interactions.csv
save: Measurements for <usb>
save: Measurements saved to: "measurements/camera-feeds/<usb>/<system_timestamp>/scores"
- <usb>.mp4
- <usb>_metadata.json
- counts.csv
- average_counts.csv
- interval_results.csv
- session_summary.csv
- interactions.csv
exit: All detection threads safely terminated.
```
Default example model trained on 7 classes:
- `M` (male Passer domesticus)
- `F` (female Passer domesticus)
- `Feeder`
- `Main_Perch`
- `Wooden_Perch`
- `Sky_Perch`
- `Nesting_Box`
The model was trained using this pipeline & has been used for primary testing purposes.
In particular, this model is intended for tracking & logging basic behavioral attributes of captive Passer domesticus subjects influenced by different intestinal microbial communities over an individual's development.
To be clear, the model is still in development & is included so users can see a custom model trained through the pipeline.
Installation
1. Install MiniConda or Conda:
https://www.anaconda.com/docs/getting-started/miniconda/main
https://www.anaconda.com/download
2. Create & activate an environment:
```
conda create -n AVIAN python=3.10
conda activate AVIAN
```
3. Install the package:
```
pip install avian-cv
```
Prerequisites
- Must use `Python 3.10` or older.
- Keep in mind that training & detection have entirely separate system requirements.
- A computer with a relatively powerful CPU, or with a CUDA-enabled GPU, is required.
Execution
Initiate Training
- Transfer-learning by default:
  `avian train`
  Option to specify weights from either an OBB or standard YOLO model:
  `avian train model=(yolo11n, yolo11l-obb, yolov8m, etc.)`
  This defaults to `YOLO11n.pt` if not specified.
  Option to name the model:
  `avian train name="my awesome run!!"`
  Option to specify a dataset within the data folder:
  `avian train data="my awesome dataset!!"`
  This defaults to the most recent dataset within the `/data` folder.
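Picking the most recent dataset could work roughly like this (a sketch assuming datasets are subdirectories of the data folder, selected by modification time; the actual selection logic may differ):

```python
from pathlib import Path
import os
import tempfile

def most_recent_dataset(data_dir: Path) -> Path:
    """Return the subdirectory of data_dir with the newest modification time."""
    subdirs = [p for p in data_dir.iterdir() if p.is_dir()]
    if not subdirs:
        raise FileNotFoundError(f"no datasets in {data_dir}")
    return max(subdirs, key=lambda p: p.stat().st_mtime)

# Demo with a throwaway directory structure.
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "old_set").mkdir()
    (root / "new_set").mkdir()
    os.utime(root / "old_set", (0, 0))  # force an older mtime
    print(most_recent_dataset(root).name)  # new_set
```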
- Train a model only from a custom dataset, with the option to specify either an OBB or standard YOLO architecture:
  `avian train architecture=(yolo11, yolo12, yolov8-obb, etc.)`
  This defaults to `YOLO11.yaml` if not specified.
- A debug mode for testing the training operation:
  `avian train test`
- Process Label-Studio export folders:
  `avian train labelstudio="my awesome export!!"`
NOTE: Many of these options can be combined, so here are a few examples:
`avian train labelstudio=geckos model=yolo11m architecture=customgeckomodel`
`avian train data=geckos model=yolo12 test`
Initiate Detection
- Defaults to the most recently trained model & starts on usb0:
  `avian detect`
- Initiate multiple sources in parallel:
  `avian detect usb0 usb1 "video1.type" "video2.type"`
- Run inference using an official YOLO model or a custom model:
  `avian detect model=(yolo11, yolo12, yolov8-obb, etc.)`
- A debug mode that routes to a debug model:
  `avian detect test`