animl-py 3.2.0

Tools for classifying camera trap images

AniML comprises a variety of machine learning tools for analyzing ecological data. This Python package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos. This package is also available in R: animl

Table of Contents

  1. Installation
  2. Usage
  3. Models

Installation Instructions

It is recommended that you set up a conda environment for using animl. See Dependencies below for more detail. You will have to activate the conda environment first each time you want to run AniML from a new terminal.
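For example (the environment name animl is just a placeholder; Python 3.12 matches the requirement listed under Dependencies):

conda create -n animl python=3.12
conda activate animl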

From GitHub

git clone https://github.com/conservationtechlab/animl-py.git
cd animl-py
pip install -e .

From PyPi

pip install animl

Dependencies

We recommend running AniML on GPU-enabled hardware. If using an NVIDIA GPU, ensure that drivers, the CUDA toolkit, and cuDNN are installed.

Python >= 3.12

PyTorch
Animl currently depends on torch >= 2.6.0. To enable GPU support, install the CUDA-enabled build of PyTorch.
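For example, a CUDA 12.4 build can be installed with pip (the index URL depends on your CUDA version; see pytorch.org for the command that matches your setup):

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu124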

Python Package Dependencies

  • dill>=0.4.0
  • numpy>=2.0.2
  • onnxruntime-gpu>=1.19.2
  • pandas>=2.2.2
  • pillow>=11.0.0
  • opencv-python>=4.12.0.88
  • scikit-learn>=1.5.2
  • timm>=1.0.9
  • torch>=2.6.0
  • torchvision>=0.21.0
  • tqdm>=4.66.5
  • ultralytics>=8.3.95
  • wget>=3.2

Verify Install

We recommend downloading the examples folder from this repository. Unarchive the zip file and then, with the conda environment active, run:

python -m animl /path/to/example/folder

This should create an Animl-Directory subfolder within the example folder.

Or, if using your own data/models, animl can be given the paths to those files directly:

python -m animl /example/folder --detector /path/to/megadetector --classifier /path/to/classifier --classlist /path/to/classlist.txt

You can use animl in this fashion on any image directory.

Finally, you can use the animl.yml config file to specify parameters:

python -m animl /path/to/animl.yml
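A minimal sketch of such a config, mirroring the command-line flags above (these key names are illustrative assumptions, not the confirmed schema; consult the animl.yml shipped with the repository):

# hypothetical animl.yml sketch -- key names are assumptions, not the confirmed schema
imagedir: /path/to/images
detector: /path/to/megadetector.pt
classifier: /path/to/classifier.pt
classlist: /path/to/classlist.txt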

Usage

Inference

The functionality of animl can be broken down into its individual functions to suit your data and scripting needs. The sandbox.ipynb notebook has all of these steps available for further exploration.

  1. It is recommended that you use the animl WorkingDirectory for storing intermediate steps.
import animl
workingdir = animl.WorkingDirectory('/path/to/save/data')
  2. Build the file manifest of your given directory. This will find both images and videos.
files = animl.build_file_manifest('/path/to/images', out_file=workingdir.filemanifest, exif=True)
  3. If there are videos, extract individual frames for processing. Select either the number of frames or the fps using the arguments; the other option can be set to None or omitted.
allframes = animl.extract_frames(files, frames=3, out_file=workingdir.imageframes, parallel=True)
  4. Pass all images into MegaDetector. We recommend MDv5a. The function parse_detections will convert the json to a pandas DataFrame and merge detections with the original file manifest, if provided.
detector = animl.load_detector('/path/to/mdmodel.pt', model_type="mdv5", device='cuda:0')
mdresults = animl.detect(detector, allframes, resize_width=animl.MEGADETECTORv5_SIZE, resize_height=animl.MEGADETECTORv5_SIZE, 
                         letterbox=True, file_col="frame", device='cuda:0', checkpoint_path=workingdir.mdraw, quiet=True)
detections = animl.parse_detections(mdresults, manifest=allframes, out_file=workingdir.detections)
  5. For speed and efficiency, extract the empty/human/vehicle detections before classification.
animals = animl.get_animals(detections)
empty = animl.get_empty(detections)
  6. Classify using the appropriate species model. Merge the output with the rest of the detections if desired.
classifier, class_list = animl.load_classifier('/path/to/model', '/path/to/classlist.txt', device='cuda:0')
raw_predictions = animl.classify(classifier, animals, resize_width=480, resize_height=480, 
                                 file_col="filepath", batch_size=4, out_file=workingdir.predictions)
  7. Apply labels from the class list, with or without utilizing timestamp-based sequences.
manifest = animl.single_classification(animals, empty, raw_predictions, class_list['class'])

or, after defining a station column (see the sketch following this list),

manifest = animl.sequence_classification(animals,
                                         empty, 
                                         raw_predictions,
                                         class_list['class'],
                                         station_col='station',
                                         empty_class="",
                                         sort_columns=None,
                                         file_col="filepath",
                                         maxdiff=60)
  8. (OPTIONAL) Save the required columns of the pandas DataFrame to csv, then use it to create json for TimeLapse compatibility.
csv_loc = animl.export_timelapse(manifest, imagedir, only_animal=True)
animl.export_megadetector(manifest, out_file="final_result.json", detector='MegaDetector v5a')
  9. (OPTIONAL) Create symlinks within a given directory for file browser access.
manifest = animl.export_folders(manifest, out_dir=workingdir.linkdir, out_file=workingdir.results)
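As referenced in step 7, sequence_classification expects a station column identifying the camera site. A minimal sketch for deriving one, assuming each station's images live in their own subdirectory (the path layout is an assumption about your data, not something animl requires):

# Assumes filepaths like /path/to/images/<station>/image.jpg -- adjust for your layout
animals['station'] = animals['filepath'].apply(lambda p: p.split('/')[-2])
empty['station'] = empty['filepath'].apply(lambda p: p.split('/')[-2])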

Training

Training workflows are still under development. Please submit Issues as you encounter them.

  1. Assuming a file manifest of training data with species labels, first split the data into training, validation, and test sets. This function splits each label proportionally by the given percentages, by default 0.7 training, 0.2 validation, 0.1 test.
train, val, test, stats = animl.train_val_test(manifest, out_dir='path/to/save/data/', label_col="species",
                                               val_size=0.2, test_size=0.1, random_state=42)
  2. Set up the training configuration file, specifying the paths to the data splits from the previous step. See the config README.

  3. (Optional) Update train.py to include an MLOps connection.

  4. Using the config file, begin training:

python -m animl.train --config /path/to/config.yaml

Every 10 epochs (or at a custom 'checkpoint_frequency'), the model will be checkpointed to the directory named by the 'experiment_folder' parameter in the config file; each checkpoint contains performance metrics for model selection.

  5. Testing of a model checkpoint can be done with the test.py module. Add an 'active_model' parameter to the config file that contains the path of the checkpoint to test. This will produce a confusion matrix for the test dataset as well as a csv containing predicted and ground-truth labels for each image.
python -m animl.test --config /path/to/config.yaml
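For reference, the three config parameters named above might look like the following fragment (paths are placeholders; the full set of required keys is covered in the config README):

# config.yaml fragment -- only the keys named above; paths are placeholders
experiment_folder: /path/to/experiments/run01
checkpoint_frequency: 10
active_model: /path/to/experiments/run01/checkpoint.pt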

Models

The Conservation Technology Lab has several models available for use. You can use the download function within animl or access them here:

animl.download_model(animl.CLASSIFIER['SDZWA_Andes_v1'], out_dir='models/')
