Tools for classifying camera trap images
animl-py
AniML comprises a variety of machine learning tools for analyzing ecological data. This Python package includes a set of functions to classify subjects within camera trap field data and can handle both images and videos. This package is also available in R: animl
Table of Contents
- Installation
- Usage
Installation Instructions
It is recommended that you set up a conda environment for using animl. See Dependencies below for more detail. You will need to activate the conda environment each time you run AniML from a new terminal.
From GitHub
git clone https://github.com/conservationtechlab/animl-py.git
cd animl-py
conda env create --file environment.yml
conda activate animl-gpu
pip install -e .
From PyPI
With NVIDIA GPU
conda create -n animl-gpu python=3.7
conda activate animl-gpu
conda install cudatoolkit=11.3.1 cudnn=8.2.1
pip install animl
CPU only
conda create -n animl-cpu python=3.7
conda activate animl-cpu
pip install animl
Dependencies
We recommend running AniML on GPU-enabled hardware. **If using an NVIDIA GPU, ensure drivers, cuda-toolkit, and cudnn are installed. The /models/ and /utils/ modules are from the YOLOv5 repository: https://github.com/ultralytics/yolov5
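After installing the CUDA toolkit and PyTorch, you can confirm that PyTorch can see the GPU. This is a minimal sketch; it only reports CUDA availability and does not validate the specific driver or toolkit versions listed below:

```python
# Minimal check that PyTorch is installed and can see a CUDA-capable GPU.
try:
    import torch
    cuda_ok = torch.cuda.is_available()
    print("torch version:", torch.__version__)
    print("CUDA available:", cuda_ok)
except ImportError:
    cuda_ok = None
    print("PyTorch is not installed in this environment.")
```

If this prints `CUDA available: False` on a machine with an NVIDIA GPU, the driver or cudatoolkit install is the usual culprit.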
Python Package Dependencies
- pandas = 1.3.5
- tensorflow = 2.6
- torch = 1.13.1
- torchvision = 0.14.1
- numpy = 1.19.5
- cudatoolkit = 11.3.1 **
- cudnn = 8.2.1 **
A full list of dependencies can be found in environment.yml
Verify Install
We recommend you download the examples folder within this repository. Download and unarchive the zip folder. Then with the conda environment active:
python3 -m animl /path/to/example/folder
This should create an Animl-Directory subfolder within the example folder.
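The check above can also be scripted. The path below is illustrative and should point at wherever you unzipped the examples folder:

```python
from pathlib import Path

# Adjust to wherever you unzipped the examples folder (illustrative path).
example_dir = Path("examples")

# After `python3 -m animl` runs, results land in an Animl-Directory subfolder.
out_dir = example_dir / "Animl-Directory"
print(f"{out_dir} exists: {out_dir.exists()}")
```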
Usage
Inference
The functionality of animl can be broken down into its individual functions to suit your data and scripting needs. The sandbox.ipynb notebook has all of these steps available for further exploration.
- It is recommended that you use the animl working directory for storing intermediate steps.
from animl import file_management
workingdir = file_management.WorkingDirectory(imagedir)
- Build the file manifest of your given directory. This will find both images and videos.
files = file_management.build_file_manifest('/path/to/images', out_file = workingdir.filemanifest)
- If there are videos, extract individual frames for processing. Select either the number of frames or fps using the arguments. The other option can be set to None or removed.
from animl import video_processing
allframes = video_processing.images_from_videos(files, out_dir=workingdir.vidfdir,
out_file=workingdir.imageframes,
parallel=True, frames=3, fps=None)
- Pass all images into MegaDetector. We recommend MDv5a. parse_results.from_MD will merge detections with the original file manifest, if provided.
from animl import detectMD, megadetector, parse_results
detector = megadetector.MegaDetector('/path/to/mdmodel.pt')
mdresults = detectMD.detect_MD_batch(detector, allframes["Frame"], quiet=True)
mdres = parse_results.from_MD(mdresults, manifest=allframes, out_file = workingdir.mdresults)
- For speed and efficiency, extract the empty/human/vehicle detections before classification.
from animl import split
animals = split.getAnimals(mdres)
empty = split.getEmpty(mdres)
- Classify using the appropriate species model. Merge the output with the rest of the detections if desired.
import pandas as pd
from animl import classify, parse_results
classifier = classify.load_classifier('/path/to/classifier/')
predresults = classify.predict_species(animals, classifier, batch = 4)
animals = parse_results.from_classifier(animals, predresults, '/path/to/classlist.txt',
out_file=workingdir.predictions)
manifest = pd.concat([animals,empty])
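As a self-contained illustration of that final merge, the sketch below recombines toy animal and empty detections with pandas. The column names here are invented for the example and are not animl's actual output schema:

```python
import pandas as pd

# Toy stand-ins for the outputs of the split/classify steps above;
# column names are illustrative, not animl's actual schema.
animals = pd.DataFrame({
    "FilePath": ["img_001.jpg", "img_002.jpg"],
    "prediction": ["deer", "coyote"],
    "confidence": [0.91, 0.87],
})
empty = pd.DataFrame({
    "FilePath": ["img_003.jpg"],
    "prediction": ["empty"],
    "confidence": [0.99],
})

# Recombine into a single manifest and write it out for later review.
manifest = pd.concat([animals, empty], ignore_index=True)
manifest.to_csv("manifest.csv", index=False)

# Quick per-class summary of predictions.
print(manifest["prediction"].value_counts())
```

Keeping empty detections in the final manifest preserves a complete record of every image and frame that was processed.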
Training
Training workflows are available in the repo but still under development.