Recognizing burial mounds with YOLO


burial-mounds-object-recognition

Finetuning object recognition models to recognize burial mounds.

Usage

This repo is built around YOLOv8 for detecting objects in satellite images. I chose to organize it as a CLI so that it is easy to get started in a completely fresh environment.

Installation

To use just the CLI for preprocessing and finetuning, you can install the package from PyPI:

pip install burial_mounds

Make sure to also install OpenCV on your computer. Here's how you would do that for Debian-based systems (which I have used):

sudo apt update && sudo apt install python3-opencv
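
If you want to verify that the bindings are available, a quick check from Python:

# Sanity check that the OpenCV Python bindings can be imported.
import cv2

print(cv2.__version__)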

Preprocessing

xView

The CLI includes code for preprocessing the xView dataset, which contains high-quality annotated satellite imagery formulated as an object detection task.

Download the dataset, and arrange it in the following folder structure:

- data/
    - xView/
        - train_images/
            - 10.tiff
            ...
        - xView_train.geojson

Then run the command:

python3 -m burial_mounds preprocess_xview --data_dir data/xView

This will convert all labels in the GeoJSON file to YOLO format and output a config file for YOLO training under configs/xview.yaml.
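
The converted labels follow the standard YOLO detection convention: one text file per image, with one "class x_center y_center width height" line per box, all coordinates normalized to [0, 1]. A minimal sketch for eyeballing a converted label file (the path below is an assumption about where the script writes its output):

from pathlib import Path

# Hypothetical path; adjust to wherever the preprocessing script writes labels.
label_file = Path("data/xView/labels/train/10.txt")

for line in label_file.read_text().splitlines():
    class_id, x_center, y_center, width, height = line.split()
    # All values are fractions of the image size, so they must lie in [0, 1].
    assert all(0.0 <= float(v) <= 1.0 for v in (x_center, y_center, width, height))
    print(f"class {class_id}: center=({x_center}, {y_center}), size=({width}, {height})")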

Burial Mounds

To preprocess the burial mounds dataset, you can also utilise the CLI.

The preprocessing pipeline splits the large GeoTIFF files into smaller images with annotations. The script can prepare data either for simple object detection or for OBB (oriented bounding box) training.

The preprocessing consists of the following steps:

  1. Splitting the large raster files into smaller windows (the --image_size parameter controls the window size).
  2. Min-max color normalization; steps 1 and 2 are sketched after this list.
  3. Producing bounding box labels in either oriented or axis-aligned format (the --format parameter).
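
For intuition, here is a minimal sketch of what steps 1 and 2 amount to, assuming the rasters are read with rasterio (the actual implementation lives in the CLI and may differ):

import numpy as np
import rasterio
from rasterio.windows import Window

def minmax_normalize(tile: np.ndarray) -> np.ndarray:
    """Rescale a raster window to the 0-255 uint8 range."""
    tile = tile.astype(np.float32)
    lo, hi = tile.min(), tile.max()
    scaled = (tile - lo) / (hi - lo + 1e-8)
    return (scaled * 255).astype(np.uint8)

image_size = 1024  # corresponds to the --image_size parameter

with rasterio.open("data/TRAP_Data/example.tif") as src:  # hypothetical file name
    for row in range(0, src.height - image_size + 1, image_size):
        for col in range(0, src.width - image_size + 1, image_size):
            # Read one window of the large GeoTIFF: shape (bands, rows, cols).
            window = src.read(window=Window(col, row, image_size, image_size))
            tile = minmax_normalize(window)
            # ...write the tile and its bounding box labels here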

To prepare the dataset for OBB:

python3 -m burial_mounds preprocess_mounds --data_dir data/TRAP_Data --out_dir data/mounds --image_size 1024 --format obb

For simple object detection:

python3 -m burial_mounds preprocess_mounds --data_dir data/TRAP_Data --out_dir data/mounds --image_size 640 --format detect
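
The two formats differ only in how a box is written to the label files: regular detection uses an axis-aligned box, while OBB stores the four corner points. Purely as an illustration (made-up values, all coordinates normalized to [0, 1]):

# detect: class x_center y_center width height
0 0.512 0.430 0.041 0.038

# obb: class x1 y1 x2 y2 x3 y3 x4 y4
0 0.497 0.412 0.534 0.419 0.528 0.451 0.491 0.444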

Finetuning

There are two types of models you can choose from for finetuning: models that have been trained for oriented bounding box (OBB) detection, and models trained for simple object detection.

OBB models have been pretrained on the DOTA satellite object recognition dataset and are therefore likely to perform better on satellite images, even without finetuning.

Finetuning also applies built-in data augmentation, which increases the robustness of the trained models.

Detection

These are the models that you can finetune on a detection task:

Model     Size (pixels)  mAP val 50-95  Speed CPU ONNX (ms)  Speed A100 TensorRT (ms)  Params (M)  FLOPs (B)
YOLOv8n   640            18.4           142.4                1.21                      3.5         10.5
YOLOv8s   640            27.7           183.1                1.40                      11.4        29.7
YOLOv8m   640            33.6           408.5                2.26                      26.2        80.6
YOLOv8l   640            34.9           596.9                2.43                      44.1        167.4
YOLOv8x   640            36.3           860.6                3.56                      68.7        260.6

These models have been pretrained on the Open Images V7 dataset, which covers a wide variety of object classes.

You can finetune an existing model on object detection with the finetune command. If you want to go down this route, I recommend finetuning on xView first, so that the model has seen satellite images before encountering the mound problem.

python3 -m burial_mounds finetune "yolov8n.pt" "configs/xview.yaml" --epochs 300 --image_size 640
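
Under the hood the command wraps Ultralytics' training API; it is roughly equivalent to the following sketch (not the package's exact code):

from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint and finetune it on xView.
model = YOLO("yolov8n.pt")
model.train(data="configs/xview.yaml", epochs=300, imgsz=640)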

OBB

These are the models that you can finetune on OBB detection:

Model        Size (pixels)  mAP test 50  Speed CPU ONNX (ms)  Speed A100 TensorRT (ms)  Params (M)  FLOPs (B)
YOLOv8n-obb  1024           78.0         204.77               3.57                      3.1         23.3
YOLOv8s-obb  1024           79.5         424.88               4.07                      11.4        76.3
YOLOv8m-obb  1024           80.5         763.48               7.61                      26.4        208.6
YOLOv8l-obb  1024           80.7         1278.42              11.83                     44.5        433.8
YOLOv8x-obb  1024           81.36        1759.10              13.23                     69.5        676.7

These models have been pretrained on the DOTA dataset, which contains satellite imagery, and are therefore more likely to perform well at mound recognition.

python3 -m burial_mounds finetune "yolov8n-obb.pt" "configs/mounds.yaml" --epochs 300 --image_size 1024

To run these finetuning scripts in the background (on UCloud, for instance), I recommend using nohup and storing the logs:

nohup python3 -m burial_mounds finetune "yolov8n-obb.pt" "configs/mounds.yaml" --epochs 300 --image_size 1024 &> "nano_mounds_finetune.log" &
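
Once training finishes, the best checkpoint can be loaded for a quick inference check. A sketch assuming Ultralytics' default output location (the actual run directory and image path will differ on your machine):

from ultralytics import YOLO

model = YOLO("runs/obb/train/weights/best.pt")  # default Ultralytics output path

results = model("data/mounds/images/val/example.png")  # hypothetical image
for result in results:
    # OBB models expose oriented boxes on `result.obb`;
    # plain detection models expose `result.boxes` instead.
    print(result.obb if result.obb is not None else result.boxes)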

Publishing

If you intend to publish a trained model to the Hugging Face Hub, you can use the push_to_hub command:

python3 -m burial_mounds push_to_hub --model_path "models/mounds_base-yolov8n_best.pt" --repo_id "chcaa/burial-mounds_yolo8n_obb"
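
The published weights can then be pulled back down with huggingface_hub; the filename below is an assumption about what push_to_hub uploads, so check the repo on the Hub for the actual file name.

from huggingface_hub import hf_hub_download
from ultralytics import YOLO

# Assumed weight file name; adjust to match the files actually uploaded.
weights = hf_hub_download(
    repo_id="chcaa/burial-mounds_yolo8n_obb",
    filename="best.pt",
)
model = YOLO(weights)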

