PyTorch-YOLOv3
A minimal PyTorch implementation of YOLOv3, with support for training, inference and evaluation.
Installation
Installing from source
For normal training and evaluation we recommend installing the package from source using a poetry virtual environment.
```bash
git clone https://github.com/eriklindernoren/PyTorch-YOLOv3
cd PyTorch-YOLOv3/
pip3 install poetry --user
poetry install
```
You need to enter the virtual environment by running `poetry shell` in this directory before running any of the following commands without the `poetry run` prefix.
Also have a look at the other installation method below if you want to use the commands everywhere without opening a Poetry shell.
Download pretrained weights
```bash
./weights/download_weights.sh
```
Download COCO
```bash
./data/get_coco_dataset.sh
```
Install via pip
This installation method is recommended if you want to use this package as a dependency in another Python project.
Note that this method only includes the code and is less isolated, so it may conflict with other packages.
Weights and the COCO dataset need to be downloaded as stated above.
See the API section below for further information regarding the package's API.
It also enables the CLI tools `yolo-detect`, `yolo-train`, and `yolo-test` everywhere without any additional commands.
```bash
pip3 install pytorchyolo --user
```
Test
Evaluates the model on the COCO test dataset. To download this dataset as well as the weights, see above.
```bash
poetry run yolo-test --weights weights/yolov3.weights
```
| Model | mAP (min. 50 IoU) |
| --- | --- |
| YOLOv3 608 (paper) | 57.9 |
| YOLOv3 608 (this impl.) | 57.3 |
| YOLOv3 416 (paper) | 55.3 |
| YOLOv3 416 (this impl.) | 55.5 |
Inference
Uses pretrained weights to make predictions on images. The table below shows inference times for input images scaled to 256x256. The ResNet backbone measurements are taken from the YOLOv3 paper. The Darknet-53 measurement marked "this impl." shows the inference time of this implementation on my 1080ti card.
| Backbone | GPU | FPS |
| --- | --- | --- |
| ResNet-101 | Titan X | 53 |
| ResNet-152 | Titan X | 37 |
| Darknet-53 (paper) | Titan X | 76 |
| Darknet-53 (this impl.) | 1080ti | 74 |
```bash
poetry run yolo-detect --images data/samples/
```
Train
For argument descriptions, have a look at `poetry run yolo-train --help`.
Example (COCO)
To train on COCO using a Darknet-53 backbone pretrained on ImageNet, run:

```bash
poetry run yolo-train --data config/coco.data --pretrained_weights weights/darknet53.conv.74
```
TensorBoard
Track training progress in TensorBoard:
- Initialize training
- Run the command below
- Go to http://localhost:6006/
```bash
poetry run tensorboard --logdir='logs' --port=6006
```
Storing the logs on a slow drive may lead to a significant decrease in training speed.
You can adjust the log directory using `--logdir <path>` when running `tensorboard` and `yolo-train`.
Train on Custom Dataset
Custom model
Run the command below to create a custom model definition, replacing `<num-classes>` with the number of classes in your dataset.

```bash
./config/create_custom_model.sh <num-classes>  # Will create custom model 'yolov3-custom.cfg'
```
Classes
Add class names to `data/custom/classes.names`. This file should have one row per class name.
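For example, a two-class dataset could use a `classes.names` file like the following (the class names here are just placeholders):

```
cat
dog
```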
Image Folder
Move the images of your dataset to `data/custom/images/`.
Annotation Folder
Move your annotations to `data/custom/labels/`. The dataloader expects the annotation file corresponding to the image `data/custom/images/train.jpg` to have the path `data/custom/labels/train.txt`. Each row in the annotation file should define one bounding box, using the syntax `label_idx x_center y_center width height`. The coordinates should be scaled to `[0, 1]`, and the `label_idx` should be zero-indexed and correspond to the row number of the class name in `data/custom/classes.names`.
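As an illustration of this format, here is a minimal sketch (not part of this package; the function name and values are made up for the example) that converts a pixel-space box into such an annotation row:

```python
# Sketch: convert a pixel-space bounding box (x_min, y_min, x_max, y_max)
# to the YOLO label format "label_idx x_center y_center width height"
# with coordinates scaled to [0, 1].
def to_yolo_row(label_idx, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2 / img_w
    y_center = (y_min + y_max) / 2 / img_h
    width = (x_max - x_min) / img_w
    height = (y_max - y_min) / img_h
    return f"{label_idx} {x_center:.6f} {y_center:.6f} {width:.6f} {height:.6f}"

# Example: class 0, a 200x100 px box with top-left corner (50, 80) in a 640x480 image
print(to_yolo_row(0, 50, 80, 250, 180, 640, 480))
# -> 0 0.234375 0.270833 0.312500 0.208333
```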
Define Train and Validation Sets
In `data/custom/train.txt` and `data/custom/valid.txt`, add paths to the images that will be used as training and validation data respectively.
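If you do not already have these lists, a simple sketch like the following could generate them (the 90/10 split, glob pattern, and seed are arbitrary choices, not part of this package):

```python
# Sketch: write data/custom/train.txt and data/custom/valid.txt
# from the images in data/custom/images/.
import glob
import random

paths = sorted(glob.glob("data/custom/images/*.jpg"))
random.seed(42)  # make the split reproducible
random.shuffle(paths)

n_valid = max(1, int(0.1 * len(paths)))  # hold out ~10% for validation
with open("data/custom/valid.txt", "w") as f:
    f.write("\n".join(paths[:n_valid]) + "\n")
with open("data/custom/train.txt", "w") as f:
    f.write("\n".join(paths[n_valid:]) + "\n")
```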
Train
To train on the custom dataset, run:

```bash
poetry run yolo-train --model config/yolov3-custom.cfg --data config/custom.data
```
Add `--pretrained_weights weights/darknet53.conv.74` to train using a backbone pretrained on ImageNet.
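For reference, `config/custom.data` is a Darknet-style data config that ties the pieces above together. It typically looks like the following (the values shown are illustrative; `classes` must match your dataset):

```
classes=2
train=data/custom/train.txt
valid=data/custom/valid.txt
names=data/custom/classes.names
```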
API
You can import the modules of this repo in your own project if you install the pip package `pytorchyolo`.

An example prediction call from a simple OpenCV Python script looks like this:
```python
import cv2
from pytorchyolo import detect, models

# Load the YOLO model
model = models.load_model(
    "<PATH_TO_YOUR_CONFIG_FOLDER>/yolov3.cfg",
    "<PATH_TO_YOUR_WEIGHTS_FOLDER>/yolov3.weights")

# Load the image as a numpy array
img = cv2.imread("<PATH_TO_YOUR_IMAGE>")

# Convert OpenCV BGR to RGB
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Run the YOLO model on the image
boxes = detect.detect_image(model, img)

print(boxes)
# Output will be a numpy array in the following format:
# [[x1, y1, x2, y2, confidence, class]]
```
For more advanced usage, have a look at the methods' docstrings.
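As a follow-up usage example, the returned array can be visualized with OpenCV. This is a minimal sketch building on the snippet above (the output filename and drawing style are arbitrary):

```python
import cv2

# Continue from the snippet above: `img` is the RGB image and `boxes`
# is the numpy array [[x1, y1, x2, y2, confidence, class], ...].
img_bgr = cv2.cvtColor(img, cv2.COLOR_RGB2BGR)  # back to BGR for OpenCV drawing
for x1, y1, x2, y2, conf, cls in boxes:
    cv2.rectangle(img_bgr, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
    cv2.putText(img_bgr, f"{int(cls)} {conf:.2f}", (int(x1), int(y1) - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("detections.jpg", img_bgr)
```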
Credit
YOLOv3: An Incremental Improvement
Joseph Redmon, Ali Farhadi
Abstract
We present some updates to YOLO! We made a bunch
of little design changes to make it better. We also trained
this new network that’s pretty swell. It’s a little bigger than
last time but more accurate. It’s still fast though, don’t
worry. At 320 × 320 YOLOv3 runs in 22 ms at 28.2 mAP,
as accurate as SSD but three times faster. When we look
at the old .5 IOU mAP detection metric YOLOv3 is quite
good. It achieves 57.9 AP50 in 51 ms on a Titan X, compared
to 57.5 AP50 in 198 ms by RetinaNet, similar performance
but 3.8× faster. As always, all the code is online at
https://pjreddie.com/yolo/.
[Paper] [Project Webpage] [Authors' Implementation]
```
@article{yolov3,
  title={YOLOv3: An Incremental Improvement},
  author={Redmon, Joseph and Farhadi, Ali},
  journal={arXiv},
  year={2018}
}
```