Comparison chart of mAP (AP50); Yolo v3 paper: https://pjreddie.com/media/files/papers/YOLOv3.pdf

* Yolo v3 chart source for RetinaNet on MS COCO, taken from Table 1 (e): https://arxiv.org/pdf/1708.02002.pdf
* Yolo v2 on Pascal VOC 2007: https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg
* Yolo v2 on Pascal VOC 2012 (comp4): https://hsto.org/files/3a6/fdf/b53/3a6fdfb533f34cee9b52bdd9bb0b19d9.jpg

1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg darknet53.conv.74`

2. Then stop and, using the partially-trained model `/backup/yolov3-voc_1000.weights`, run training with multiple GPUs (up to 4): `darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg /backup/yolov3-voc_1000.weights -gpus 0,1,2,3`

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

## How to train (to detect your custom objects):
Training Yolo v3:

1. Create file `yolo-obj.cfg` with the same content as in `yolov3.cfg` (or copy `yolov3.cfg` to `yolo-obj.cfg`) and:

* https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L689
* https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L776

So if `classes=1` then it should be `filters=18`. If `classes=2` then write `filters=21`.

**(Do not write in the cfg-file: filters=(classes + 5)x3)**

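The filter counts above follow from the rule filters=(classes + 5)x3: each of the 3 anchors predicted per `[yolo]` layer outputs 4 box coordinates, 1 objectness score, and one score per class. A minimal sketch of the arithmetic (the function name is illustrative, not part of darknet):

```python
def yolo_filters(classes, coords=4, masks=3):
    # 3 anchor masks per [yolo] layer, each predicting:
    # 4 box coords + 1 objectness score + one score per class
    return (classes + coords + 1) * masks

print(yolo_filters(1))   # 18
print(yolo_filters(2))   # 21
print(yolo_filters(80))  # 255, the default in yolov3.cfg for MS COCO
```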

4. Put image-files (`.jpg`) of your objects in the directory `build\darknet\x64\data\obj\`

5. You should label each object in the images of your dataset. Use this visual GUI-software for marking bounding boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark

It will create a `.txt`-file for each `.jpg`-image-file, in the same directory and with the same name but with the `.txt`-extension, and put into that file the object number and object coordinates on this image, one line per object: `<object-class> <x> <y> <width> <height>`

Where:
* `<object-class>` - integer number of the object, from `0` to `(classes-1)`
* `<x> <y> <width> <height>` - float values relative to the width and height of the image; they can range over (0.0 to 1.0]
* for example: `<x> = <absolute_x> / <image_width>` or `<height> = <absolute_height> / <image_height>`
* attention: `<x> <y>` are the center of the rectangle (not the top-left corner)

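The conversion from absolute pixel coordinates to this format can be sketched as follows (the function name and the left/top/right/bottom box layout are illustrative assumptions, not part of darknet or Yolo_mark):

```python
def to_yolo_format(obj_class, left, top, right, bottom, image_width, image_height):
    """Convert an absolute pixel box (left, top, right, bottom) into a
    YOLO annotation line: class, relative box center, relative box size."""
    x = (left + right) / 2.0 / image_width    # box center x, relative
    y = (top + bottom) / 2.0 / image_height   # box center y, relative
    w = (right - left) / image_width          # box width, relative
    h = (bottom - top) / image_height         # box height, relative
    return f"{obj_class} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# A 138.5x106 px box centered at (458.75, 285) in a 640x720 image:
print(to_yolo_format(1, 389.5, 232, 528, 338, 640, 720))
# 1 0.716797 0.395833 0.216406 0.147222
```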
For example, for `img1.jpg` the file `img1.txt` will be created, containing:

```
1 0.716797 0.395833 0.216406 0.147222
```

* Also, you can get a result earlier, before all 45000 iterations are done.

**Note:** If during training you see `nan` values in some lines then training goes well, but if `nan` appears in all lines then training goes wrong.

### How to train tiny-yolo (to detect your custom objects):

Do all the same steps as for the full yolo model described above, with the following exceptions:
* Download the default weights file for yolov2-tiny-voc: http://pjreddie.com/media/files/yolov2-tiny-voc.weights
* Get pre-trained weights `yolov2-tiny-voc.conv.13` using the command: `darknet.exe partial cfg/yolov2-tiny-voc.cfg yolov2-tiny-voc.weights yolov2-tiny-voc.conv.13 13`
* Make your custom model `yolov2-tiny-obj.cfg` based on `cfg/yolov2-tiny-voc.cfg` instead of `yolov3.cfg`
* Start training: `darknet.exe detector train data/obj.data yolov2-tiny-obj.cfg yolov2-tiny-voc.conv.13`

For training Yolo based on other models ([DenseNet201-Yolo](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/densenet201_yolo.cfg) or [ResNet50-Yolo](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/resnet50_yolo.cfg)), you can download and get pre-trained weights as shown in this file: https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/partial.cmd
If you made a custom model that isn't based on other models, you can train it without pre-trained weights; random initial weights will be used.


* **mAP** (mean average precision) - the mean of the `average precision` values over all classes, where `average precision` for a class is the average of precision values at 11 points on the PR-curve, one for each possible detection threshold (Precision-Recall in terms of PascalVOC, where Precision=TP/(TP+FP) and Recall=TP/(TP+FN)), page 11: http://homepages.inf.ed.ac.uk/ckiw/postscript/ijcv_voc09.pdf

**mAP** is the default precision metric in the PascalVOC competition, and **it is the same as the AP50** metric in the MS COCO competition.
As defined on Wikipedia, the Precision and Recall indicators have a slightly different meaning than in the PascalVOC competition, but **IoU always has the same meaning**.

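The 11-point average precision described above can be sketched in plain Python (an illustrative helper, assuming you already have (recall, precision) points from the PR-curve; the VOC evaluation builds these points from ranked detections):

```python
def eleven_point_ap(pr_points):
    """pr_points: list of (recall, precision) pairs from the PR-curve.
    For each recall level r in {0.0, 0.1, ..., 1.0}, take the maximum
    precision over all points with recall >= r, then average the 11 values."""
    total = 0.0
    for i in range(11):
        r = i / 10.0
        candidates = [p for rec, p in pr_points if rec >= r]
        total += max(candidates) if candidates else 0.0
    return total / 11.0

# A perfect detector keeps precision 1.0 up to recall 1.0:
print(eleven_point_ap([(1.0, 1.0)]))  # 1.0
```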
`darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416`
then set the same 9 `anchors` in each of the 3 `[yolo]`-layers in your cfg-file

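`calc_anchors` clusters the (width, height) of all training boxes into 9 groups. A simplified k-means sketch of the idea, using plain Euclidean distance and a deterministic initialization for illustration (darknet's actual implementation differs in details):

```python
def kmeans_anchors(boxes, k=9, iterations=50):
    """boxes: list of (w, h) pairs, e.g. relative sizes scaled by 416.
    Returns k cluster centers usable as anchors."""
    boxes = sorted(boxes)
    # deterministic init: evenly spaced samples from the sorted boxes
    centers = [boxes[i * len(boxes) // k] for i in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            # assign each box to the nearest center (squared distance)
            j = min(range(k),
                    key=lambda ci: (w - centers[ci][0]) ** 2 + (h - centers[ci][1]) ** 2)
            clusters[j].append((w, h))
        for ci, cl in enumerate(clusters):
            if cl:  # move each center to the mean of its cluster
                centers[ci] = (sum(w for w, _ in cl) / len(cl),
                               sum(h for _, h in cl) / len(cl))
    return sorted(centers)
```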
* it is desirable that your training dataset includes images with objects at different scales, rotations, and lightings, from different sides, on different backgrounds

* it is desirable that your training dataset includes images with non-labeled objects that you do not want to detect - negative samples without bounding boxes

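For such negative samples the usual convention is an empty `.txt` label file next to the image. A small sketch to create one for every image that lacks annotations (the function name and `.jpg` extension are assumptions for illustration):

```python
import os

def add_empty_labels(image_dir, ext=".jpg"):
    """For every image without a label file, create an empty .txt so the
    image is used as a negative sample during training."""
    created = []
    for name in os.listdir(image_dir):
        if name.endswith(ext):
            txt = os.path.join(image_dir, name[:-len(ext)] + ".txt")
            if not os.path.exists(txt):
                open(txt, "w").close()  # empty file = no objects here
                created.append(txt)
    return created
```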
* for training with a large number of objects in each image, add the parameter `max=200` or a higher value in the last `[region]`-layer in your cfg-file
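As an illustration, the end of such a cfg-file section might look like this (the other lines of the section are omitted and shown as `...`; only the `max=200` line is the point here):

```
[region]
...
max=200
```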