* `DEBUG=1` to build debug version of Yolo
* `OPENMP=1` to build with OpenMP support to accelerate Yolo by using multi-core CPU
* `LIBSO=1` to build the library `darknet.so` and the runnable binary `uselib` that uses this library. You can run it like this: `LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib test.mp4`. For how to use this SO-library from your own code, see the C++ example: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp

  or pass the names file, cfg, weights and video file explicitly: `LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./uselib data/coco.names cfg/yolov3.cfg yolov3.weights test.mp4`
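
A minimal sketch of using the SO-library from your own C++ code, assuming the `Detector` class declared in `yolo_v2_class.hpp` from this repo (see `yolo_console_dll.cpp` above for the complete example; file names and compile flags are illustrative):

```cpp
// use_so.cpp - minimal sketch; build e.g.: g++ use_so.cpp -o use_so -I src -L . -ldarknet
// run with: LD_LIBRARY_PATH=./:$LD_LIBRARY_PATH ./use_so
#include "yolo_v2_class.hpp"   // Detector and bbox_t
#include <iostream>
#include <vector>

int main() {
    // load the network from cfg + weights (same files as used elsewhere in this README)
    Detector detector("cfg/yolov3.cfg", "yolov3.weights");

    // detect objects in an image file; 0.25 is the confidence threshold
    std::vector<bbox_t> boxes = detector.detect("data/dog.jpg", 0.25f);

    // print class id, confidence and pixel box for each detection
    for (const bbox_t &b : boxes)
        std::cout << "obj_id=" << b.obj_id << " prob=" << b.prob
                  << " box=(" << b.x << "," << b.y << "," << b.w << "," << b.h << ")\n";
    return 0;
}
```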
### How to compile on Windows:
**Note:** If during training you see `nan` values in the `avg` (loss) field, then training is going wrong; if `nan` appears only in some other fields, training is going well.

**Note:** If you changed `width=` or `height=` in your cfg-file, the new width and height must be divisible by 32.

**Note:** After training, use a command like this for detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights`

**Note:** If an `Out of memory` error occurs, then in the `.cfg`-file you should increase `subdivisions=16` to 32 or 64: [link](https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L4)
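
For example, a `[net]` section consistent with the two notes above (illustrative values; the linked cfg shows the full parameter list):

```
[net]
batch=64
# increase subdivisions 16 -> 32 -> 64 if `Out of memory` occurs
subdivisions=32
# width and height must be divisible by 32
width=608
height=608
```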

### How to train tiny-yolo (to detect your custom objects):

Do all the same steps as for the full YOLO model described above, with the following exceptions:
* check that every object in your dataset is labeled - no object in your dataset should be left without a label. Most training issues come from wrong labels in the dataset (labels produced by a conversion script, marked with a third-party tool, ...). Always check your dataset by using: https://github.com/AlexeyAB/Yolo_mark - a quick format check is sketched below
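
  Each image's `.txt` label file should contain one line per object in the darknet format `<object-class> <x_center> <y_center> <width> <height>`, with box coordinates relative to the image width/height, e.g. (illustrative values):

  ```
  0 0.716797 0.395833 0.216406 0.147222
  1 0.687109 0.379167 0.255469 0.158333
  ```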
* it is desirable that your training dataset include images with objects at different scales, rotations, lightings, from different sides, and on different backgrounds - you should preferably have 2000 different images for each class or more, and you should train for `2000*classes` iterations or more
* it is desirable that your training dataset include images with non-labeled objects that you do not want to detect - negative samples without bounding boxes (empty `.txt` files) - use as many negative-sample images as there are images with objects
* for training with a large number of objects in each image, add the parameter `max=200` or a higher value in the last `[region]` layer in your cfg-file, as sketched below
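
  For example, only the `max` line is added and the layer's existing parameters stay unchanged (in yolov3 cfgs the corresponding layer is `[yolo]`):

  ```
  [region]
  # ... existing [region] parameters stay as they are ...
  max=200
  ```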
* for training on small objects, set `layers = -1, 11` instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L720 and set `stride=4` instead of https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L717 - i.e. as in the sketch below
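
  A sketch of the changed lines (assuming the linked lines are the last `[upsample]`/`[route]` pair in yolov3.cfg; only these two values change):

  ```
  [upsample]
  stride=4

  [route]
  layers = -1, 11
  ```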
* General rule - your training dataset should include objects with the same relative sizes as those you want to detect (differing by no more than about 2x):
  * `train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width`
  * `train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height`
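
  A worked example with illustrative numbers: training a 416-wide network on 1920 px wide images with ~600 px wide objects gives a relative size of `416*600/1920 = 130`; detecting with a 416-wide network on 1280 px wide images then works best for objects around `130*1280/416 = 400` px wide.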
* to speed up training (with decreased detection accuracy) do Fine-Tuning instead of Transfer-Learning: set the param `stopbackward=1` in one of the penultimate convolutional layers before the 1st `[yolo]`-layer, for example here: https://github.com/AlexeyAB/darknet/blob/6d44529cf93211c319813c90e0c1adb34426abe5/cfg/yolov3.cfg#L548 - see the sketch below
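
  Only the `stopbackward` line is added to the chosen layer; everything else stays as it is:

  ```
  [convolutional]
  # ... existing layer parameters stay as they are ...
  stopbackward=1
  ```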

2. After training - for detection, use a command like the one from the note above:
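
   `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights` (substitute your own `.data`, `.cfg` and weights file names)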