From 35b9c3d0e761afa4ae2d900c818e399c0d4e73d4 Mon Sep 17 00:00:00 2001
From: Alexey <AlexeyAB@users.noreply.github.com>
Date: Thu, 07 Jun 2018 11:15:10 +0000
Subject: [PATCH] Update Readme.md
---
README.md | 16 ++++++++++++++--
1 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/README.md b/README.md
index b56299e..541f799 100644
--- a/README.md
+++ b/README.md
@@ -159,6 +159,8 @@
5. If you have a GPU with Tensor Cores (nVidia Titan V / Tesla V100 / DGX-2 and later), to speed up Detection 3x and Training 2x:
`\darknet.sln` -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add here: `CUDNN_HALF;`
+
+ **Note:** CUDA must be installed only after MSVS2015 has been installed.
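+
+ As a rough illustration only (the pre-existing definitions depend on your project configuration, so everything here except `CUDNN_HALF;` is an assumption), the Preprocessor Definitions field might then read:
+
+ ```
+ CUDNN_HALF;OPENCV;CUDNN;GPU;WIN32;NDEBUG;_CONSOLE;%(PreprocessorDefinitions)
+ ```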
### How to compile (custom):
@@ -218,7 +220,7 @@
More information about training by the link: http://pjreddie.com/darknet/yolo/#train-voc
- **Note:** If during training you see `nan` values in some lines then training goes well, but if `nan` are in all lines then training goes wrong.
+ **Note:** If during training you see `nan` values in the `avg` (loss) field, then training has gone wrong, but if `nan` appears only in some other lines, then training is going well.
## How to train with multi-GPU:
@@ -317,7 +319,7 @@
* Also you can get a result earlier, before all 45000 iterations are completed.
- **Note:** If during training you see `nan` values in some lines then training goes well, but if `nan` are in all lines then training goes wrong.
+ **Note:** If during training you see `nan` values in the `avg` (loss) field, then training has gone wrong, but if `nan` appears only in some other lines, then training is going well.
### How to train tiny-yolo (to detect your custom objects):
@@ -415,12 +417,22 @@
`darknet.exe detector calc_anchors data/obj.data -num_of_clusters 9 -width 416 -height 416`
then set the same 9 `anchors` in each of the 3 `[yolo]`-layers in your cfg-file (see the `[yolo]`-layer sketch after this list)
+ * check that every object in your dataset is labeled - no object in your dataset may be left without a label. Most training issues are caused by wrong labels in the dataset (labels obtained with some conversion script, marked with a third-party tool, ...). Always check your dataset by using: https://github.com/AlexeyAB/Yolo_mark (the expected label format is sketched after this list)
+
+ * it is desirable that your training dataset include images with objects at different scales, rotations and lightings, from different sides, and on different backgrounds
+ * it is desirable that your training dataset include images with non-labeled objects that you do not want to detect - negative samples without bounding boxes (empty `.txt` files)
* for training with a large number of objects in each image, add the parameter `max=200` or a higher value in the last `[region]`-layer of your cfg-file
+ * for training on small objects - set `layers = -1, 11` instead of the value at https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L720
+ and set `stride=4` instead of the value at https://github.com/AlexeyAB/darknet/blob/6390a5a2ab61a0bdf6f1a9a6b4a739c16b36e0d7/cfg/yolov3.cfg#L717 (see the cfg sketch after this list)
+
+ * General rule - keep the relative size of objects in the Training and Testing datasets roughly the same (a short numeric example follows this list):
+
+ * `train_network_width * train_obj_width / train_image_width ~= detection_network_width * detection_obj_width / detection_image_width`
+ * `train_network_height * train_obj_height / train_image_height ~= detection_network_height * detection_obj_height / detection_image_height`
+
+ * to speed up training (at the cost of some detection accuracy) do Fine-Tuning instead of Transfer-Learning: set the parameter `stopbackward=1` in one of the penultimate convolutional layers before the 1st `[yolo]`-layer, for example here: https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L598 (see the sketch below)
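+
+ For the anchors recalculation above - a minimal sketch of where the `anchors` line lives inside a `[yolo]`-layer. The values shown are the stock yolov3.cfg anchors, used only as an example: replace them with the 9 values printed by `calc_anchors` for your own dataset, and set the same `anchors` line in all 3 `[yolo]`-layers (only `mask` differs between them).
+
+ ```
+ [yolo]
+ mask = 6,7,8
+ # example: stock yolov3 anchors - put the 9 values printed by calc_anchors here
+ anchors = 10,13, 16,30, 33,23, 30,61, 62,45, 59,119, 116,90, 156,198, 373,326
+ classes=80
+ num=9
+ ```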
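+
+ The label format that Yolo_mark produces (and that darknet expects) when you check your dataset: one `.txt` file per image, one line per object in the form `<object-class> <x_center> <y_center> <width> <height>`, with all coordinates given as floats relative to the image width and height. A hypothetical `img1.txt` with two objects might look like:
+
+ ```
+ 0 0.512 0.431 0.120 0.340
+ 1 0.255 0.690 0.080 0.150
+ ```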
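+
+ For the small-objects change above - a sketch of the two edited lines, assuming the linked lines (yolov3.cfg#L717 and #L720) sit in an `[upsample]` and a `[route]` section respectively; check the exact surrounding layers in your own copy of the cfg-file:
+
+ ```
+ [upsample]
+ # changed from the stock value at yolov3.cfg line 717
+ stride=4
+
+ [route]
+ # changed from the stock value at yolov3.cfg line 720
+ layers = -1, 11
+ ```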
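+
+ A short numeric example of that rule (the numbers are only an illustration): if you train at network width 416 on 1000 px wide images where a typical object is 100 px wide, then `train_network_width * train_obj_width / train_image_width = 416 * 100 / 1000 = 41.6`; to detect with the same 416-wide network on 2000 px wide images, objects should then be roughly 200 px wide, since `416 * 200 / 2000 = 41.6` as well.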
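+
+ For the Fine-Tuning tip above - a minimal sketch of a `[convolutional]` layer with `stopbackward=1` added. The other parameter values are placeholders: keep whatever is already in your cfg-file and only add the `stopbackward=1` line.
+
+ ```
+ [convolutional]
+ batch_normalize=1
+ filters=1024
+ size=3
+ stride=1
+ pad=1
+ activation=leaky
+ # backward pass stops here - layers before this one keep their pre-trained weights
+ stopbackward=1
+ ```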
2. After training - for detection:
--
Gitblit v1.10.0