From d9b744298eb5cddce6a9ed3388070f1e578d82ac Mon Sep 17 00:00:00 2001
From: Alexey <AlexeyAB@users.noreply.github.com>
Date: Sun, 01 Apr 2018 11:06:23 +0000
Subject: [PATCH] Update Readme.md
---
README.md | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/README.md b/README.md
index 0830e12..44680ab 100644
--- a/README.md
+++ b/README.md
@@ -219,11 +219,12 @@
1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg darknet53.conv.74`
-2. Then stop and by using partially-trained model `/backup/yolo-voc_1000.weights` run training with multigpu (up to 4 GPUs): `darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg /backup/yolo-voc_1000.weights -gpus 0,1,2,3`
+2. Then stop, and using the partially-trained model `/backup/yolov3-voc_1000.weights`, run training with multi-GPU (up to 4 GPUs): `darknet.exe detector train data/voc.data cfg/yolov3-voc.cfg /backup/yolov3-voc_1000.weights -gpus 0,1,2,3`
https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ
## How to train (to detect your custom objects):
+Training Yolo v3
1. Create file `yolo-obj.cfg` with the same content as in `yolov3.cfg` (or copy `yolov3.cfg` to `yolo-obj.cfg`) and:
@@ -238,7 +239,7 @@
* https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L689
* https://github.com/AlexeyAB/darknet/blob/0039fd26786ab5f71d5af725fc18b3f521e7acfd/cfg/yolov3.cfg#L776
- So if `classes=1` then should be `filters=18`. If `classes=2` then write `filters=31`.
+ So if `classes=1` then it should be `filters=18`. If `classes=2` then write `filters=21`.
**(Do not write in the cfg-file: filters=(classes + 5)x3)**
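The filters rule above can be sketched as a small helper; the function name is illustrative and not part of darknet:

```python
# Compute the value to write for `filters=` in the [convolutional] layers
# just before each [yolo] layer. Per the rule above, each of the 3 anchors
# per scale predicts 4 box coordinates + 1 objectness score + `classes`
# class scores. Write the computed number into the cfg, not the formula.

def yolo_filters(classes, anchors_per_scale=3):
    return (classes + 5) * anchors_per_scale

print(yolo_filters(1))   # 18
print(yolo_filters(2))   # 21
print(yolo_filters(80))  # 255 (the default 80-class COCO model)
```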
@@ -268,15 +269,17 @@
4. Put image-files (.jpg) of your objects in the directory `build\darknet\x64\data\obj\`
-5. Create `.txt`-file for each `.jpg`-image-file - in the same directory and with the same name, but with `.txt`-extension, and put to file: object number and object coordinates on this image, for each object in new line: `<object-class> <x> <y> <width> <height>`
+5. You should label each object on the images from your dataset. Use this visual GUI software for marking bounding boxes of objects and generating annotation files for Yolo v2 & v3: https://github.com/AlexeyAB/Yolo_mark
+
+It will create a `.txt` file for each `.jpg` image file, in the same directory and with the same name but with a `.txt` extension, and put into that file, for each object on a new line: the object number and object coordinates on this image: `<object-class> <x> <y> <width> <height>`
Where:
* `<object-class>` - integer number of object from `0` to `(classes-1)`
- * `<x> <y> <width> <height>` - float values relative to width and height of image, it can be equal from 0.0 to 1.0
+ * `<x> <y> <width> <height>` - float values relative to the width and height of the image; each can be in the range (0.0, 1.0]
* for example: `<x> = <absolute_x> / <image_width>` or `<height> = <absolute_height> / <image_height>`
 * attention: `<x> <y>` - are the center of the rectangle (not the top-left corner)
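The conversion described in the bullets above can be sketched as follows; the function and variable names are illustrative assumptions, not part of darknet or Yolo_mark:

```python
# Convert an absolute pixel box (top-left corner, width, height) into the
# YOLO annotation line described above. <x> <y> are the *center* of the
# rectangle, and all four values are normalized by the image dimensions.

def to_yolo_line(obj_class, left, top, box_w, box_h, img_w, img_h):
    x = (left + box_w / 2) / img_w   # center x, relative
    y = (top + box_h / 2) / img_h    # center y, relative
    w = box_w / img_w                # width, relative
    h = box_h / img_h                # height, relative
    return f"{obj_class} {x:.6f} {y:.6f} {w:.6f} {h:.6f}"

# A 100x80 box with top-left corner at (50, 40) in a 400x300 image:
print(to_yolo_line(1, 50, 40, 100, 80, 400, 300))
# 1 0.250000 0.266667 0.250000 0.266667
```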
- For example for `img1.jpg` you should create `img1.txt` containing:
+ For example, for `img1.jpg` the file `img1.txt` will be created, containing:
```
1 0.716797 0.395833 0.216406 0.147222
@@ -305,6 +308,8 @@
* Also, you can get a result earlier than after all 45000 iterations.
+ **Note:** If during training you see `nan` values in some lines, then training is going well; but if `nan` values appear in all lines, then training has gone wrong.
+
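The `nan` rule of thumb above can be sketched as a quick log check; the log format and function name here are illustrative assumptions, not darknet's exact output:

```python
# Rough health check of a training log based on the note above: a few
# `nan` loss lines may be fine, but `nan` on every loss line means the
# training has diverged. Loss lines are assumed to contain "avg".

def nan_verdict(log_lines):
    loss_lines = [ln for ln in log_lines if "avg" in ln]
    if not loss_lines:
        return "no loss lines found"
    nan_count = sum("nan" in ln.lower() for ln in loss_lines)
    if nan_count == 0:
        return "ok"
    if nan_count < len(loss_lines):
        return "some nan - probably still ok"
    return "all nan - training has diverged"

log = [
    "1: 1543.21, 1543.21 avg, 0.000100 rate",
    "2: nan, nan avg, 0.000100 rate",
    "3: 1320.55, 1480.90 avg, 0.000100 rate",
]
print(nan_verdict(log))  # some nan - probably still ok
```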
### How to train tiny-yolo (to detect your custom objects):
Do all the same steps as for the full YOLO model described above, with the following exceptions:
--
Gitblit v1.10.0