From a6cbaeecde40f91ddc3ea09aa26a03ab5bbf8ba8 Mon Sep 17 00:00:00 2001
From: AlexeyAB <alexeyab84@gmail.com>
Date: Wed, 15 Mar 2017 20:39:18 +0000
Subject: [PATCH] Added DLL (dynamic link library) support - yolo_cpp_dll.dll

---
 README.md |   94 ++++++++++++++++++++++++++++++++++++++++++-----
 1 files changed, 84 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index 7b7abdb..ac5240a 100644
--- a/README.md
+++ b/README.md
@@ -1,8 +1,20 @@
-|  ![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png) | &nbsp; ![map_fps](https://cloud.githubusercontent.com/assets/4096485/21550284/88f81b8a-ce09-11e6-9516-8c3dd35dfaa7.jpg) https://arxiv.org/abs/1612.08242 |
+# Yolo-Windows v2
+
+1. [How to use](#how-to-use)
+2. [How to compile](#how-to-compile)
+3. [How to train (Pascal VOC Data)](#how-to-train-pascal-voc-data)
+4. [How to train (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)
+5. [When should I stop training](#when-should-i-stop-training)
+6. [How to improve object detection](#how-to-improve-object-detection)
+7. [How to mark bounding boxes of objects and create annotation files](#how-to-mark-bounding-boxes-of-objects-and-create-annotation-files)
+
+|  ![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png) | &nbsp; ![map_fps](https://hsto.org/files/a24/21e/068/a2421e0689fb43f08584de9d44c2215f.jpg) https://arxiv.org/abs/1612.08242 |
+|---|---|
+
+|  ![Darknet Logo](http://pjreddie.com/media/files/darknet-black-small.png) | &nbsp; ![map_fps](https://hsto.org/files/978/a64/7ca/978a647caaee40b7b0a64f7770f11e99.jpg) https://arxiv.org/abs/1612.08242 |
 |---|---|
 
 
-# Yolo-Windows v2
 # "You Only Look Once: Unified, Real-Time Object Detection (version 2)"
 A yolo windows version (for object detection)
 
@@ -62,8 +74,8 @@
 1. Download an mjpeg-stream app for your Android phone: IP Webcam or Smart WebCam
 
 
- Smart WebCam - preferably: https://play.google.com/store/apps/details?id=com.acontech.android.SmartWebCam
- IP Webcam: https://play.google.com/store/apps/details?id=com.pas.webcam
+    * Smart WebCam - preferably: https://play.google.com/store/apps/details?id=com.acontech.android.SmartWebCam2
+    * IP Webcam: https://play.google.com/store/apps/details?id=com.pas.webcam
 
 2. Connect your Android phone to computer by WiFi (through a WiFi-router) or USB
 3. Start Smart WebCam on your phone
@@ -76,7 +88,9 @@
 
 ### How to compile:
 
-1. If you have MSVS 2015, CUDA 8.0 and OpenCV 2.4.9 (with paths: `C:\opencv_2.4.9\opencv\build\include` & `C:\opencv_2.4.9\opencv\build\x64\vc14\lib`), then start MSVS, open `build\darknet\darknet.sln`, set **x64** and **Release**, and do the: Build -> Build darknet
+1. If you have MSVS 2015, CUDA 8.0 and OpenCV 2.4.9 (with paths: `C:\opencv_2.4.9\opencv\build\include` & `C:\opencv_2.4.9\opencv\build\x64\vc12\lib` or `vc14\lib`), then start MSVS, open `build\darknet\darknet.sln`, set **x64** and **Release**, and then do: Build -> Build darknet
+
+  1.1. Find the files `opencv_core249.dll`, `opencv_highgui249.dll` and `opencv_ffmpeg249_64.dll` in `C:\opencv_2.4.9\opencv\build\x64\vc12\bin` or `vc14\bin` and put them next to `darknet.exe`
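+
+  For example, from the command line (just a sketch - adjust the path to your OpenCV and compiler version; `darknet.exe` is built into the `build\darknet\x64` directory): `copy C:\opencv_2.4.9\opencv\build\x64\vc14\bin\opencv_*249*.dll build\darknet\x64\`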
 
 2. If you have another version of CUDA (not 8.0), then open `build\darknet\darknet.vcxproj` with Notepad, find the 2 places with "CUDA 8.0" and change them to your CUDA version, then do step 1
 
@@ -139,7 +153,12 @@
 
 1. Download pre-trained weights for the convolutional layers (76 MB): http://pjreddie.com/media/files/darknet19_448.conv.23 and put it into the directory `build\darknet\x64`
 
-2. Download The Pascal VOC Data and unpack it to directory `build\darknet\x64\data\voc`: http://pjreddie.com/projects/pascal-voc-dataset-mirror/ will be created file `voc_label.py` and `\VOCdevkit\` dir
+2. Download the Pascal VOC Data and unpack it into the directory `build\darknet\x64\data\voc`; this will create the dir `build\darknet\x64\data\voc\VOCdevkit\`:
+    * http://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
+    * http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
+    * http://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar
+    
+    2.1. Download the file `voc_label.py` into the dir `build\darknet\x64\data\voc`: http://pjreddie.com/media/files/voc_label.py
 
 3. Download and install Python for Windows: https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe
 
@@ -166,7 +185,7 @@
 1. Create a file `yolo-obj.cfg` with the same content as in `yolo-voc.cfg` (or copy `yolo-voc.cfg` to `yolo-obj.cfg`) and:
 
   * change line `classes=20` to your number of objects
-  * change line `filters=425` to `filters=(classes + 5)*5` (generally this depends on the `num` and `coords`, i.e. equal to `(classes + coords + 1)*num`)
+  * change line #224 from [`filters=125`](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.cfg#L224) to `filters=(classes + 5)*5` (generally this depends on `num` and `coords`, i.e. it is equal to `(classes + coords + 1)*num`)
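+
+  (a quick check of the formula: with the defaults `num=5` and `coords=4` from `yolo-voc.cfg`, 2 classes give `filters=(2 + 5)*5 = (2 + 4 + 1)*5 = 35`)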
 
   For example, for 2 objects, your file `yolo-obj.cfg` should differ from `yolo-voc.cfg` in such lines:
 
@@ -192,7 +211,7 @@
 
 4. Put image-files (.jpg) of your objects in the directory `build\darknet\x64\data\obj\`
 
-5. Create `.txt`-file for each `.jpg`-image-file - with the same name, but with `.txt`-extension, and put to file: object number and object coordinates on this image, for each object in new line: `<object-class> <x> <y> <width> <height>`
+5. Create a `.txt`-file for each `.jpg`-image-file - in the same directory and with the same name, but with the `.txt`-extension - and put into it the object number and object coordinates on this image, one object per line: `<object-class> <x> <y> <width> <height>`
 
   Where: 
   * `<object-class>` - integer number of object from `0` to `(classes-1)`
@@ -224,15 +243,70 @@
 
  * After each 1000 iterations you can stop and later start training from this point. For example, after 2000 iterations you can stop training, and later just copy `yolo-obj_2000.weights` from `build\darknet\x64\backup\` to `build\darknet\x64\` and start training using: `darknet.exe detector train data/obj.data yolo-obj.cfg yolo-obj_2000.weights`
 
- * Also you can get result earlier than all 45000 iterations, for example, usually sufficient 2000 iterations for each class(object). I.e. for 6 classes to avoid overfitting - you can stop training after 12000 iterations and use `yolo-obj_12000.weights` to detection.
+ * Also you can get a result earlier than after all 45000 iterations.
  
+## When should I stop training:
+
+Usually 2000 iterations are sufficient for each class (object). But for a more precise definition of when you should stop training, use the following guide:
+
+1. During training, you will see varying indicators of error, and you should stop when **0.060730 avg** no longer decreases:
+
+  > Region Avg IOU: 0.798363, Class: 0.893232, Obj: 0.700808, No Obj: 0.004567, Avg Recall: 1.000000,  count: 8
+  > Region Avg IOU: 0.800677, Class: 0.892181, Obj: 0.701590, No Obj: 0.004574, Avg Recall: 1.000000,  count: 8
+  >
+  > **9002**: 0.211667, **0.060730 avg**, 0.001000 rate, 3.868000 seconds, 576128 images
+  > Loaded: 0.000000 seconds
+
+  * **9002** - iteration number (batch number)
+  * **0.060730 avg** - average loss (error) - **the lower, the better**
+
+  When you see that the average loss **0.060730 avg** is low enough over many iterations and no longer decreases, then you should stop training.
+
+2. Once training is stopped, you should take some of the last `.weights`-files from `darknet\build\darknet\x64\backup` and choose the best of them:
+
+For example, you stopped training after 9000 iterations, but the best result may be given by one of the previous weights (7000, 8000, 9000). This can happen due to overfitting. **Overfitting** is the case when you can detect objects on images from the training dataset, but can't detect objects on any other images. You should get the weights from the **Early Stopping Point**:
+
+![Overfitting](https://hsto.org/files/5dc/7ae/7fa/5dc7ae7fad9d4e3eb3a484c58bfc1ff5.png) 
+
+  2.1. First, you should put the filenames of your validation images into the file `data\voc.2007.test` (same format as in `train.txt`), or if you don't have validation images, simply copy `data\train.txt` to `data\voc.2007.test`.
+
+  2.2. If training is stopped after 9000 iterations, then to validate some of the previous weights use these commands:
+
+* `darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights`
+* `darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights`
+* `darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights`
+
+And compare the last output lines for each of the weights (7000, 8000, 9000):
+
+> 7586 7612 7689 RPs/Img: 68.23 **IOU: 77.86%** Recall:99.00%
+
+* **IOU** - the bigger, the better (indicates accuracy) - **better to use**
+* **Recall** - the bigger, the better (indicates accuracy)
+
+For example, if the biggest **IOU** is given by the weights `yolo-obj_8000.weights` - then **use these weights for detection**.
+
+
+![precision_recall_iou](https://hsto.org/files/ca8/866/d76/ca8866d76fb840228940dbf442a7f06a.jpg)
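+
+If several checkpoints have accumulated, a small batch script can run the recall check for each of them in turn (just a sketch - `check_recall.cmd` is a hypothetical helper, assuming the `yolo-obj_NNNN.weights` naming above and the `data\voc.2007.test` file from step 2.1):
+
+```cmd
+@echo off
+rem check_recall.cmd - hypothetical helper, not part of this repository
+rem Runs the recall check for several saved checkpoints one after another
+for %%W in (7000 8000 9000) do (
+  echo ===== backup\yolo-obj_%%W.weights =====
+  darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_%%W.weights
+)
+```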
+
 ### Custom object detection:
 
-Example of custom object detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_3000.weights`
+Example of custom object detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights`
 
 | ![Yolo_v2_training](https://hsto.org/files/d12/1e7/515/d121e7515f6a4eb694913f10de5f2b61.jpg) | ![Yolo_v2_training](https://hsto.org/files/727/c7e/5e9/727c7e5e99bf4d4aa34027bb6a5e4bab.jpg) |
 |---|---|
 
+## How to improve object detection:
+
+1. Before training:
+  * set flag `random=1` in your `.cfg`-file - it will increase precision by training Yolo for different resolutions: [link](https://github.com/AlexeyAB/darknet/blob/47409529d0eb935fa7bafbe2b3484431117269f5/cfg/yolo-voc.cfg#L244)
+
+2. After training - for detection:
+
+  * Increase the network resolution by setting in your `.cfg`-file (`height=608` and `width=608`) or (`height=832` and `width=832`) or (any value that is a multiple of 32) - this increases the precision and makes it possible to detect small objects: [link](https://github.com/AlexeyAB/darknet/blob/47409529d0eb935fa7bafbe2b3484431117269f5/cfg/yolo-voc.cfg#L4)
+  
+    * you do not need to train the network again, just use the `.weights`-file already trained for 416x416 resolution
+    * if the error `Out of memory` occurs, then in the `.cfg`-file you should increase `subdivisions` to 16, 32 or 64 (see the sketch below): [link](https://github.com/AlexeyAB/darknet/blob/47409529d0eb935fa7bafbe2b3484431117269f5/cfg/yolo-voc.cfg#L3)
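+
+As a quick reference, the relevant fragments of the `.cfg`-file might look like this after the changes described above (just a sketch with one possible choice of values - the other parameters in the file stay as they are):
+
+```
+[net]
+# higher input resolution for detection only - the weights trained at 416x416 still work
+width=608
+height=608
+# increase subdivisions if the "Out of memory" error occurs
+subdivisions=32
+
+[region]
+# multi-scale training for better precision
+random=1
+```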
+
 ## How to mark bounding boxes of objects and create annotation files:
 
 Here you can find a repository with GUI software for marking bounding boxes of objects and generating annotation files for Yolo v2: https://github.com/AlexeyAB/Yolo_mark

--
Gitblit v1.10.0