# Yolo-v2 Windows and Linux version

1. [How to use](#how-to-use)
2. [How to compile on Linux](#how-to-compile-on-linux)
3. [How to compile on Windows](#how-to-compile-on-windows)
4. [How to train (Pascal VOC Data)](#how-to-train-pascal-voc-data)
5. [How to train (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)
6. [When should I stop training](#when-should-i-stop-training)
7. [How to improve object detection](#how-to-improve-object-detection)
8. [How to mark bounded boxes of objects and create annotation files](#how-to-mark-bounded-boxes-of-objects-and-create-annotation-files)
9. [How to use Yolo as DLL](#how-to-use-yolo-as-dll)

Yolo v2 paper: https://arxiv.org/abs/1612.08242

# "You Only Look Once: Unified, Real-Time Object Detection (version 2)"

A Yolo cross-platform Windows and Linux version (for object detection). Contributors: https://github.com/pjreddie/darknet/graphs/contributors

This repository is forked from the Linux version: https://github.com/pjreddie/darknet

More details: http://pjreddie.com/darknet/yolo/

This repository supports:

* both Windows and Linux
* both OpenCV 3.x and OpenCV 2.4.13
* both cuDNN 5 and cuDNN 6
* CUDA >= 7.5
* building an SO library on Linux and a DLL on Windows

##### Requires:
* **Linux GCC >= 4.9 or Windows MS Visual Studio 2015 (v140)**: https://go.microsoft.com/fwlink/?LinkId=532606&clcid=0x409 (or offline [ISO image](https://go.microsoft.com/fwlink/?LinkId=615448&clcid=0x409))
* **CUDA 8.0**: https://developer.nvidia.com/cuda-downloads
* **OpenCV 3.x**: https://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.2.0/opencv-3.2.0-vc14.exe/download
* **or OpenCV 2.4.13**: https://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.4.13/opencv-2.4.13.2-vc14.exe/download
  - OpenCV allows showing image or video detection in a window and storing the result to the file specified on the command line with `-out_filename res.avi`
  - To compile without OpenCV, remove the `OPENCV` define in: Visual Studio -> Project -> Properties -> C/C++ -> Preprocessor

##### Pre-trained models for different cfg-files can be downloaded from (smaller -> faster & lower quality):
* `yolo.cfg` (194 MB COCO-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo.weights
* `yolo-voc.cfg` (194 MB VOC-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
* `tiny-yolo.cfg` (60 MB COCO-model) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/tiny-yolo.weights
* `tiny-yolo-voc.cfg` (60 MB VOC-model) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/tiny-yolo-voc.weights
* `yolo9000.cfg` (186 MB Yolo9000-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo9000.weights

Put these files near the compiled `darknet.exe`.

##### Examples of usage in cmd-files from `build\darknet\x64\`:

* `darknet_voc.cmd` - initialization with the 194 MB VOC-model yolo-voc.weights & yolo-voc.cfg, then waits for you to enter the name of an image file
* `darknet_demo_voc.cmd` - initialization with the 194 MB VOC-model yolo-voc.weights & yolo-voc.cfg, then plays your video file, which you must rename to: test.mp4
* `darknet_demo_store.cmd` - initialization with the 194 MB VOC-model yolo-voc.weights & yolo-voc.cfg, then plays your video file, which you must rename to: test.mp4, and stores the result to: res.avi
* `darknet_net_cam_voc.cmd` - initialization with the 194 MB VOC-model, then plays video from a network video-camera mjpeg-stream (also from your phone)
* `darknet_web_cam_voc.cmd` - initialization with the 194 MB VOC-model, then plays video from Web-Camera #0
* `darknet_coco_9000.cmd` - initialization with the 186 MB Yolo9000 COCO-model, and shows detection on the image: dog.jpg
* `darknet_coco_9000_demo.cmd` - initialization with the 186 MB Yolo9000 COCO-model, and shows detection on the video (if it is present): street4k.mp4, and stores the result to: res.avi

##### How to use on the command line:

On Linux use `./darknet` instead of `darknet.exe`, like this: `./darknet detector test ./cfg/coco.data ./cfg/yolo.cfg ./yolo.weights`

* 194 MB COCO-model - image: `darknet.exe detector test data/coco.data yolo.cfg yolo.weights -i 0 -thresh 0.2`
* Alternative method, 194 MB COCO-model - image: `darknet.exe detect yolo.cfg yolo.weights -i 0 -thresh 0.2`
* 194 MB VOC-model - image: `darknet.exe detector test data/voc.data yolo-voc.cfg yolo-voc.weights -i 0`
* 194 MB COCO-model - video: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights test.mp4 -i 0`
* 194 MB VOC-model - video: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights test.mp4 -i 0`
* 194 MB COCO-model - **save result to the file res.avi**: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights test.mp4 -i 0 -out_filename res.avi`
* 194 MB VOC-model - **save result to the file res.avi**: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights test.mp4 -i 0 -out_filename res.avi`
* Alternative method, 194 MB VOC-model - video: `darknet.exe yolo demo yolo-voc.cfg yolo-voc.weights test.mp4 -i 0`
* 60 MB VOC-model for video: `darknet.exe detector demo data/voc.data tiny-yolo-voc.cfg tiny-yolo-voc.weights test.mp4 -i 0`
* 194 MB COCO-model for net-videocam - Smart WebCam: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 194 MB VOC-model for net-videocam - Smart WebCam: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 194 MB VOC-model - WebCamera #0: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights -c 0`
* 186 MB Yolo9000 - image: `darknet.exe detector test cfg/combine9k.data yolo9000.cfg yolo9000.weights`
* 186 MB Yolo9000 - video: `darknet.exe detector demo cfg/combine9k.data yolo9000.cfg yolo9000.weights test.mp4`

##### For using a network video-camera mjpeg-stream with any Android smartphone:

Replace the address below with the one shown in the phone application (Smart WebCam) and launch:

* 194 MB COCO-model: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 194 MB VOC-model: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`

### How to compile on Linux:

Just do `make` in the darknet directory.
Before `make`, you can set such options in the `Makefile`: [link](https://github.com/AlexeyAB/darknet/blob/9c1b9a2cf6363546c152251be578a21f3c3caec6/Makefile#L1)
* `GPU=1` to build with CUDA to accelerate by using the GPU (CUDA should be in `/usr/local/cuda`)
* `CUDNN=1` to build with cuDNN v5/v6 to accelerate training by using the GPU (cuDNN should be in `/usr/local/cudnn`)
* `OPENCV=1` to build with OpenCV 3.x/2.4.x - allows detecting on video files and video streams from network cameras or web-cams
* `DEBUG=1` to build a debug version of Yolo
* `OPENMP=1` to build with OpenMP support to accelerate Yolo by using a multi-core CPU
* `LIBSO=1` to build the library `darknet.so` and the runnable binary `uselib` that uses this library. To see how to use this SO-library from your own code, look at the C++ example: https://github.com/AlexeyAB/darknet/blob/master/src/yolo_console_dll.cpp

### How to compile on Windows:

1. If you have **MSVS 2015, CUDA 8.0 and OpenCV 3.0** (with paths: `C:\opencv_3.0\opencv\build\include` & `C:\opencv_3.0\opencv\build\x64\vc14\lib`), then start MSVS, open `build\darknet\darknet.sln`, set **x64** and **Release**, and do: Build -> Build darknet

1.1. Find the files `opencv_world320.dll` and `opencv_ffmpeg320_64.dll` in `C:\opencv_3.0\opencv\build\x64\vc14\bin` and put them near `darknet.exe`

2. If you have another version of **CUDA (not 8.0)**, then open `build\darknet\darknet.vcxproj` with Notepad, find the 2 places with "CUDA 8.0" and change them to your CUDA version, then do step 1

3. If you **don't have a GPU**, but have **MSVS 2015 and OpenCV 3.0** (with paths: `C:\opencv_3.0\opencv\build\include` & `C:\opencv_3.0\opencv\build\x64\vc14\lib`), then start MSVS, open `build\darknet\darknet_no_gpu.sln`, set **x64** and **Release**, and do: Build -> Build darknet

4. If you have **OpenCV 2.4.13** instead of 3.0, then you should change the paths after `\darknet.sln` is opened:

4.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories: `C:\opencv_2.4.13\opencv\build\include`

4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: `C:\opencv_2.4.13\opencv\build\x64\vc14\lib`

5. If you have another version of OpenCV 2.4.x (not 3.x), then you should also change lines like `#pragma comment(lib, "opencv_core2413.lib")` in the file `\src\detector.c` to your version

6. If you want to build with cuDNN to speed it up, then:

* download and install **cuDNN 6.0 for CUDA 8.0**: https://developer.nvidia.com/cudnn

* add the Windows system variable `cudnn` with the path to cuDNN: https://hsto.org/files/a49/3dc/fc4/a493dcfc4bd34a1295fd15e0e2e01f26.jpg

### How to compile (custom):

Also, you can create your own `darknet.sln` & `darknet.vcxproj`; this example is for CUDA 8.0 and OpenCV 3.0

Then add to your created project:
- (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories, put here:

`C:\opencv_3.0\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(cudnn)\include`
- (right click on project) -> Build dependencies -> Build Customizations -> set check on CUDA 8.0 or whatever version you have - for example as here: http://devblogs.nvidia.com/parallelforall/wp-content/uploads/2015/01/VS2013-R-5.jpg
- add to the project all .c & .cu files from `\src`
- (right click on project) -> properties -> Linker -> General -> Additional Library Directories, put here:

`C:\opencv_3.0\opencv\build\x64\vc14\lib;$(CUDA_PATH)lib\$(PlatformName);$(cudnn)\lib\x64;%(AdditionalLibraryDirectories)`
- (right click on project) -> properties -> Linker -> Input -> Additional dependencies, put here:

`..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)`
- (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, put here:

`OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;GPU;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)`
- open the file `\src\detector.c` and check the `#pragma` and `#include` lines for OpenCV
- compile to .exe (X64 & Release) and put the .dll-s near the .exe:

    * `pthreadVC2.dll, pthreadGC2.dll` from `\3rdparty\dll\x64`

    * `cusolver64_80.dll, curand64_80.dll, cudart64_80.dll, cublas64_80.dll` - 80 for CUDA 8.0 or your version, from `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin`

    * For OpenCV 3.0: `opencv_world320.dll` and `opencv_ffmpeg320_64.dll` from `C:\opencv_3.0\opencv\build\x64\vc14\bin`

    * For OpenCV 2.4.13: `opencv_core2413.dll`, `opencv_highgui2413.dll` and `opencv_ffmpeg2413_64.dll` from `C:\opencv_2.4.13\opencv\build\x64\vc14\bin`

## How to train (Pascal VOC Data):

5. Run command: `type 2007_train.txt 2007_val.txt 2012_*.txt > train.txt` (on Linux use `cat` instead of `type`)

6. Set `batch=64` and `subdivisions=8` in the file `yolo-voc.2.0.cfg`: [link](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.2.0.cfg#L2)

7. Start training by using `train_voc.cmd` or by using the command line: `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg darknet19_448.conv.23`

If required, change the paths in the file `build\darknet\x64\data\voc.data`

## How to train with multi-GPU:

1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg darknet19_448.conv.23`

2. Then stop and, using the partially-trained model `/backup/yolo-voc_1000.weights`, run training with multi-GPU (up to 4 GPUs): `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg yolo-voc_1000.weights -gpus 0,1,2,3`

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

## How to train (to detect your custom objects):

1. Create file `yolo-obj.cfg` with the same content as in `yolo-voc.2.0.cfg` (or copy `yolo-voc.2.0.cfg` to `yolo-obj.cfg`) and:

* change line batch to [`batch=64`](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.2.0.cfg#L2)
* change line subdivisions to [`subdivisions=8`](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.2.0.cfg#L3)
* change line `classes=20` to your number of objects
* change the line [`filters=125`](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.2.0.cfg#L224) to `filters=(classes + 5)*5` (generally this depends on `num` and `coords`, i.e. it is equal to `(classes + coords + 1)*num`)

For example, for 2 objects `filters=(2 + 5)*5 = 35`, so your file `yolo-obj.cfg` should differ from `yolo-voc.2.0.cfg` in such lines:

```
[convolutional]
filters=35

[region]
classes=2
```

Create the file `data/obj.data` containing (where `classes` = number of objects):

```
classes= 2
train = data/train.txt
valid = data/test.txt
names = data/obj.names
backup = backup/
```

8. Start training by using the command line: `darknet.exe detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23`

(the file `yolo-obj_xxx.weights` will be saved to `build\darknet\x64\backup\` every 100 iterations until 1000 iterations have been reached, and after that every 1000 iterations)

9. After training is complete, get the result `yolo-obj_final.weights` from the path `build\darknet\x64\backup\`

* After each 1000 iterations you can stop and later resume training from this point. For example, after 2000 iterations you can stop training, and later just copy `yolo-obj_2000.weights` from `build\darknet\x64\backup\` to `build\darknet\x64\` and resume training using: `darknet.exe detector train data/obj.data yolo-obj.cfg yolo-obj_2000.weights`

## When should I stop training:

* **9002** - iteration number (number of batch)
* **0.060730 avg** - average loss (error) - **the lower, the better**

When you see that the average loss **0.xxxxxx avg** no longer decreases over many iterations, you should stop training.

2. Once training is stopped, you should take some of the last `.weights` files from `darknet\build\darknet\x64\backup` and choose the best of them:

To get weights from the Early Stopping Point:

2.1. First, in your file `obj.data` you must specify the path to the validation dataset `valid = valid.txt` (the format of `valid.txt` is the same as in `train.txt`); if you don't have validation images, just copy `data\train.txt` to `data\valid.txt`.

2.2 If training was stopped after 9000 iterations, then to validate some of the previous weights use these commands and compare the last output line for each weights-file:

> 7586 7612 7689 RPs/Img: 68.23 **IOU: 77.86%** Recall:99.00%

* **IOU** - the bigger, the better (indicates accuracy) - **better to use**
* **Recall** - the bigger, the better (indicates accuracy) - but actually Yolo calculates true positives here, so it shouldn't be used

For example, if the **bigger IOU** is given by the weights `yolo-obj_8000.weights`, then **use these weights for detection**.

## How to improve object detection:

1. Before training:
* set flag `random=1` in your `.cfg`-file - it will increase precision by training Yolo at different resolutions: [link](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.2.0.cfg#L244)

* it is desirable that your training dataset includes images with objects at different scales, rotations and lightings, and from different sides

2. After training - for detection:

* Increase the network resolution by setting in your `.cfg`-file (`height=608` and `width=608`) or (`height=832` and `width=832`) or (any value that is a multiple of 32) - this increases precision and makes it possible to detect small objects: [link](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.2.0.cfg#L4)

* you do not need to train the network again, just use the `.weights`-file already trained for 416x416 resolution
* if the error `Out of memory` occurs, then in the `.cfg`-file you should increase `subdivisions=16`, 32 or 64: [link](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.2.0.cfg#L3)

## How to mark bounded boxes of objects and create annotation files:

## How to use Yolo as DLL:

* after launching your console application and entering the image file name, you will see info for each object:
  `<obj_id> <left_x> <top_y> <width> <height> <probability>`
* to use a simple OpenCV-GUI, you should uncomment the line `//#define OPENCV` in the `yolo_console_dll.cpp` file: [link](https://github.com/AlexeyAB/darknet/blob/a6cbaeecde40f91ddc3ea09aa26a03ab5bbf8ba8/src/yolo_console_dll.cpp#L5)
* you can see the source code of a simple example for detection on a video file: [link](https://github.com/AlexeyAB/darknet/blob/ab1c5f9e57b4175f29a6ef39e7e68987d3e98704/src/yolo_console_dll.cpp#L75)

`yolo_cpp_dll.dll`-API: [link](https://github.com/AlexeyAB/darknet/blob/master/src/yolo_v2_class.hpp#L42)
```
class Detector {
public:
    Detector(std::string cfg_filename, std::string weight_filename, int gpu_id = 0);
    ~Detector();

    std::vector<bbox_t> detect(std::string image_filename, float thresh = 0.2, bool use_mean = false);
    std::vector<bbox_t> detect(image_t img, float thresh = 0.2, bool use_mean = false);
    static image_t load_image(std::string image_filename);
    static void free_image(image_t m);

#ifdef OPENCV
    std::vector<bbox_t> detect(cv::Mat mat, float thresh = 0.2, bool use_mean = false);
#endif
};
```
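
Below is a minimal usage sketch of this API (not a file from the repository). It assumes `bbox_t` exposes `x`, `y`, `w`, `h`, `prob` and `obj_id` members, as declared in `src/yolo_v2_class.hpp`, and that your project links against `yolo_cpp_dll.dll` (Windows) or `darknet.so` (Linux):

```
#include <iostream>
#include <string>
#include <vector>

#include "yolo_v2_class.hpp"    // API of yolo_cpp_dll.dll / darknet.so

int main() {
    // load the network: cfg-file, weights-file, GPU id
    Detector detector("yolo-voc.2.0.cfg", "yolo-voc.weights", 0);

    // detect objects on an image file with probability threshold 0.2
    std::vector<bbox_t> result = detector.detect("dog.jpg", 0.2f);

    // print each detected box: class id, position, size and probability
    for (const bbox_t &b : result) {
        std::cout << "obj_id = " << b.obj_id
                  << "  x = " << b.x << "  y = " << b.y
                  << "  w = " << b.w << "  h = " << b.h
                  << "  prob = " << b.prob << std::endl;
    }
    return 0;
}
```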