# Yolo-v2 Windows and Linux version

[CircleCI](https://circleci.com/gh/AlexeyAB/darknet)

1. [How to use](#how-to-use)
2. [How to compile on Linux](#how-to-compile-on-linux)
3. [How to compile on Windows](#how-to-compile-on-windows)

* 194 MB VOC-model - WebCamera #0: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights -c 0`
* 186 MB Yolo9000 - image: `darknet.exe detector test cfg/combine9k.data yolo9000.cfg yolo9000.weights`
* 186 MB Yolo9000 - video: `darknet.exe detector demo cfg/combine9k.data yolo9000.cfg yolo9000.weights test.mp4`
* To process a list of images `image_list.txt` and save the detection results to `result.txt`, use:
`darknet.exe detector test data/voc.data yolo-voc.cfg yolo-voc.weights < image_list.txt > result.txt`
You can comment out this line so that you do not have to press ESC after each image: https://github.com/AlexeyAB/darknet/blob/6ccb41808caf753feea58ca9df79d6367dedc434/src/detector.c#L509
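A sketch of that batch workflow (the `data/images` directory and the use of `find` are assumptions; any method that yields one image path per line works):

```shell
# Collect all .jpg paths into image_list.txt, one per line
find data/images -name '*.jpg' > image_list.txt

# Feed the list to darknet on stdin and capture the detections
./darknet detector test data/voc.data yolo-voc.cfg yolo-voc.weights < image_list.txt > result.txt
```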

##### Using a network video-camera MJPEG stream with an Android smartphone:

## How to compile on Linux

Just do `make` in the darknet directory.
Before `make`, you can set the following options in the `Makefile`: [link](https://github.com/AlexeyAB/darknet/blob/9c1b9a2cf6363546c152251be578a21f3c3caec6/Makefile#L1)
* `GPU=1` to build with CUDA to accelerate by using a GPU (CUDA should be in `/usr/local/cuda`)
* `CUDNN=1` to build with cuDNN v5/v6 to accelerate training by using a GPU (cuDNN should be in `/usr/local/cudnn`)
* `OPENCV=1` to build with OpenCV 3.x/2.4.x - allows detection on video files and video streams from network cameras or web-cams
* `DEBUG=1` to build a debug version of Yolo
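As a sketch, the options can also be passed on the `make` command line in one shot, since command-line variables override the defaults at the top of the `Makefile` (the `-j` value is just an example):

```shell
# Build with GPU, cuDNN and OpenCV support enabled, using all cores
make GPU=1 CUDNN=1 OPENCV=1 -j"$(nproc)"
```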

4.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories: `C:\opencv_2.4.13\opencv\build\x64\vc14\lib`

5. If you have another version of OpenCV 2.4.x (not 3.x), you should also change lines such as `#pragma comment(lib, "opencv_core2413.lib")` in the file `\src\detector.c`

6. If you want to build with cuDNN to speed up training:

* download and install **cuDNN 6.0 for CUDA 8.0**: https://developer.nvidia.com/cudnn

Then copy the following DLLs next to `darknet.exe`:

* `cusolver64_80.dll`, `curand64_80.dll`, `cudart64_80.dll`, `cublas64_80.dll` - 80 for CUDA 8.0 (or your version) - from `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin`
* For OpenCV 3.x: `opencv_world320.dll` and `opencv_ffmpeg320_64.dll` from `C:\opencv_3.0\opencv\build\x64\vc14\bin`
* For OpenCV 2.4.13: `opencv_core2413.dll`, `opencv_highgui2413.dll` and `opencv_ffmpeg2413_64.dll` from `C:\opencv_2.4.13\opencv\build\x64\vc14\bin`

## How to train (Pascal VOC Data):

* change line batch to [`batch=64`](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.2.0.cfg#L2)
* change line subdivisions to [`subdivisions=8`](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.2.0.cfg#L3)
* change line `classes=20` to your number of objects
* change line [`filters=125`](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.2.0.cfg#L224) to `filters=(classes + 5)*5`, so if `classes=2` then it should be `filters=35`

(Generally `filters` depends on `classes`, `num` and `coords`, i.e. it is equal to `(classes + coords + 1)*num`)
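The arithmetic can be checked directly; with the YOLOv2 defaults `coords=4` and `num=5` the two formulas agree:

```shell
classes=2
coords=4
num=5
# (classes + coords + 1) * num == (classes + 5) * 5 when coords=4, num=5
echo $(( (classes + coords + 1) * num ))   # 35
```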

For example, for 2 objects, your file `yolo-obj.cfg` should differ from `yolo-voc.2.0.cfg` in these lines:

```
[convolutional]
filters=35

[region]
classes=2
```

How to calculate **mAP**: use [voc_eval.py](https://github.com/AlexeyAB/darknet/blob/master/scripts/voc_eval.py), or see this [datascience.stackexchange link](https://datascience.stackexchange.com/questions/16797/what-does-the-notation-map-5-95-mean)

### Custom object detection:

Example of custom object detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_8000.weights`