# Yolo-Windows v2
# "You Only Look Once: Unified, Real-Time Object Detection (version 2)"

Paper: https://arxiv.org/abs/1612.08242

3.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories, put here:
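
`C:\opencv_2.4.9\opencv\build\include` (example path for OpenCV 2.4.9 installed to `C:\opencv_2.4.9`, the same path used in the custom-project steps below; adjust it to your own install location)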

3.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories, put here:
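
`C:\opencv_2.4.9\opencv\build\x64\vc12\lib` (example path for OpenCV 2.4.9 with the vc12 libraries, matching the custom-project steps below; adjust it to your version and compiler)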

3.3 Open the file `\src\yolo.c` and change 3 lines to match your OpenCV version - `249` (for 2.4.9), `2413` (for 2.4.13), ... :

* `#pragma comment(lib, "opencv_core249.lib")`
* `#pragma comment(lib, "opencv_imgproc249.lib")`
* `#pragma comment(lib, "opencv_highgui249.lib")`
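
For example, for OpenCV 2.4.13 these three lines would become:

* `#pragma comment(lib, "opencv_core2413.lib")`
* `#pragma comment(lib, "opencv_imgproc2413.lib")`
* `#pragma comment(lib, "opencv_highgui2413.lib")`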

4. If you have OpenCV 3.x instead of 2.4.x, then you will need to change many places in the code yourself.

If you create your own Visual Studio project instead, then add to your created project:

- (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories, put here:

`C:\opencv_2.4.9\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(cudnn)\include`

- (right click on project) -> Build dependencies -> Build Customizations -> set the check on CUDA 8.0 (or whichever version you have) - for example as here: http://devblogs.nvidia.com/parallelforall/wp-content/uploads/2015/01/VS2013-R-5.jpg

- add to the project all `.c` & `.cu` files from `\src`

- (right click on project) -> properties -> Linker -> General -> Additional Library Directories, put here:

`C:\opencv_2.4.9\opencv\build\x64\vc12\lib;$(CUDA_PATH)lib\$(PlatformName);$(cudnn)\lib\x64;%(AdditionalLibraryDirectories)`

- (right click on project) -> properties -> Linker -> Input -> Additional dependencies, put here:

`..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)`

- (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, put here:

`OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;GPU;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)`

- open the file `\src\yolo.c` and change 3 lines to match your OpenCV version - `249` (for 2.4.9), `2413` (for 2.4.13), ... :

  * `#pragma comment(lib, "opencv_core249.lib")`
  * `#pragma comment(lib, "opencv_imgproc249.lib")`
  * `#pragma comment(lib, "opencv_highgui249.lib")`

- compile to .exe (x64 & Release) and put these .dll files next to the .exe:

  `pthreadVC2.dll, pthreadGC2.dll` - from `\3rdparty\dll\x64`

  `cusolver64_80.dll, curand64_80.dll, cudart64_80.dll, cublas64_80.dll` - 80 for CUDA 8.0 (or your version), from `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin`

## How to train (Pascal VOC Data):

1. Download pre-trained weights for the convolutional layers (76 MB): http://pjreddie.com/media/files/darknet19_448.conv.23 and put the file into the directory `build\darknet\x64`

2. Download the Pascal VOC Data and unpack it into the directory `build\darknet\x64\data\voc`: http://pjreddie.com/projects/pascal-voc-dataset-mirror/ - after this, the file `voc_label.py` and the `\VOCdevkit\` directory should be in `build\darknet\x64\data\voc`

3. Download and install Python for Windows: https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe

4. Run the command: `python build\darknet\x64\data\voc\voc_label.py` (to generate the files: 2007_test.txt, 2007_train.txt, 2007_val.txt, 2012_train.txt, 2012_val.txt)

5. Run the command: `type 2007_train.txt 2007_val.txt 2012_*.txt > train.txt`

6. Start training by using `train_voc.cmd`, or by using the command line: `darknet.exe detector train data/voc.data yolo-voc.cfg darknet19_448.conv.23`

If required, change the paths in the file `build\darknet\x64\data\voc.data` (see the sketch below).
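
A minimal sketch of what `voc.data` might contain (the field names follow the `obj.data` example later in this README; the exact paths and the `voc.names` filename are illustrative and depend on your layout):

```
classes= 20
train  = data/train.txt
valid  = data/2007_test.txt
names = data/voc.names
backup = backup/
```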

More information about training is available here: http://pjreddie.com/darknet/yolo/#train-voc

## How to train with multi-GPU:

1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data yolo-voc.cfg darknet19_448.conv.23`

2. Then stop, and using the partially-trained model `/backup/yolo-voc_1000.weights`, run training with multi-GPU (up to 4 GPUs): `darknet.exe detector train data/voc.data yolo-voc.cfg yolo-voc_1000.weights -gpus 0,1,2,3`

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

## How to train (to detect your custom objects):

1. Create file `yolo-obj.cfg` with the same content as `yolo-voc.cfg` (or copy `yolo-voc.cfg` to `yolo-obj.cfg`) and:

* change the line `classes=20` to your number of objects
* change the `filters` line in the last `[convolutional]` layer (just before the `[region]` layer) to `filters=(classes + 5)*5` (in general this depends on `num` and `coords`, i.e. it equals `(classes + coords + 1)*num`)

For example, for 2 objects your file `yolo-obj.cfg` should differ from `yolo-voc.cfg` in these lines:

```
[convolutional]
filters=35

[region]
classes=2
```
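
As a quick check of the formula (assuming the standard `coords=4` and `num=5` values from `yolo-voc.cfg`): for 2 objects, `filters = (2 + 4 + 1)*5 = 35`, which matches the snippet above.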

2. Create file `obj.names` in the directory `build\darknet\x64\data\`, with the object names - each on a new line (see the example below)
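
For instance, a possible `obj.names` for the 2-class air/bird example referenced at the end of this README (one class name per line; the line order defines the `<object-class>` indices 0 and 1):

```
air
bird
```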

3. Create file `obj.data` in the directory `build\darknet\x64\data\`, containing (where **classes = number of objects**):

```
classes= 2
train  = train.txt
valid  = test.txt
names = obj.names
backup = backup/
```

4. Put the image files (.jpg) of your objects in the directory `build\darknet\x64\data\obj\`

5. Create a `.txt` file for each `.jpg` image file - with the same name, but with the `.txt` extension - and put into it the object number and object coordinates on this image, one line per object: `<object-class> <x> <y> <width> <height>`

Where:
* `<object-class>` - integer number of the object, from `0` to `(classes-1)`
* `<x> <y> <width> <height>` - float values relative to the width and height of the image, each in the range 0.0 to 1.0
* attention: `<x> <y>` are the center of the rectangle (not the top-left corner)

For example, for `img1.jpg` you should create `img1.txt` containing:

```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```
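
The conversion from pixel coordinates to this format is straightforward. Below is a minimal Python sketch (not part of this repository; the helper name, the box coordinates, and the 1280x720 image size are hypothetical values, chosen so that the output matches the first line of the example above):

```python
def to_yolo_box(x_min, y_min, x_max, y_max, img_width, img_height):
    """Convert a pixel box to YOLO format: relative center x, center y, width, height."""
    x_center = (x_min + x_max) / 2.0 / img_width
    y_center = (y_min + y_max) / 2.0 / img_height
    width = (x_max - x_min) / img_width
    height = (y_max - y_min) / img_height
    return x_center, y_center, width, height

# Hypothetical box of class 1 in a 1280x720 image
x, y, w, h = to_yolo_box(779, 232, 1056, 338, 1280, 720)
print(f"1 {x:.6f} {y:.6f} {w:.6f} {h:.6f}")
# -> 1 0.716797 0.395833 0.216406 0.147222
```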

6. Create file `train.txt` in the directory `build\darknet\x64\data\`, with the filenames of your images, each filename on a new line, with paths relative to `darknet.exe`, for example:

```
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
```
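
If you have many images, a small Python sketch like the one below can generate `data\train.txt` for you (not part of the repository; it assumes you run it from `build\darknet\x64` and that the images sit in `data\obj\`):

```python
import glob

# Collect all .jpg files under data/obj/ and write darknet.exe-relative paths
image_paths = sorted(glob.glob("data/obj/*.jpg"))
with open("data/train.txt", "w") as f:
    for path in image_paths:
        f.write(path.replace("\\", "/") + "\n")
```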

7. Download pre-trained weights for the convolutional layers (76 MB): http://pjreddie.com/media/files/darknet19_448.conv.23 and put the file into the directory `build\darknet\x64`

8. Start training by using the command line: `darknet.exe detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23`

9. After training is complete, get the result `yolo-obj_final.weights` from the directory `build\darknet\x64\backup\`
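
If training is interrupted, it can be continued from the most recent weights file saved in `backup\`, following the same pattern as in the multi-GPU section above (the iteration number `2000` here is just an example): `darknet.exe detector train data/obj.data yolo-obj.cfg backup\yolo-obj_2000.weights`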

* You can also take a result earlier than the full 45000 iterations; usually about 2000 iterations per class (object) are sufficient. For example, for 6 classes you can stop training after 12000 iterations (to avoid overfitting) and use `yolo-obj_12000.weights` for detection.

### Custom object detection:

Example of custom object detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_3000.weights`
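
To run detection on a specific image, append its path to the same command (`img1.jpg` here is just the example image from the training steps above): `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_3000.weights data/obj/img1.jpg`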

## How to mark bounding boxes of objects and create annotation files:

Here you can find a repository with GUI software for marking bounding boxes of objects and generating annotation files for Yolo v2: https://github.com/AlexeyAB/Yolo_mark

It includes examples of: `train.txt`, `obj.names`, `obj.data`, `yolo-obj.cfg`, `air`1-6`.txt`, `bird`1-4`.txt` for 2 classes of objects (air, bird), and `train_obj.cmd` with an example of how to train this image set with Yolo v2.