# Magic: The Gathering Card Detector

MTG Card Detector is a real-time application that can identify Magic: The Gathering playing cards in an image or a video. It utilizes various computer vision techniques to process the input image, and uses [perceptual hashing](https://jenssegers.com/61/perceptual-image-hashes) to match each detected card against a database of MTG cards. Refer to [opencv_dnn.py](https://github.com/hj3yoo/mtg_card_detector/blob/master/opencv_dnn.py) for the detailed implementation.

**Demo:**

[Demo #1](https://www.youtube.com/watch?v=BZkRZDyhMRE "Demo #1")

You can run the demo using the following:

```
python3 opencv_dnn.py [-i path/to/input/file -o path/to/output/directory -hs (one of 16/32) -dsp -dbg -gph]
```

Initially, the project used a powerful neural network named ['You Only Look Once (YOLO)'](https://arxiv.org/pdf/1506.02640v5.pdf) to detect individual cards, but it has been removed as of Oct 12th, 2018 [(note)](https://github.com/hj3yoo/mtg_card_detector#oct-12th-2018) in favour of classical CV techniques.

**Demo:**

[Demo #2](https://www.youtube.com/watch?v=kFE_k-mWo2A "Demo #2")

You can still find the files used to train them:

- [tiny_yolo.cfg](https://github.com/hj3yoo/mtg_card_detector/blob/master/cfg/tiny_yolo.cfg)
- [tiny_yolo_final.weights](https://github.com/hj3yoo/mtg_card_detector/blob/master/weights/second_general/tiny_yolo_final.weights)
- [obj.data](https://github.com/hj3yoo/mtg_card_detector/blob/master/data/obj.data) and [obj.names](https://github.com/hj3yoo/mtg_card_detector/blob/master/data/obj.names)
- [fetch_data.py](https://github.com/hj3yoo/mtg_card_detector/blob/master/fetch_data.py): aggregates card images and the card database from [scryfall.com](https://scryfall.com/)
- [transform_data.py](https://github.com/hj3yoo/mtg_card_detector/blob/master/transform_data.py): generates training images from the aggregated card images and database
- [setup_train.py](https://github.com/hj3yoo/mtg_card_detector/blob/master/setup_train.py): creates the train.txt and test.txt needed to train YOLO on the generated dataset

---------------------------------------------------------------------

## Day ~0: Sep 6th, 2018

Uploading all the progress on model training from the last few days.

The first batch of model training is complete, for which I used ~40,000 generated images of MTG cards laid out in one of several pre-defined patterns.

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_training_set_example_1.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_training_set_example_2.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_training_set_example_3.jpg" width="360">

After 5000 training epochs, the model reached 88% validation accuracy on the generated test set.

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_1.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_2.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_3.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_4.jpg" width="360">

However, the model has some blind spots, notably:

- It fails to spot some obscured cards where only a fraction of the card is visible.
- It is fairly fragile against glare or lighting variations.
- It cannot detect skewed cards.

Examples of bad detections:

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_5.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_6.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/0_detection_result_7.jpg" width="360">

The second and third problems should be easy to solve by further augmenting the dataset with random lighting and image skew. I'll have to think more about the first problem, though.

## Sept 7th, 2018

Added several image augmentation techniques to apply to the training set: noise, dropout, light variation, and glare:

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_augmented_set_example_1.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_augmented_set_example_2.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_augmented_set_example_3.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_augmented_set_example_4.jpg" width="360">

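For a rough idea of what such a pipeline looks like, here is a minimal sketch using the [imgaug](https://github.com/aleju/imgaug) library - the operators and parameters are illustrative assumptions, not the exact ones used in transform_data.py:

```
# Hypothetical augmentation pipeline - operators and parameters are
# illustrative, not the exact ones used in transform_data.py.
import numpy as np
import imgaug.augmenters as iaa

seq = iaa.Sequential([
    iaa.AdditiveGaussianNoise(scale=(0, 0.05 * 255)),  # sensor-like noise
    iaa.Dropout(p=(0, 0.05)),                          # random pixel dropout
    iaa.Multiply((0.5, 1.5)),                          # global light variation
    iaa.GammaContrast((0.5, 2.0)),                     # glare-like contrast shift
], random_order=True)

# Apply to a batch of images (here: random placeholders).
images = np.random.randint(0, 255, (4, 416, 416, 3), dtype=np.uint8)
augmented = seq.augment_images(images)
```
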
Currently trying to generate enough images to start model training. Hopefully this helps.

Recompiled darknet with OpenCV and CUDNN installed, and recalculated the anchors.

-----------------------

I ran a quick training session with the tiny_yolo configuration on the new training data, and voilà! The model performs significantly better than the last iteration, even on some hard images with glare & skew! The first model couldn't detect anything in these new test images, so this is a huge improvement :)

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_detection_result_1.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_decision_result_2.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_decision_result_3.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_decision_result_4.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_decision_result_5.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_decision_result_6.jpg" width="360">

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/1_learning_curve.jpg" width="640">

The video demo can be found here: https://www.youtube.com/watch?v=kFE_k-mWo2A&feature=youtu.be

## Sept 10th, 2018

I've been training a new model with the full YOLOv3 configuration (the previous one used Tiny YOLOv3), and it's been taking a lot more resources:

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/2_learning_curve.jpg" width="640">

The author of darknet did mention that the full network takes significantly more training effort, so I'll just have to wait. At this rate, it should reach 50k epochs in about a week :/

## Sept 13th, 2018

The training for the full YOLOv3 model has turned sour - the loss saturated around 0.45 and didn't seem like it would improve in any reasonable amount of time.

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/3_learning_curve.jpg" width="640">

As expected, the performance of the model with 0.45 loss was fairly bad - not to mention that it's quite a bit slower, too. I've decided to continue with the tiny YOLOv3 weights. I tried to train them further, but they were already saturated; that was the best they could get.

---------------------

Bad news: I couldn't find any repo with a Python wrapper for darknet to pursue this project further. There is a [python example](https://github.com/AlexeyAB/darknet/blob/master/darknet.py) in the original repo of this fork, but [it doesn't support video input](https://github.com/AlexeyAB/darknet/issues/955). Other darknet repos are in the same situation.

I suppose there is a poor man's alternative - feed individual frames from the video into the image detection script. I'll have to give it a shot.

## Sept 14th, 2018

Thankfully, OpenCV has a DNN module, and it supports YOLO as well. They have done quite an amazing job, and the speed isn't too bad, either - I can get about 20-25fps with my tiny YOLO model, without using a GPU.

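For reference, loading and running a darknet model through OpenCV's DNN module looks roughly like this (a minimal sketch - the file paths and the 416x416 input size are assumptions; see opencv_dnn.py for the actual code):

```
# Minimal sketch of running a darknet model via cv2.dnn
# (paths and input size are assumptions).
import cv2

net = cv2.dnn.readNetFromDarknet('cfg/tiny_yolo.cfg',
                                 'weights/second_general/tiny_yolo_final.weights')
img = cv2.imread('test.jpg')

# YOLO expects a square, normalized input blob.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outs = net.forward(net.getUnconnectedOutLayersNames())  # one array per YOLO output layer
```
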
## Sept 15th, 2018

I tried an alternate approach - instead of having the model identify cards anonymously, train it on EVERY single card. As you may imagine, this isn't sustainable for the 10,000+ different cards that exist in MTG, but I thought it would be reasonable for classifying 10 different cards.

Result? Surprisingly effective.

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/4_detection_result_1.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/4_detection_result_2.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/4_detection_result_3.jpg" width="360"> <img src="https://github.com/hj3yoo/darknet/blob/master/figures/4_detection_result_4.png" width="360">

They're of course slightly worse than anonymous detection, and impractical for any large card base, but it was an interesting approach.

------------------

I've made a quick OpenCV algorithm to extract cards from the image, and it works decently well:

<img src="https://github.com/hj3yoo/darknet/blob/master/figures/4_detection_result_5.jpg" width="360">

At the moment, it's fairly limited - the entire card must be visible without obstruction or cropping, otherwise it won't be detected at all.

Unfortunately, there is very little use for my trained network in this algorithm - it just uses contour detection and perceptual hashing to match the card.

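The gist of the extraction is standard OpenCV: find large four-point contours and warp each one to a flat rectangle. A minimal sketch of the idea (the threshold method, area cutoff, and output size are assumptions, and corner ordering is glossed over):

```
# Sketch of contour-based card extraction (thresholds and sizes are
# illustrative; corner ordering is omitted for brevity).
import cv2
import numpy as np

img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

cards = []
for cnt in contours:
    peri = cv2.arcLength(cnt, True)
    approx = cv2.approxPolyDP(cnt, 0.02 * peri, True)
    # A card candidate is a large quadrilateral.
    if len(approx) == 4 and cv2.contourArea(approx) > 5000:
        src = approx.reshape(4, 2).astype(np.float32)
        dst = np.float32([[0, 0], [0, 445], [312, 445], [312, 0]])  # ~MTG card aspect ratio
        mat = cv2.getPerspectiveTransform(src, dst)
        cards.append(cv2.warpPerspective(img, mat, (312, 445)))
```
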
## Sept 16th, 2018

I've tweaked the OpenCV algorithm from yesterday and recorded a demo:

https://www.youtube.com/watch?v=BZkRZDyhMRE&feature=youtu.be

## Oct 4th, 2018

With the current model, there seems to be little hope - I simply don't have enough knowledge of classical CV techniques to separate overlapping cards. Even if I could, perceptual hashing would be harder to use if only a fraction of the card image were available for classification.

An alternative is to venture into instance segmentation with [mask R-CNN](https://arxiv.org/pdf/1703.06870.pdf), at the cost of losing real-time processing speed (and considerably more development time). Maybe worth a shot, although I'd have to start nearly from scratch (other than the training data generation).

## Oct 10th, 2018

I've been fiddling with mask R-CNN using [this repo](https://github.com/matterport/Mask_RCNN)'s implementation, and trained it on a set of 60 manually labelled images. The result is not too bad considering how small the dataset was. However, the false-positive rate was high overall (again, probably because of the small dataset and the simplistic features of cards).

<img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/5_rcnn_result_1.png" width="360"><img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/5_rcnn_result_2.png" width="360"><img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/5_rcnn_result_3.png" width="360"><img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/5_rcnn_result_4.png" width="360"><img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/5_rcnn_result_5.png" width="360">

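For reference, training with that implementation mostly comes down to subclassing its `Config` and calling `model.train()`. A rough sketch under assumed settings (the class name and step counts here are hypothetical):

```
# Hypothetical Mask R-CNN training setup for the matterport implementation.
from mrcnn.config import Config
from mrcnn import model as modellib

class CardConfig(Config):
    NAME = 'mtg_card'
    NUM_CLASSES = 1 + 1    # background + card
    IMAGES_PER_GPU = 1
    STEPS_PER_EPOCH = 60   # one pass over the 60 labelled images

config = CardConfig()
model = modellib.MaskRCNN(mode='training', config=config, model_dir='./logs')
# model.train(dataset_train, dataset_val, learning_rate=config.LEARNING_RATE,
#             epochs=30, layers='heads')
```
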
Although it may be worth generating a large training dataset and training the model more thoroughly, I'm short on time, as there are other priorities. I may revisit this later. I will be cleaning up this repo in the next few days, wrapping it up for now.

## Oct 12th, 2018

I've been able to significantly cut down the processing time of the current implementation. For n cards detected in the video, the latency has decreased from (65+50n)ms to (7+16n)ms. There were two major bottlenecks slowing the program down:

--------------------------

To identify a card from a snippet of the card image, I'm using perceptual hashing. When a card is detected by YOLO, I compute the pHash value of its image and compare it against the pHash of every card in the database to find the match. This process is O(n * m), where n is the number of cards detected in the image and m is the number of cards in the database. With more than 10,000 different cards printed in MTG's history, this computation was the first bottleneck. Of the 50ms increment per detected card mentioned above, the majority was spent subtracting two 1024-bit hashes 10,000+ times - that's more than 10^7 bit comparisons per detected card, every frame!

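In code, the naive matching loop looks something like this (a sketch - imagehash is the library in use, but the dataframe layout and column name are assumptions):

```
# Naive O(n * m) pHash matching (dataframe layout is an assumption).
import imagehash

def match_card(card_img, card_pool):
    """card_img: a PIL.Image of one detected card.
    card_pool: pandas DataFrame with a precomputed 'card_hash' column."""
    query = imagehash.phash(card_img, hash_size=32)  # 32x32 DCT -> 1024-bit hash
    # ImageHash subtraction is the Hamming distance between two hashes.
    distances = card_pool['card_hash'].apply(lambda h: h - query)
    return card_pool.loc[distances.idxmin()]
```
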
First, there was some overhead coming from the library's implementation. The following is the elapsed time for subtracting the pHash of all 10,000 elements in the pandas database:

| hash_size | elapsed_time (ms) |
|---|---|
| 8 | 23.01 |
| 16 | 25.72 |
| 32 | 33.38 |
| 64 | 65.98 |

If you plot elapsed_time against (hash_size)^2, you get an almost linear graph with a huge constant y-intercept:
<img src="https://github.com/hj3yoo/mtg_card_detector/blob/master/figures/6_time_plot_1.png">

Where is this fat constant of 22.4ms coming from? Well, we'd better look at how the [Imagehash library](https://github.com/JohannesBuchner/imagehash/blob/master/imagehash/__init__.py#L67-L72) deals with subtraction:
```
def __sub__(self, other):
    if other is None:
        raise TypeError('Other hash must not be None.')
    if self.hash.size != other.hash.size:
        raise TypeError('ImageHashes must be of the same shape.', self.hash.shape, other.hash.shape)
    return numpy.count_nonzero(self.hash.flatten() != other.hash.flatten())
```
The code flattens both hashes on every comparison. You might think, "how slow can that be?" A fair amount, apparently:
```
import time
import numpy as np

a = np.ones([1, 1024], dtype=bool)
start = time.time()
for _ in range(10000):
    # Replicate the checks and the two flatten() calls from ImageHash.__sub__:
    if a is None:
        raise TypeError('Other hash must not be None.')
    if a.size != a.size:
        raise TypeError('ImageHashes must be of the same shape.', a.shape, a.shape)
    a.flatten()
    a.flatten()
end = time.time()
elapsed = (end - start) * 1000
print('%f' % elapsed)
```
On average, that code snippet takes 11.65ms to execute - slightly over half of the 22.4ms constant delay. That's a lot of time that can be cut out.

By pre-emptively flattening the hashes and inlining the hash subtraction code (yes, I know it's not good OOP design, but the payoff is too big to pass up), that constant time can be cut down significantly:

| hash_size | elapsed_time (ms) |
|---|---|
| 8 | 9.9 |
| 16 | 11.54 |
| 32 | 18.55 |
| 64 | 45.79 |

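Concretely, the flatten-once trick amounts to something like this (a sketch - the column layout is an assumption): with the database hashes pre-stacked into one array, numpy computes all 10,000 Hamming distances in a single vectorized pass.

```
# Sketch of the flatten-once optimization (column layout is an assumption).
import numpy as np

def build_hash_matrix(card_pool):
    """Pre-flatten every database hash into one (num_cards, hash_size**2) bool array."""
    return np.stack([h.hash.flatten() for h in card_pool['card_hash']])

def match_card_fast(query_hash, hash_matrix):
    """Return the row index of the closest database hash."""
    distances = np.count_nonzero(hash_matrix != query_hash.hash.flatten(), axis=1)
    return int(np.argmin(distances))
```
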
Furthermore, it turns out that a hash size of 16 is sufficient to distinguish the cards in most cases. Halving the hash size knocked off a further 7-9ms, as you only need to compare about a quarter of the bits compared to a hash size of 32.

------------------

The other bottleneck is somewhat unfortunate. It turns out that feeding the image through the YOLO network consumes a constant 50-60ms per frame. Remember the (65+50n)ms processing time above? Yeah, that's where the 65ms comes from.

As hilarious and ironic as it is, I would have to remove the network entirely to speed up the program...

**(((Facepalm into another dimension)))**

The program still works by replacing the neural net with contour detection.

## Oct 13th, 2018

Cleaning up everything to wrap up this project for now. If I can figure out how to separate overlapping cards from their bounding boxes [(notes)](https://github.com/hj3yoo/mtg_card_detector#oct-4th-2018), I may come back to upgrade the project in the future. If you have any suggestions regarding this issue, please don't hesitate to let me know.

Thank you for reading all the way up to here. I hope this project has helped you in some way.

---------------------------------------------------------------------

# Yolo-Windows v2: "You Only Look Once: Unified, Real-Time Object Detection"

A Windows version of YOLO (for object detection). Paper: https://arxiv.org/abs/1612.08242

1. [How to use](#how-to-use)
2. [How to compile](#how-to-compile)
3. [How to train (Pascal VOC Data)](#how-to-train-pascal-voc-data)
4. [How to train (to detect your custom objects)](#how-to-train-to-detect-your-custom-objects)
5. [How to mark bounded boxes of objects and create annotation files](#how-to-mark-bounded-boxes-of-objects-and-create-annotation-files)

This repository is forked from the Linux version: https://github.com/pjreddie/darknet

Contributors: https://github.com/pjreddie/darknet/graphs/contributors

More details: http://pjreddie.com/darknet/yolo/

##### Requires:
* **MS Visual Studio 2015 (v140)**: https://www.microsoft.com/download/details.aspx?id=48146
* **CUDA 8.0 for Windows x64**: https://developer.nvidia.com/cuda-downloads
* **OpenCV 2.4.9**: https://sourceforge.net/projects/opencvlibrary/files/opencv-win/2.4.9/opencv-2.4.9.exe/download
  - To compile without OpenCV - remove the OPENCV define from: Visual Studio -> Project -> Properties -> C/C++ -> Preprocessor
  - To compile with a different OpenCV version - change each string in the file yolo.c that looks like **#pragma comment(lib, "opencv_core249.lib")** from 249 to the required version.
  - With OpenCV, it will show image or video detection in a window and store the result to: test_dnn_out.avi

##### Pre-trained models for different cfg-files can be downloaded from (smaller -> faster & lower quality):
* `yolo.cfg` (256 MB COCO-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo.weights
* `yolo-voc.cfg` (256 MB VOC-model) - requires 4 GB GPU-RAM: http://pjreddie.com/media/files/yolo-voc.weights
* `tiny-yolo.cfg` (60 MB COCO-model) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/tiny-yolo.weights
* `tiny-yolo-voc.cfg` (60 MB VOC-model) - requires 1 GB GPU-RAM: http://pjreddie.com/media/files/tiny-yolo-voc.weights

Put them near the compiled darknet.exe

You can find the cfg-files at: `darknet/cfg/`

##### Examples of results:

[Everything Is AWESOME](https://www.youtube.com/watch?v=VOC3huqHrss "Everything Is AWESOME")

Others: https://www.youtube.com/channel/UC7ev3hNVkx4DzZ3LO19oebg

### How to use:

##### Example of usage in cmd-files from `build\darknet\x64\`:

* `darknet_voc.cmd` - initializes with the 256 MB VOC-model yolo-voc.weights & yolo-voc.cfg and waits for you to enter the name of an image file
* `darknet_demo_voc.cmd` - initializes with the 256 MB VOC-model yolo-voc.weights & yolo-voc.cfg, plays your video file (which you must rename to test.mp4), and stores the result to: test_dnn_out.avi
* `darknet_net_cam_voc.cmd` - initializes with the 256 MB VOC-model, plays video from a network video-camera mjpeg-stream (also from your phone), and stores the result to: test_dnn_out.avi
* `darknet_web_cam_voc.cmd` - initializes with the 256 MB VOC-model, plays video from Web-Camera #0, and stores the result to: test_dnn_out.avi

##### How to use on the command line:
* 256 MB COCO-model - image: `darknet.exe detector test data/coco.data yolo.cfg yolo.weights -i 0 -thresh 0.2`
* Alternative method, 256 MB COCO-model - image: `darknet.exe detect yolo.cfg yolo.weights -i 0 -thresh 0.2`
* 256 MB VOC-model - image: `darknet.exe detector test data/voc.data yolo-voc.cfg yolo-voc.weights -i 0`
* 256 MB COCO-model - video: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights test.mp4 -i 0`
* 256 MB VOC-model - video: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights test.mp4 -i 0`
* Alternative method, 256 MB VOC-model - video: `darknet.exe yolo demo yolo-voc.cfg yolo-voc.weights test.mp4 -i 0`
* 60 MB VOC-model for video: `darknet.exe detector demo data/voc.data tiny-yolo-voc.cfg tiny-yolo-voc.weights test.mp4 -i 0`
* 256 MB COCO-model for net-videocam - Smart WebCam: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 256 MB VOC-model for net-videocam - Smart WebCam: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 256 MB VOC-model - WebCamera #0: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights -c 0`

##### For using a network video-camera mjpeg-stream with any Android smartphone:

1. Download an mjpeg-stream app for the Android phone: IP Webcam / Smart WebCam

    * Smart WebCam - preferably: https://play.google.com/store/apps/details?id=com.acontech.android.SmartWebCam2
    * IP Webcam: https://play.google.com/store/apps/details?id=com.pas.webcam

2. Connect your Android phone to the computer by WiFi (through a WiFi-router) or USB
3. Start Smart WebCam on your phone
4. Replace the address below with the one shown in the phone application (Smart WebCam) and launch:

* 256 MB COCO-model: `darknet.exe detector demo data/coco.data yolo.cfg yolo.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`
* 256 MB VOC-model: `darknet.exe detector demo data/voc.data yolo-voc.cfg yolo-voc.weights http://192.168.0.80:8080/video?dummy=param.mjpg -i 0`

### How to compile:

1. If you have MSVS 2015, CUDA 8.0 and OpenCV 2.4.9 (with paths: `C:\opencv_2.4.9\opencv\build\include` & `C:\opencv_2.4.9\opencv\build\x64\vc12\lib` or `vc14\lib`), then start MSVS, open `build\darknet\darknet.sln`, set **x64** and **Release**, and do: Build -> Build darknet

2. If you have another version of CUDA (not 8.0), then open `build\darknet\darknet.vcxproj` in Notepad, find the 2 places with "CUDA 8.0", change them to your CUDA version, then do step 1

3. If you have another version of OpenCV 2.4.x (not 2.4.9), then you should change the paths after `\darknet.sln` is opened:

    3.1 (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories

    3.2 (right click on project) -> properties -> Linker -> General -> Additional Library Directories

    3.3 Open the file `\src\yolo.c` and change 3 lines to your OpenCV version - `249` (for 2.4.9), `2413` (for 2.4.13), etc.:

    * `#pragma comment(lib, "opencv_core249.lib")`
    * `#pragma comment(lib, "opencv_imgproc249.lib")`
    * `#pragma comment(lib, "opencv_highgui249.lib")`

4. If you have OpenCV 3.x (not 2.4.x), then you will have to change many places in the code yourself.

5. If you want to build with CUDNN to speed things up:

    * download and install CUDNN: https://developer.nvidia.com/cudnn

    * add a Windows system variable `cudnn` with the path to CUDNN: https://hsto.org/files/a49/3dc/fc4/a493dcfc4bd34a1295fd15e0e2e01f26.jpg

    * open `\darknet.sln` -> (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, and add at the beginning of the line: `CUDNN;`

### How to compile (custom):

Also, you can create your own `darknet.sln` & `darknet.vcxproj`; this example is for CUDA 8.0 and OpenCV 2.4.9.

Then add to your created project:
- (right click on project) -> properties -> C/C++ -> General -> Additional Include Directories, put here:

  `C:\opencv_2.4.9\opencv\build\include;..\..\3rdparty\include;%(AdditionalIncludeDirectories);$(CudaToolkitIncludeDir);$(cudnn)\include`
- (right click on project) -> Build dependencies -> Build Customizations -> check CUDA 8.0 (or whatever version you have) - for example as here: http://devblogs.nvidia.com/parallelforall/wp-content/uploads/2015/01/VS2013-R-5.jpg
- add to the project all .c & .cu files from `\src`
- (right click on project) -> properties -> Linker -> General -> Additional Library Directories, put here:

  `C:\opencv_2.4.9\opencv\build\x64\vc12\lib;$(CUDA_PATH)lib\$(PlatformName);$(cudnn)\lib\x64;%(AdditionalLibraryDirectories)`
- (right click on project) -> properties -> Linker -> Input -> Additional dependencies, put here:

  `..\..\3rdparty\lib\x64\pthreadVC2.lib;cublas.lib;curand.lib;cudart.lib;cudnn.lib;%(AdditionalDependencies)`
- (right click on project) -> properties -> C/C++ -> Preprocessor -> Preprocessor Definitions, put here:

  `OPENCV;_TIMESPEC_DEFINED;_CRT_SECURE_NO_WARNINGS;GPU;WIN32;NDEBUG;_CONSOLE;_LIB;%(PreprocessorDefinitions)`
- open the file `\src\yolo.c` and change 3 lines to your OpenCV version - `249` (for 2.4.9), `2413` (for 2.4.13), etc.:

  * `#pragma comment(lib, "opencv_core249.lib")`
  * `#pragma comment(lib, "opencv_imgproc249.lib")`
  * `#pragma comment(lib, "opencv_highgui249.lib")`
- compile to .exe (x64 & Release) and put the .dll-s near the .exe:

  `pthreadVC2.dll, pthreadGC2.dll` from \3rdparty\dll\x64

  `cusolver64_80.dll, curand64_80.dll, cudart64_80.dll, cublas64_80.dll` - 80 for CUDA 8.0 or your version, from C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v8.0\bin

## How to train (Pascal VOC Data):

1. Download pre-trained weights for the convolutional layers (76 MB): http://pjreddie.com/media/files/darknet19_448.conv.23 and put them in the directory `build\darknet\x64`

2. Download the Pascal VOC Data and unpack it to the directory `build\darknet\x64\data\voc`; the dir `build\darknet\x64\data\voc\VOCdevkit\` will be created:
    * http://pjreddie.com/media/files/VOCtrainval_11-May-2012.tar
    * http://pjreddie.com/media/files/VOCtrainval_06-Nov-2007.tar
    * http://pjreddie.com/media/files/VOCtest_06-Nov-2007.tar

    2.1 Download the file `voc_label.py` to the dir `build\darknet\x64\data\voc`: http://pjreddie.com/media/files/voc_label.py

3. Download and install Python for Windows: https://www.python.org/ftp/python/3.5.2/python-3.5.2-amd64.exe

4. Run the command: `python build\darknet\x64\data\voc\voc_label.py` (to generate the files: 2007_test.txt, 2007_train.txt, 2007_val.txt, 2012_train.txt, 2012_val.txt)

5. Run the command: `type 2007_train.txt 2007_val.txt 2012_*.txt > train.txt`

6. Start training by using `train_voc.cmd` or by using the command line: `darknet.exe detector train data/voc.data yolo-voc.cfg darknet19_448.conv.23`

If required, change the paths in the file `build\darknet\x64\data\voc.data`

More information about training at this link: http://pjreddie.com/darknet/yolo/#train-voc

## How to train with multi-GPU:

1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data yolo-voc.cfg darknet19_448.conv.23`

2. Then stop, and using the partially-trained model `/backup/yolo-voc_1000.weights`, run training with multi-GPU (up to 4 GPUs): `darknet.exe detector train data/voc.data yolo-voc.cfg yolo-voc_1000.weights -gpus 0,1,2,3`

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

## How to train (to detect your custom objects):

1. Create file `yolo-obj.cfg` with the same content as `yolo-voc.cfg` (or copy `yolo-voc.cfg` to `yolo-obj.cfg`) and:

* change line `classes=20` to your number of objects
* change line #224 from [`filters=125`](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.cfg#L224) to `filters=(classes + 5)*5` (in general this depends on `num` and `coords`, i.e. it equals `(classes + coords + 1)*num`)

For example, for 2 objects, your file `yolo-obj.cfg` should differ from `yolo-voc.cfg` in these lines:

```
[convolutional]
filters=35

[region]
classes=2
```

2. Create file `obj.names` in the directory `build\darknet\x64\data\`, with object names - each on a new line

3. Create file `obj.data` in the directory `build\darknet\x64\data\`, containing (where **classes = number of objects**):

```
classes= 2
train = train.txt
valid = test.txt
names = obj.names
backup = backup/
```

4. Put the image files (.jpg) of your objects in the directory `build\darknet\x64\data\obj\`

5. Create a `.txt` file for each `.jpg` image file - in the same directory and with the same name, but with the `.txt` extension - and put in it the object number and object coordinates on this image, one line per object: `<object-class> <x> <y> <width> <height>`

Where:
* `<object-class>` - integer number of the object, from `0` to `(classes-1)`
* `<x> <y> <width> <height>` - float values relative to the width and height of the image; each may range from 0.0 to 1.0
* for example: `<x> = <absolute_x> / <image_width>` or `<height> = <absolute_height> / <image_height>`
* attention: `<x> <y>` are the center of the rectangle (not the top-left corner)

For example, for `img1.jpg` you should create `img1.txt` containing:

```
1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
```

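The conversion from absolute pixel boxes to this format is a one-liner per field. A quick sketch (a hypothetical helper, not part of darknet):

```
# Hypothetical helper: absolute (left, top, right, bottom) box in pixels
# -> YOLO's normalized (x_center, y_center, width, height).
def to_yolo(left, top, right, bottom, image_width, image_height):
    x = (left + right) / 2.0 / image_width
    y = (top + bottom) / 2.0 / image_height
    w = (right - left) / float(image_width)
    h = (bottom - top) / float(image_height)
    return x, y, w, h

# e.g. a 200x150 px box with its top-left corner at (540, 320) in a 1280x720 image:
x, y, w, h = to_yolo(540, 320, 740, 470, 1280, 720)
print('1 %f %f %f %f' % (x, y, w, h))
```
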
6. Create file `train.txt` in the directory `build\darknet\x64\data\`, with the filenames of your images - each on a new line, with the path relative to `darknet.exe` - for example:

```
data/obj/img1.jpg
data/obj/img2.jpg
data/obj/img3.jpg
```

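Generating this list (plus a held-out test.txt) is easy to script - in this repo, that's what setup_train.py does. A minimal sketch, assuming a 90/10 split (the split ratio and paths are assumptions):

```
# Minimal sketch of building train.txt/test.txt from a directory of images
# (the 90/10 split and the paths are assumptions; see setup_train.py for the real one).
import glob
import random

images = sorted(glob.glob('data/obj/*.jpg'))
random.shuffle(images)
split = int(0.9 * len(images))

with open('data/train.txt', 'w') as f:
    f.write('\n'.join(images[:split]))
with open('data/test.txt', 'w') as f:
    f.write('\n'.join(images[split:]))
```
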
7. Download pre-trained weights for the convolutional layers (76 MB): http://pjreddie.com/media/files/darknet19_448.conv.23 and put them in the directory `build\darknet\x64`

8. Start training by using the command line: `darknet.exe detector train data/obj.data yolo-obj.cfg darknet19_448.conv.23`

9. After training is complete - get the result `yolo-obj_final.weights` from the path `build\darknet\x64\backup\`

* After every 1000 iterations you can stop and later resume training from that point. For example, after 2000 iterations you can stop training, and later just copy `yolo-obj_2000.weights` from `build\darknet\x64\backup\` to `build\darknet\x64\` and resume training using: `darknet.exe detector train data/obj.data yolo-obj.cfg yolo-obj_2000.weights`

* Also, you can get a usable result before all 45,000 iterations complete - usually ~2000 iterations per class (object) is sufficient. I.e., for 6 classes, to avoid overfitting, you can stop training after 12,000 iterations and use `yolo-obj_12000.weights` for detection.

### Custom object detection:

Example of custom object detection: `darknet.exe detector test data/obj.data yolo-obj.cfg yolo-obj_3000.weights`

## How to mark bounded boxes of objects and create annotation files:

Here you can find a repository with GUI software for marking bounded boxes of objects and generating annotation files for Yolo v2: https://github.com/AlexeyAB/Yolo_mark

It includes examples of: `train.txt`, `obj.names`, `obj.data`, `yolo-obj.cfg`, `air`1-6`.txt`, `bird`1-4`.txt` for 2 classes of objects (air, bird), and `train_obj.cmd` with an example of how to train this image set with Yolo v2.