5. Run command: `type 2007_train.txt 2007_val.txt 2012_*.txt > train.txt`

6. Set `batch=64` and `subdivisions=8` in the file `yolo-voc.2.0.cfg` (see the snippet below this list): [link](https://github.com/AlexeyAB/darknet/blob/master/build/darknet/x64/yolo-voc.cfg#L3)

7. Start training by using `train_voc.cmd` or by using the command line: `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg darknet19_448.conv.23`

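A minimal sketch of what step 6 changes at the top of `yolo-voc.2.0.cfg` (only these two values change; the other `[net]` parameters stay as they are):

```
[net]
# process 64 images per iteration ...
batch=64
# ... split into 8 mini-batches so they fit into GPU memory
subdivisions=8
```
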
If required, change the paths in the file `build\darknet\x64\data\voc.data`
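
A sketch of what `voc.data` typically contains (the paths are assumptions - point them at your own train list, test list and `voc.names`):

```
classes= 20
train  = data/train.txt
valid  = data/2007_test.txt
names = data/voc.names
backup = backup/
```
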
## How to train with multi-GPU:

1. Train it first on 1 GPU for about 1000 iterations: `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg darknet19_448.conv.23`

2. Then stop, and using the partially-trained model `/backup/yolo-voc_1000.weights`, run training with multi-GPU (up to 4 GPUs): `darknet.exe detector train data/voc.data yolo-voc.2.0.cfg yolo-voc_1000.weights -gpus 0,1,2,3`

https://groups.google.com/d/msg/darknet/NbJqonJBTSY/Te5PfIpuCAAJ

* change line `classes=20` to your number of objects
* change line #237 from [`filters=125`](https://github.com/AlexeyAB/darknet/blob/master/cfg/yolo-voc.cfg#L237) to `filters=(classes + 5)*5` (in general this depends on `num` and `coords`, i.e. it equals `(classes + coords + 1)*num`)

For example, for 2 objects, your file `yolo-obj.cfg` should differ from `yolo-voc.2.0.cfg` in these lines:

```
[convolutional]
filters=35

[region]
classes=2
```

To get weights from the Early Stopping Point:

2.1. First, in your file `obj.data` you must specify the path to the validation dataset `valid = valid.txt` (the format of `valid.txt` is the same as for `train.txt`); if you have no validation images, just copy `data\train.txt` to `data\valid.txt`.
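
For example, a minimal `obj.data` with a validation list could look like this (a sketch - the class count and file names are placeholders for your own setup):

```
classes = 2
train = data/train.txt
valid = data/valid.txt
names = data/obj.names
backup = backup/
```
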
2.2. If training was stopped after 9000 iterations, validate some of the previous weights with the commands below and compare the last output line for each weights file:
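
A sketch of those validation runs, assuming the data file is `data/obj.data`, the cfg is `yolo-obj.cfg` and the snapshots are stored in `backup\` (adjust names and paths to your setup):

```
darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_7000.weights
darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights
darknet.exe detector recall data/obj.data yolo-obj.cfg backup\yolo-obj_9000.weights
```
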
* **IOU** - the bigger, the better (indicates accuracy) - **better to use**
* **Recall** - the bigger, the better (indicates accuracy) - but here Yolo actually calculates true positives, so it shouldn't be used

For example, **bigger IOU** gives weights `yolo-obj_8000.weights` - then **use these weights for detection**.
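
For example, to run detection with the chosen weights on a single image (the image path is a placeholder):

```
darknet.exe detector test data/obj.data yolo-obj.cfg backup\yolo-obj_8000.weights data/your_image.jpg
```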