I heard that YOLOv5 had been released, so I tried it out. https://github.com/ultralytics/yolov5/
I picked up images for object detection from the following site: https://public.roboflow.ai/object-detection/ There are various datasets, but since we are in the middle of the corona pandemic, I chose the dataset of people with and without masks.
The goal is to detect people with and without masks, as shown in the figure.
First download the data. Access the following URL. https://public.roboflow.ai/object-detection/
Click **Mask Wearing Dataset**.
Click **416x416-black-padding**.
Click **Download** in the upper right corner, select **YOLOv5 PyTorch**, and click **Continue** to download.
Please download the pretrained model set from the following. https://drive.google.com/drive/folders/1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J
For details on the COCO dataset, see: COCO dataset
Please download **YOLOv5.ipynb** from the GitHub repository below. https://github.com/yuomori0127/YOLOv5_mask_wearing_data
If you would like to see the code on Google Colab, click here (https://colab.research.google.com/drive/1TvyOG9sf-yx86SzBmYO8f1Y6wxVZEKDN?authuser=2#scrollTo=iCLXFYlQbPTM/)
I used **Google Colab** as the environment; the server fee is free. See the following article for how to use it: Summary of how to use Google Colab
Put the following three items in any folder of **Google Drive**:
・The data downloaded in 2-1
・The pretrained model downloaded in 2-2
・**YOLOv5.ipynb** downloaded in 2-3
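For reference, the layout I'm assuming looks roughly like this (the `YOLOv5` folder name matches the `os.chdir` path, and `data/` and `yolov5_models/` match the paths used in the commands later; adjust if you named things differently):

```text
My Drive/
└── YOLOv5/
    ├── YOLOv5.ipynb        # notebook from 2-3
    ├── data/               # dataset from 2-1 (data.yaml, train/, valid/, ...)
    └── yolov5_models/      # pretrained model from 2-2 (yolov5x.pt)
```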
Open **YOLOv5.ipynb** from **Google Drive** in **Google Colab**.
First of all, you need to "re-roll" for a GPU.
Run `!nvidia-smi`
at the top (Shift + Enter) and keep re-rolling the runtime until you are assigned a Tesla P100.
It usually works out within 5 tries.
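If you want to script the re-roll check instead of eyeballing the output, here is a minimal sketch. The `gpu_model` and `should_reroll` helpers are my own (not part of YOLOv5 or Colab), and the regex only knows a few common Colab GPU names:

```python
import re

# Hypothetical helper: pull the GPU model name out of `nvidia-smi` text.
# The pattern covers a few GPUs commonly handed out by Colab.
def gpu_model(nvidia_smi_text):
    """Return the first GPU model name found, or None if no GPU line matches."""
    m = re.search(r"Tesla \w+|T4|P4|V100|A100", nvidia_smi_text)
    return m.group(0) if m else None

def should_reroll(nvidia_smi_text, wanted="Tesla P100"):
    """True if the assigned GPU is not the one we want, so reset and retry."""
    return gpu_model(nvidia_smi_text) != wanted
```

In a Colab cell you could feed it the command output, e.g. `out = !nvidia-smi` followed by `should_reroll("\n".join(out))`.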
You can reset the runtime from the menu.
If you run **Runtime → Run all**, everything will execute, but if it fails,
**it's probably because the folder names or hierarchy don't match.**
The code is short and shouldn't be too hard to follow, so please adjust the paths to match. Sorry about that.
Since the code is short, I'll explain it cell by cell.
I'm checking the GPU.
!nvidia-smi
Imports.
from IPython.display import Image, clear_output # to display images
I'm mounting Google Drive.
from google.colab import drive
drive.mount('/content/drive')
Moving to the working directory.
import os
os.chdir("./drive/My Drive/YOLOv5/")
Checking resources. This is mainly to check the RAM, but this time the data is small and we only run 100 epochs, so you don't have to worry too much.
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn't guaranteed
gpu = GPUs[0]
def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
I'm cloning yolov5.
!git clone https://github.com/ultralytics/yolov5
Installing the packages required for execution.
!pip install -r yolov5/requirements.txt
Installing `apex`. It makes training faster.
!git clone https://github.com/NVIDIA/apex
!pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
Setting up TensorBoard. It looks cool.
# Start tensorboard
%load_ext tensorboard
%tensorboard --logdir runs
By the way, this is what it looked like after training for 100 epochs. 100 isn't enough. (I forgot to capture the loss plot.)
Now for training. I wrote the arguments by looking at train.py, but I'll explain them briefly here as well.
#--img: Image size
#--batch: batch size
#--epochs: Number of epochs.
#--data: Data definition file. It is created automatically when the data is downloaded. It's simple, so please take a look at the contents.
#--cfg: Model configuration file.
#--name: Model file name. The most accurate model found during training is saved as best_mask_wearing.pt.
#--weights: The original model for fine tuning. This time, we have specified the COCO pretrained model, but you can also specify the model you learned yourself.
!python yolov5/train.py --img 416 --batch 16 --epochs 100 --data data/data.yaml --cfg yolov5/models/yolov5x.yaml --name mask_wearing --weights yolov5_models/yolov5x.pt
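For reference, the `data.yaml` passed to `--data` is a small file along these lines. This is only a sketch: the exact paths and class names here are my assumptions, so check the copy Roboflow actually generated for you:

```yaml
# Sketch of a Roboflow-generated data.yaml (contents assumed, not copied)
train: ../train/images   # training images
val: ../valid/images     # validation images

nc: 2                    # number of classes
names: ['mask', 'no-mask']  # class labels (assumed)
```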
Running inference. I didn't have a nice-looking image in the test data, so this time I'm running inference on the training data.
!python yolov5/detect.py --weights weights/best_mask_wearing.pt --img 416 --conf 0.4 --source data/train/images/
Image(filename='/content/drive/My Drive/YOLOv5/inference/output/0_10725_jpg.rf.99ff78c82dadd6d49408164489cb6582.jpg', width=600)
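If you want to flip through several result images instead of hard-coding one long filename, a small sketch like this helps. The `list_outputs` helper is mine, and the `inference/output` default path is an assumption; point it at wherever detect.py wrote its results:

```python
from pathlib import Path

# Hypothetical helper: collect the images detect.py wrote, sorted by name.
def list_outputs(out_dir="inference/output", exts=(".jpg", ".png")):
    """Return sorted image paths under the detection output folder."""
    return sorted(p for p in Path(out_dir).iterdir()
                  if p.suffix.lower() in exts)
```

Then, as above, display one with e.g. `Image(filename=str(list_outputs()[0]), width=600)`.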
YOLOv5 was remarkably easy to use.
You can train just by pointing train.py
at your data.
Moreover, the major data augmentation is applied automatically,
and the default parameters are already well tuned.
It gave me a sense of crisis about where a data analyst's skills will still be needed. That's it.