[YOLO v5] Object detection for people wearing masks and those who are not

I heard that YOLO v5 has been released, so I tried it. https://github.com/ultralytics/yolov5/

Table of contents

  1. [Usage data and purpose](#1-usage-data-and-purpose)
  2. [Implementation example](#2-implementation-example)
     1. [Data preparation](#2-1-data-preparation)
     2. [Download pretrained model of COCO data](#2-2-download-pretrained-model-of-coco-data)
     3. [Code download](#2-3-code-download)
     4. [Environment preparation](#2-4-environment-preparation)
     5. [Code execution](#2-5-code-execution)
     6. [Code description](#2-6-code-description)
  3. [Finally](#3-finally)

## 1. Usage data and purpose

I picked up an object-detection dataset from the following site: https://public.roboflow.ai/object-detection/ There are various datasets, but since we are in the middle of the coronavirus pandemic, I chose the one with people wearing and not wearing masks.

The goal is to detect people with and without masks, as shown in the figure below. (before.PNG)

## 2. Implementation example

### 2-1. Data preparation

First, download the data. Go to the following URL: https://public.roboflow.ai/object-detection/

Click **Mask Wearing Dataset**. (data1.png)

Click **416x416-black-padding**. (data2.png)

Click **Download** in the upper-right corner, select **YOLOv5 PyTorch**, and click **Continue** to download.

### 2-2. Download pretrained model of COCO data

Download the set of pretrained models from the following link: https://drive.google.com/drive/folders/1Drs_Aiu7xx6S-ix95f9kNsA6ueKRpN2J

For details on the COCO dataset, see: COCO dataset

### 2-3. Code download

Download **YOLOv5.ipynb** from the GitHub repository below: https://github.com/yuomori0127/YOLOv5_mask_wearing_data

If you would like to view the code on Google Colab, click [here](https://colab.research.google.com/drive/1TvyOG9sf-yx86SzBmYO8f1Y6wxVZEKDN?authuser=2#scrollTo=iCLXFYlQbPTM/).

### 2-4. Environment preparation

I used **Google Colab** as the environment; the server cost is free. See the following article for how to use it: Summary of how to use Google Colab

Put the following three items in any folder on **Google Drive**:

- the data downloaded in 2-1
- the pretrained model downloaded in 2-2
- **YOLOv5.ipynb** downloaded in 2-3
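Before running the notebook, it can save some debugging time to confirm those three items actually landed where the code expects them. Here is a small helper for that; the base folder name `YOLOv5` and the item names are assumptions based on the paths used later in this article, so adjust them to match your own Drive layout:

```python
import os

def missing_items(base_dir, required):
    """Return the required files/folders that are NOT present under base_dir."""
    return [name for name in required
            if not os.path.exists(os.path.join(base_dir, name))]

# Hypothetical layout -- on Colab, base_dir would be something like
# "/content/drive/My Drive/YOLOv5" after mounting Google Drive.
required = ["data", "yolov5_models/yolov5x.pt", "YOLOv5.ipynb"]
print(missing_items("/content/drive/My Drive/YOLOv5", required))
```

An empty list means everything is in place; otherwise the printed names tell you exactly which download step to redo.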

### 2-5. Code execution

Open **YOLOv5.ipynb** from **Google Drive** in **Google Colab**.

First of all, you need to reroll for a GPU. Run `!nvidia-smi` in the top cell (Shift + Enter) and keep rerolling until you get a Tesla P100. It should take fewer than five tries. (nvideasmi.png)

You can reset the runtime from the following menu. (リセマラ.png)
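The reroll check above can be wrapped in a small helper so you don't have to read the `nvidia-smi` table by eye. This is a hypothetical convenience function, not part of the notebook; the list of "good" GPU names is my assumption (the article rerolls for a P100), so widen it if a T4 or V100 is fine for you:

```python
import subprocess

# Which GPU counts as a "win" is an assumption; the article aims for a P100.
GOOD_GPUS = ("Tesla P100",)

def is_good_gpu(smi_output, wanted=GOOD_GPUS):
    """Return True if any of the wanted GPU names appears in nvidia-smi output."""
    return any(name in smi_output for name in wanted)

def check_gpu():
    """Run nvidia-smi and report whether the allocated GPU is worth keeping."""
    out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
    print(out)
    print("Keep this runtime" if is_good_gpu(out)
          else "Factory-reset the runtime and try again")

# On Colab you would simply call: check_gpu()
```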

If you run Runtime -> Run all, everything will execute, but **the folder names or hierarchy probably won't match your setup.** The code is short and shouldn't be too hard to follow, so please adjust the paths to match. Sorry about that.

### 2-6. Code description

Since the code is short, I will explain it cell by cell.

First, check the GPU:

```python
!nvidia-smi
```

Imports:

```python
from IPython.display import Image, clear_output  # to display images
```

Mount Google Drive:

```python
from google.colab import drive
drive.mount('/content/drive')
```

Move into the working directory:

```python
import os
os.chdir("./drive/My Drive/YOLOv5/")
```

Check the resources. This is mainly to check RAM, but this time the dataset is small and we only train for 100 epochs, so you don't have to worry much.

```python
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU

GPUs = GPU.getGPUs()
# XXX: assumes exactly one GPU on Colab, which isn't guaranteed
gpu = GPUs[0]

def printm():
    process = psutil.Process(os.getpid())
    print("Gen RAM Free: " + humanize.naturalsize(psutil.virtual_memory().available),
          " | Proc size: " + humanize.naturalsize(process.memory_info().rss))
    print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(
        gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))

printm()
```

Clone yolov5:

```python
!git clone https://github.com/ultralytics/yolov5
```

Install the packages required for execution:

```python
!pip install -r yolov5/requirements.txt
```

Install NVIDIA `apex`. It makes training faster:

```python
!git clone https://github.com/NVIDIA/apex
!pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./apex
```

Prepare TensorBoard. It looks cool:

```python
# Start tensorboard
%load_ext tensorboard
%tensorboard --logdir runs
```

By the way, this is what TensorBoard looked like after training for 100 epochs; 100 is not enough. (I forgot to capture the loss plot.) (tensorboard.png)

Now, training. The arguments can be found by reading train.py, but I will explain them briefly here as well:

```python
# --img: image size
# --batch: batch size
# --epochs: number of epochs
# --data: data definition file. It is created automatically when the data is downloaded.
#         It's simple, so please take a look at the contents.
# --cfg: model configuration file
# --name: model name. The most accurate model after training is saved as best_mask_wearing.pt.
# --weights: the base model for fine-tuning. This time we specify the COCO pretrained model,
#            but you can also specify a model you trained yourself.
!python yolov5/train.py --img 416 --batch 16 --epochs 100 --data data/data.yaml --cfg yolov5/models/yolov5x.yaml --name mask_wearing --weights yolov5_models/yolov5x.pt
```
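For reference, the `data.yaml` that Roboflow generates with the YOLOv5 PyTorch export looks roughly like the sketch below. The exact paths and class names here are assumptions, so check the file you actually downloaded:

```yaml
# Hypothetical contents of data/data.yaml -- verify against your download.
train: ../train/images
val: ../valid/images

nc: 2                      # number of classes
names: ['mask', 'no-mask'] # class labels
```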

Run inference. There were no good-looking images in the test data, so this time I run inference on the training data:

```python
!python yolov5/detect.py --weights weights/best_mask_wearing.pt --img 416 --conf 0.4 --source data/train/images/
Image(filename='/content/drive/My Drive/YOLOv5/inference/output/0_10725_jpg.rf.99ff78c82dadd6d49408164489cb6582.jpg', width=600)
```

(after.png)

## 3. Finally

YOLOv5 was almost too easy to use. You can train a model just by pointing it at your data and running train.py. Moreover, the major data augmentations are applied automatically, and the default parameters are already well tuned. It gave me a slight sense of crisis about where a data analyst's skills will still matter. That's all.
