Annotate your own data to train Mask R-CNN

I wanted to do **instance segmentation** (object detection + segmentation) by **annotating my own data** and **training Mask R-CNN**. I had a hard time because I couldn't find many useful articles, so although this is just a memo, I'll share the steps I took.

An explanation of Mask R-CNN itself is omitted here. Please refer to the following: [Introduction to object detection using the latest Region CNN (R-CNN) ~ What is object detection? R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN ~](https://qiita.com/arutema47/items/8ff629a1516f7fd485f9)

The Mask R-CNN implementation uses the following repository: https://github.com/facebookresearch/maskrcnn-benchmark

Data creation with Labelme

https://github.com/wkentaro/labelme The steps are basically the same as in the README.md above, but I'll summarize them below.

Installation

You can also install it with `$ pip install labelme`, but since we will use one of its scripts for data conversion later, clone the repository instead:

.sh


$ git clone https://github.com/wkentaro/labelme.git
$ cd labelme
$ pip install -e .

Start-up

Create a classes.txt that lists the class names, one per line. Put `__ignore__` on the first line, as the data conversion may fail without it. Example:

classes.txt


__ignore__
person
bicycle
car
...

Start labelme with the following command

.sh


$ labelme <Data folder path> --labels <classes.txt path> --nodata

Annotation

Annotation itself is very easy. The following article will be helpful: "Semantic segmentation annotation tool labelme".

Data format conversion

Convert the created annotation data and original images into a format for Mask R-CNN.

.sh


$ cd labelme/examples/instance_segmentation
$ ./labelme2coco.py <Data folder path> <Directory name to create> --labels <classes.txt>

This creates the following:

<Directory name to create>/JPEGImages ... directory containing the images
<Directory name to create>/annotations.json ... JSON file containing the annotation information
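To sanity-check the conversion, it helps to know the COCO-style structure that annotations.json should have. The sketch below builds a minimal example of that structure (file names and values are hypothetical, and the exact fields labelme2coco.py emits may differ slightly):

```python
import json

# Minimal sketch of a COCO-style annotation file like the one
# labelme2coco.py produces. All concrete values are made up.
coco = {
    "images": [
        {"id": 0, "file_name": "JPEGImages/0001.jpg", "height": 480, "width": 640},
    ],
    "annotations": [
        {
            "id": 0,
            "image_id": 0,
            "category_id": 1,  # refers to an entry in "categories"
            "segmentation": [[100, 100, 200, 100, 200, 200]],  # polygon vertices
            "bbox": [100, 100, 100, 100],  # [x, y, width, height]
            "area": 5000.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "person", "supercategory": "person"},
    ],
}

with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)

# Reload and check the top-level keys a COCO-format loader expects
with open("annotations.json") as f:
    data = json.load(f)
print(sorted(data.keys()))  # ['annotations', 'categories', 'images']
```

If the generated file is missing any of these three top-level keys, or if `category_id` values don't match the `categories` list, training will fail when the dataset is loaded.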

Train Mask R-CNN

Installation

Install maskrcnn-benchmark according to the README.md of the repository above: https://github.com/facebookresearch/maskrcnn-benchmark

The following article will also be helpful: [Training on your own data with Pytorch1.1 + MaskRCNN (1)](https://qiita.com/kuroyagi/items/e66ca85f8d118c07eb95#7-%E8%A8%93%E7%B7%B4%E3%81%97%E3%81%9F%E7%B5%90%E6%9E%9C%E3%82%92%E4%BD%BF%E3%81%A3%E3%81%A6%E8%87%AA%E5%88%86%E3%81%A7%E6%8E%A8%E8%AB%96)

Data placement

Place the data where maskrcnn-benchmark can read it: put the JPEGImages directory and annotations.json created above under maskrcnn_benchmark/datasets/<new directory name>/.

Data registration

Add the following to DATASETS in maskrcnn_benchmark/config/paths_catalog.py.

Make sure to include **coco** in the **new data name** so that the dataset is recognized as COCO format.

.py


"<New data name>": {
    "img_dir": "<New directory name>",
    "ann_file": "<New directory name>/annotations.json"
},
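As far as I can tell from the repository, DatasetCatalog.get in paths_catalog.py chooses the loader by a substring test on the dataset name, which is why the name needs to contain "coco". Below is a simplified, hypothetical sketch of that dispatch (the real method also joins paths with DATA_DIR and handles other dataset families):

```python
# Simplified sketch of how paths_catalog.py maps a dataset name to a loader.
# The structure is condensed from maskrcnn-benchmark and should be treated
# as an approximation, not the actual library code.
DATASETS = {
    "coco_my_train": {  # hypothetical name for your own data
        "img_dir": "my_dataset/JPEGImages",
        "ann_file": "my_dataset/annotations.json",
    },
}

def get(name):
    # The substring test on the name decides the COCO loading path,
    # so a registered name without "coco" in it will not be found.
    if "coco" in name:
        attrs = DATASETS[name]
        return {"factory": "COCODataset", "args": attrs}
    raise RuntimeError("Dataset not available: {}".format(name))

print(get("coco_my_train")["factory"])  # COCODataset
```

A name like "my_train" would fall through the substring check and raise an error even though it is registered, which is the failure mode the naming rule above avoids.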

Learning execution

Using the process above, register datasets for both **training** and **test**.

You should then be able to start training with:

.sh


$ python tools/train_net.py --config-file configs/e2e_mask_rcnn_R_50_FPN_1x.yaml DATASETS.TRAIN "(\"<New data name(For learning)>\",)" DATASETS.TEST "(\"<New data name(for test)>\",)"

By giving --config-file a file from configs/, you can choose the network structure and other settings. To change the **learning rate** or **batch size**, you can override the corresponding keys on the command line at runtime.

Example:

$ python tools/train_net.py --config-file configs/e2e_mask_rcnn_R_50_FPN_1x.yaml DATASETS.TRAIN "(\"<New data name(For learning)>\",)" DATASETS.TEST "(\"<New data name(for test)>\",)" SOLVER.BASE_LR 0.0025 SOLVER.IMS_PER_BATCH 2 TEST.IMS_PER_BATCH 1

SOLVER.BASE_LR: learning rate at the start of training (it decays as training progresses)
SOLVER.IMS_PER_BATCH: batch size during training
TEST.IMS_PER_BATCH: batch size at test time
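The example values above follow the linear scaling rule described in the maskrcnn-benchmark README: the default configs assume a total batch size of 16 with BASE_LR 0.02, so a smaller batch gets a proportionally smaller learning rate. A minimal sketch (the default constants are assumptions taken from that README):

```python
# Linear scaling rule sketch: scale the learning rate in proportion
# to the batch size, relative to the defaults the configs were tuned for.
# DEFAULT_BATCH and DEFAULT_LR are assumptions from the repository README.
DEFAULT_BATCH = 16
DEFAULT_LR = 0.02

def scaled_lr(ims_per_batch):
    """Learning rate scaled linearly with the total batch size."""
    return DEFAULT_LR * ims_per_batch / DEFAULT_BATCH

print(scaled_lr(2))  # 0.0025, matching the SOLVER.BASE_LR in the example above
```

So when you drop SOLVER.IMS_PER_BATCH from 16 to 2 (e.g. for a single GPU), scaling SOLVER.BASE_LR down to 0.0025 keeps training behavior close to the tuned defaults.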

All available parameters are listed in maskrcnn_benchmark/config/defaults.py.

Important points

Separate objects with the same label are recognized as one object

When converting labelme data, objects that share a label are merged into a single object by default. If you want them recognized as separate instances, modify labelme2coco.py as described here: https://github.com/wkentaro/labelme/issues/419#issuecomment-511853597

Segmentation is applied to only one object

Depending on the PyTorch version, all labels may end up attached to a single object during training.

Because maskrcnn_benchmark/data/datasets/coco.py#L94 sometimes behaves unexpectedly, change `target = target.clip_to_image(remove_empty=True)` to `target = target.clip_to_image(remove_empty=False)`.
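To see why this flag matters, here is a conceptual sketch (not the library code, which operates on BoxList tensors) of what clipping with remove_empty does: boxes are clamped to the image bounds, and with remove_empty=True any box that becomes degenerate is silently dropped, taking its target with it.

```python
# Conceptual sketch of clip_to_image behavior; the names mirror the
# maskrcnn-benchmark API but this is a plain-Python approximation.
def clip_to_image(boxes, width, height, remove_empty=True):
    clipped = []
    for x1, y1, x2, y2 in boxes:
        # Clamp each corner to the image bounds
        x1, y1 = max(0, x1), max(0, y1)
        x2, y2 = min(width, x2), min(height, y2)
        if remove_empty and (x2 <= x1 or y2 <= y1):
            continue  # remove_empty=True silently drops this target
        clipped.append((x1, y1, x2, y2))
    return clipped

boxes = [(-10, -10, 50, 50), (200, 200, 180, 240)]  # second box is degenerate
print(len(clip_to_image(boxes, 640, 480, remove_empty=True)))   # 1
print(len(clip_to_image(boxes, 640, 480, remove_empty=False)))  # 2
```

With remove_empty=True, a target that clips to zero area disappears from the batch, which can leave only one object's annotations in play; remove_empty=False keeps every target.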

That's it. Thank you very much!
