[For the record] Keras image basics Part 1: How do you create your own dataset?

Nice to meet you.

Starting this year, I have been studying Python in earnest and have been using Keras as my deep learning framework because of its ease of use. I plan to study PyTorch eventually, but before that I wanted to share the Keras (Python) image-handling knowledge that I struggled to pick up during a project, having had no prior programming experience, so this is my first post.

When I first looked into this, I had a hard time hopping between various sites (I did not have enough basic knowledge to understand what was written on Qiita and other commentary sites, and I kept thinking, "Enough with MNIST already, I want to use my own dataset..."). I am also writing this as a record I can look back on later.

In this first part, I will summarize the basics of loading images and creating your own dataset.

I hope this will someday be useful to some beginner out there I will never meet... (I intend to include more comments than other similar articles do.)

Also, since this is my first Qiita post, please forgive the rough formatting. I will improve it gradually, so please bear with me.


Part 1 (this article): Loading images and creating your own dataset

Part 2 (planned next): Feeding the created dataset into a neural network (NN)


1. Loading images in Keras: load_img

In Python you can also load images with Pillow (PIL) or OpenCV, but since this is a Keras article, I will focus on load_img. As an example, suppose your image (A.png) sits in the same directory as your program file (.py): how do you load it?

python


# matplotlib is used to display images; the %matplotlib inline "magic" shows plots inside Jupyter Notebook
import matplotlib.pyplot as plt
%matplotlib inline

# keras.preprocessing.image is the Keras module that makes image handling easy
from keras.preprocessing.image import load_img

# Load as-is (in color). The original Irasutoya image is a (646, 839) 3-channel color png
# The image is in the same directory as the .py file, so it can be loaded simply as 'A.png'
img1 = load_img('A.png')

# Load as grayscale (1 channel)
img2 = load_img('A.png', grayscale=True)

# Load with resizing
img_shape = (256, 256, 3) # Try resizing to a 256 x 256 color image (load_img only uses the height and width)
img3 = load_img('A.png', grayscale=False, target_size=img_shape)
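
Incidentally, load_img returns a PIL Image object, so the standard PIL attributes can be used to check what was loaded. A minimal sketch (the printed values are examples; your image may differ):

python


# load_img returns PIL images, so .size and .mode show the result of each load
print(img1.size, img1.mode) # e.g. (646, 839) RGB -> original size, in color
print(img2.size, img2.mode) # e.g. (646, 839) L   -> grayscale
print(img3.size, img3.mode) # (256, 256) RGB      -> resized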

Next, let's display each image using matplotlib.

python


plt.imshow(img1) # Try swapping in img2 or img3 in place of img1

[Image: the Irasutoya illustration displayed with plt.imshow]

How about that? You can see that the image is displayed properly. By the way, the following processing is generally required before an image can be fed into an NN (neural network).

python


# img_to_array is also needed, so import it as well
from keras.preprocessing.image import load_img, img_to_array

# Load with resizing
img_shape = (256, 256, 3) # 256 x 256 color image
img = load_img('A.png', grayscale=False, target_size=img_shape)

# Convert to a numpy array so it can be fed into the neural network
img_array = img_to_array(img)

# Divide by 255 to normalize the values
img_array /= 255
# img_array = img_array / 255 also works
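
To confirm the conversion and normalization worked, you can check the shape, dtype, and value range of the array. A minimal sketch (the exact values depend on your image):

python


# img_to_array returns a float32 array of shape (height, width, channels)
print(img_array.shape) # (256, 256, 3)
print(img_array.dtype) # float32
print(img_array.min(), img_array.max()) # after dividing by 255, the values lie between 0.0 and 1.0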

Now the image is ready to be fed into an NN. However, you rarely train on just one image; normally you train on many of them. So how do you load multiple images instead of a single one?

python


import numpy as np
import glob
from keras.preprocessing.image import load_img, img_to_array

# Directory (path) name
dir_name = 'picture'
# Image extension
file_type = 'png'

# Get the path of every image with glob.glob
# '/*.png' means every file with the png extension
img_list_path = glob.glob('./' + dir_name + '/*.' + file_type)
# img_list_path = glob.glob('./picture/*.png') works just as well, of course

# Create an empty list to store the loaded images
img_list = []
img_shape = (256, 256, 3)

# Take the paths out one by one with a for loop, load each image with load_img, and store it in the list
for img in img_list_path:
    temp_img = load_img(img, grayscale=False, target_size=img_shape) # Load the image in PIL format
    temp_img_array = img_to_array(temp_img) / 255 # PIL -> array, and normalize by dividing by 255
    img_list.append(temp_img_array) # Add to the prepared list

# Convert the list to an array of shape (n, height, width, channels) to use as training images
x_train = np.array(img_list)

As a check, let's look at the shape of this array and display the 0th image.

python


print(x_train.shape)

plt.imshow(x_train[0]) # Access and display the 0th (i.e. first) element of the array

I won't show the output here, but you should see a shape of (n, 256, 256, 3) and the 0th image displayed properly. This completes the preparation of the input images.

In practice, you need not only to load the images but also to assign the correct labels at the same time. There are various ways to do this, but if all the images in img_list_path belong to the same class, you can add the labels inside the same for loop.

python


img_list = []
y = []
img_shape = (256, 256, 3)

for img in img_list_path:
    temp_img = load_img(img, grayscale=False, target_size=img_shape)
    temp_img_array = img_to_array(temp_img) / 255
    img_list.append(temp_img_array) 
    y.append('Irasutoya') # All images in img_list_path are Irasutoya images, so append that classification tag

# x_train and y_train are paired element by element, in order
x_train = np.array(img_list)
y_train = np.array(y)
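
String labels such as 'Irasutoya' cannot be fed into an NN directly, so they usually have to be converted to integers (and often to one-hot vectors). A minimal sketch, assuming a hypothetical second class 'Other' just for illustration:

python


from keras.utils import to_categorical

# Hypothetical mapping from class name to integer index
class_indices = {'Irasutoya': 0, 'Other': 1}

# Convert the string labels to integer indices, then to one-hot vectors
y_int = np.array([class_indices[label] for label in y])
y_train_onehot = to_categorical(y_int, num_classes=len(class_indices))

print(y_train_onehot.shape) # (n, 2)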

2. ImageDataGenerator: handling a large number of images that do not fit in memory

By the way, since the approach above stores every training image in a list, a problem can occur when you try to use it for training as-is: if you hold a huge number of images in a list, your PC's memory fills up and training cannot even start.

For a first trial or a small number of images, the list approach works, but in practice you can easily be asked to train on, say, 10,000 images of size (256, 256, 3), and then the PC starts groaning away, which is a problem (laugh).
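
As a rough back-of-the-envelope estimate (a minimal sketch; actual usage also includes model and augmentation overhead), you can see why this fills up memory:

python


# Rough memory estimate for 10,000 float32 images of shape (256, 256, 3)
n_images = 10000
bytes_per_image = 256 * 256 * 3 * 4 # 4 bytes per float32 value
total_gb = n_images * bytes_per_image / 1024**3
print(round(total_gb, 1), 'GB') # roughly 7.3 GB just for the image array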

This is where Keras's ImageDataGenerator comes into play. As preparation, the images need to be split into folders according to their classification class.

The folder structure is as follows, and the example program is executed in the same directory as the picture folder. train holds the training images, and val holds the images for validation. (A small sketch for creating this structure programmatically follows the diagram.)

python


'''
picture -╷- train -╷- Apple(Class 1) -╷- **.png  # For training
         ╎         ╎                  ╎  (many images ... omitted)
         ╎         ╎                  ╵- **.png
         ╎         ╵- Mango(Class 2) - (omitted)
         ╎
         ╵-  val  -╷- Apple(Class 1) - (omitted)  # For validation
                   ╵- Mango(Class 2) - (omitted)
'''
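
If it helps, here is a minimal sketch of creating this folder tree programmatically (the class names Apple and Mango and the picture root are taken from the diagram above; adapt them to your own data):

python


import os

# Create the train/val folder tree shown above (exist_ok avoids an error if it already exists)
for split in ['train', 'val']:
    for class_name in ['Apple', 'Mango']:
        os.makedirs(os.path.join('picture', split, class_name), exist_ok=True)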

Now let's take a look at creating a dataset to classify apple and mango images.

python


from keras.preprocessing.image import ImageDataGenerator

# Settings
classes = ['Apple', 'Mango'] # We want to classify apples and mangoes
train_data_dir = './picture/train' # The parent folder of the class folders (the folder above Apple and Mango)
img_height = 100
img_width = 100
# Batch size (the number of images the NN processes at once; powers of 2 are common)
batch_size = 16

# Create the training data
# Rescale by dividing by 255
train_datagen = ImageDataGenerator(rescale = 1.0 / 255) # Data augmentation options can also be set here, but they are omitted this time

# Set up the training generator.
# Roughly speaking, a generator yields images on demand (here, one batch at a time)
train_generator = train_datagen.flow_from_directory(
    train_data_dir,
    target_size = (img_height, img_width), # Height and width only; no channel dimension needed
    color_mode = 'rgb', # For grayscale images, use 'grayscale'
    classes = classes, 
    class_mode = 'binary', # 'binary' because there are two classes; use 'categorical' for 3 or more
    batch_size = batch_size,
    shuffle = True)

This completes the creation of the training dataset. Isn't this easier than the list approach above? The images are now loaded batch by batch, so you won't blow up your memory. Still, if you are worried about whether the images are really being read correctly, let's output the contents of the generator once.

python


# Set the batch size for display to 1
batch_size_show = 1

# Prepare a separate generator (different from the one above) for display
train_generator_show = train_datagen.flow_from_directory(
    train_data_dir,
    target_size = (img_height, img_width), # Height and width only; no channel dimension needed
    color_mode = 'rgb', # For grayscale images, use 'grayscale'
    classes = classes, 
    class_mode = 'binary', # 'binary' because there are two classes; use 'categorical' for 3 or more
    batch_size = batch_size_show,
    shuffle = True)

# Prepare lists to store the images and labels for display
imgs = []
labels = []

# 100 is the number of images to display; you could also set it to the total number of images
for i in range(100):
    x, y = next(train_generator_show) # next() takes the elements out of the generator one batch at a time
    
    imgs.append(x[0])
    labels.append(y[0]) # y is a batch of size 1, so take out the single label

Next, let's actually display the images and labels.

python


# Check the class mapping of the generator
print(train_generator_show.class_indices)

# Display settings
fig = plt.figure(figsize=(12,12))
fig.subplots_adjust(hspace=0.5, wspace=0.5)
row = 10
col = 10

for i, img in enumerate(imgs): # enumerate gives both the index and the element
    plot_num = i + 1
    plt.subplot(row, col, plot_num, xticks=[], yticks=[])
    plt.imshow(img)
    plt.title('%d' % labels[i])
plt.show()

[Image: a grid of Apple and Mango images with their class labels (0/1) shown as titles]

With this, the images and classification tags look correct and the training data can be set up. Next, prepare the validation dataset in the same way (this time pointing at the val folder) to get ready for training.

python



#val setting
classes = ['Apple', 'Mango']
val_data_dir = './picture/val'
img_height = 100
img_width = 100
batch_size = 16

#Create validation data
val_datagen = ImageDataGenerator(rescale = 1.0 / 255)

#Set the validation generator.
val_generator = val_datagen.flow_from_directory(
    val_data_dir,
    target_size = (img_height, img_width),
    color_mode = 'rgb',
    classes = classes, 
    class_mode = 'binary',
    batch_size = batch_size,
    shuffle = True)
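
As a quick preview of where these generators end up (the model itself is the topic of Part 2; this is only a minimal sketch that assumes an already compiled Keras model named model):

python


# Hypothetical example: 'model' is assumed to be an already compiled Keras model (see Part 2)
# steps_per_epoch is usually the number of images divided by the batch size
history = model.fit_generator(
    train_generator,
    steps_per_epoch = train_generator.samples // batch_size,
    validation_data = val_generator,
    validation_steps = val_generator.samples // batch_size,
    epochs = 10)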

That's it for this time. I may add more later.
