Easy image classification with TensorFlow

Introduction

Coming back to TensorFlow (Keras) after a long time, I found I had forgotten much of it, so I am leaving this as a memo.

Learning flow

In TensorFlow (TF), training roughly follows this flow:

① Prepare the dataset
② Prepare the model
③ Specify the optimizer, loss, and metrics with model.compile
④ Specify callbacks
⑤ Start training with model.fit
⑥ Evaluate on the test set with model.evaluate
⑦ Get just the predicted probabilities for an image with model.predict

This is a relatively simple way of writing it. If you want a more detailed description, please refer to the article I wrote before.

For those who want to start machine learning with TensorFlow2

① Dataset preparation

Save the images under a directory (one subdirectory per class) and load them from there with flow_from_directory(). Before that, describe the augmentation settings with ImageDataGenerator.

import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
            rescale=1./255,
            zoom_range=0.2,
            horizontal_flip=True,
            rotation_range=20,
            width_shift_range=0.2,
            height_shift_range=0.2,
            )

val_datagen = ImageDataGenerator(rescale=1./255)


train_generator = train_datagen.flow_from_directory(
            TRAIN_DIR,
            target_size=(img_size, img_size),
            batch_size=batch_size,
            classes=classes,
            class_mode='categorical')


val_generator = val_datagen.flow_from_directory(
            VAL_DIR,
            target_size=(img_size,img_size),
            batch_size=batch_size,
            classes=classes,
            class_mode='categorical')

There are many augmentation options available, as shown below, so choose them according to your task. For details, please check the official documentation.

tf.keras.preprocessing.image.ImageDataGenerator(
    featurewise_center=False, samplewise_center=False,
    featurewise_std_normalization=False, samplewise_std_normalization=False,
    zca_whitening=False, zca_epsilon=1e-06, rotation_range=0, width_shift_range=0.0,
    height_shift_range=0.0, brightness_range=None, shear_range=0.0, zoom_range=0.0,
    channel_shift_range=0.0, fill_mode='nearest', cval=0.0,
    horizontal_flip=False, vertical_flip=False, rescale=None,
    preprocessing_function=None, data_format=None, validation_split=0.0, dtype=None
)

flow_from_directory takes a directory path and generates augmented images one batch at a time.


flow_from_directory(
    directory, target_size=(256, 256), color_mode='rgb', classes=None,
    class_mode='categorical', batch_size=32, shuffle=True, seed=None,
    save_to_dir=None, save_prefix='', save_format='png',
    follow_links=False, subset=None, interpolation='nearest'
)

class_mode: one of categorical, binary, sparse, input, or None. The default is categorical.
shuffle: whether to shuffle the order of the data (default: True). If False, images are yielded in alphanumeric order.
save_to_dir: None or str (default: None). Saves the augmented images to the specified directory; useful for visualization.
save_prefix: str. Prefix for the file names of the saved images (valid only if save_to_dir is set).
save_format: 'png' or 'jpeg' (default: 'png').
interpolation: 'nearest' (default), 'bilinear', or 'bicubic'.
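As noted above, save_to_dir is handy for visualizing what the augmentation actually produces. A minimal sketch, assuming a scratch directory (augmented_preview is an arbitrary name):

import os

# Hypothetical output directory for inspecting augmented images
debug_dir = 'augmented_preview'
os.makedirs(debug_dir, exist_ok=True)

preview_generator = train_datagen.flow_from_directory(
    TRAIN_DIR,
    target_size=(img_size, img_size),
    batch_size=batch_size,
    classes=classes,
    class_mode='categorical',
    save_to_dir=debug_dir,   # write each augmented batch to disk
    save_prefix='aug',       # file names start with "aug"
    save_format='png')

# Pulling a few batches triggers the augmentation and saves the images
for _ in range(3):
    next(preview_generator)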

② Model preparation

This time, we will use MobileNetV2 as an example. Other pretrained models are available in tf.keras.applications.

IMG_SHAPE = (img_size, img_size, channels)
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
                                            include_top=False,
                                            weights='imagenet')

global_average_layer = tf.keras.layers.GlobalAveragePooling2D()
prediction_layer = tf.keras.layers.Dense(n_classes, activation='softmax')
model = tf.keras.Sequential([
            base_model,
            global_average_layer,
            prediction_layer
            ])

③ Specify optimizer, loss, metrics with model.compile

Set the learning details for the model.

model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.0001, momentum=0.99, nesterov=True),
            loss='categorical_crossentropy',
            metrics=['accuracy'])

compile(
    optimizer='rmsprop', loss=None, metrics=None, loss_weights=None,
    weighted_metrics=None, run_eagerly=None, steps_per_execution=None, **kwargs
)

optimizer: string or optimizer instance (default: 'rmsprop'). Personally, for image-related tasks I tend to go with SGD.

ex) SGD, RMSprop, Adam, Adadelta,...

sgd = tf.keras.optimizers.SGD(
    learning_rate=0.01, momentum=0.0, nesterov=False, name='SGD', **kwargs
)

loss: string, objective function, or tf.keras.losses.Loss instance.
metrics: list of metrics evaluated during training and testing.
loss_weights: weights applied to the individual losses.
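For reference, the same kind of compile call can also be written with a loss instance and metric instances instead of strings (just a sketch, equivalent to the string form used above):

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.0001, momentum=0.99, nesterov=True),
    loss=tf.keras.losses.CategoricalCrossentropy(),
    metrics=[tf.keras.metrics.CategoricalAccuracy()])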

④ Specify callbacks

You can set CSVLogger, History, ProgbarLogger, TensorBoard, EarlyStopping, ReduceLROnPlateau, and so on, and pass them to model.fit when training. For details, please refer to the official documentation.

Here are some examples.

tf.keras.callbacks.EarlyStopping(
    monitor='val_loss', min_delta=0, patience=0, verbose=0,
    mode='auto', baseline=None, restore_best_weights=False
)
tf.keras.callbacks.ModelCheckpoint(
    filepath, monitor='val_loss', verbose=0, save_best_only=False,
    save_weights_only=False, mode='auto', save_freq='epoch',
    options=None, **kwargs
)

filepath: string or PathLike, the path where the model is saved. Variables such as epoch, loss, acc, val_loss, and val_acc can be embedded in the name. ex) filepath = '{val_loss:.2f}-{val_acc:.2f}.hdf5'
monitor: the quantity used to decide whether to save the model: accuracy, val_accuracy, loss, or val_loss.
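As a concrete example, the cp_callback passed to model.fit below could be defined roughly like this (the filepath pattern and options are just one possible choice):

cp_callback = tf.keras.callbacks.ModelCheckpoint(
    filepath='weights.{epoch:02d}-{val_loss:.2f}.hdf5',  # keys must match the logged metric names
    monitor='val_loss',
    save_best_only=True,      # keep only the best model according to val_loss
    save_weights_only=True,
    verbose=1)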

tf.keras.callbacks.LearningRateScheduler(
    schedule, verbose=0
)
tf.keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss', factor=0.1, patience=10, verbose=0,
    mode='auto', min_delta=0.0001, cooldown=0, min_lr=0, **kwargs
)
tf.keras.callbacks.RemoteMonitor(
    root='http://localhost:9000', path='/publish/epoch/end/',
    field='data', headers=None, send_as_json=False
)
tf.keras.callbacks.CSVLogger(
    filename, separator=',', append=False
)
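LearningRateScheduler expects a schedule function that maps the epoch index (and, optionally, the current learning rate) to the learning rate to use. A minimal sketch, with an arbitrary decay policy:

def schedule(epoch, lr):
    # Keep the initial rate for the first 10 epochs, then decay by 10% per epoch
    if epoch < 10:
        return lr
    return lr * 0.9

lr_callback = tf.keras.callbacks.LearningRateScheduler(schedule, verbose=1)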

⑤ Start training with model.fit

Example

history = model.fit(
        train_generator,
        steps_per_epoch=steps_per_epoch,
        validation_data=val_generator,
        validation_steps=validation_steps,
        epochs=CONFIG.epochs,
        shuffle=True,
        callbacks=[cp_callback])

fit(
    x=None, y=None, batch_size=None, epochs=1, verbose=1, callbacks=None,
    validation_split=0.0, validation_data=None, shuffle=True, class_weight=None,
    sample_weight=None, initial_epoch=0, steps_per_epoch=None,
    validation_steps=None, validation_batch_size=None, validation_freq=1,
    max_queue_size=10, workers=1, use_multiprocessing=False
)

validation_split: float in [0, 1]. Uses that fraction of the training data as validation data; the last part of the x and y data is taken (before shuffling).
validation_data: validation data. Overrides validation_split if both are given.
class_weight: a way to put more weight on the loss of classes with fewer samples. Passed as a dictionary. Example) {0: 0.66, 1: 1.33}
steps_per_epoch: integer or None. Typically the number of training samples // batch_size.
validation_steps: integer or None. Typically the number of validation samples // batch_size.
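As a rough sketch, steps_per_epoch, validation_steps, and a class_weight dictionary could be derived from the generators above like this (the inverse-frequency weighting is just one common choice):

import numpy as np

# Number of batches per epoch = samples // batch_size
steps_per_epoch = train_generator.samples // batch_size
validation_steps = val_generator.samples // batch_size

# Weight each class inversely to its frequency in the training data
counts = np.bincount(train_generator.classes)
class_weight = {i: len(train_generator.classes) / (len(counts) * c)
                for i, c in enumerate(counts)}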

⑥ Evaluate on the test set with model.evaluate


test_loss, test_acc = model.evaluate(test_generator, steps=test_steps)

evaluate(
    x=None, y=None, batch_size=None, verbose=1, sample_weight=None, steps=None,
    callbacks=None, max_queue_size=10, workers=1, use_multiprocessing=False,
    return_dict=False
)

x: input data. NumPy array, tensor, tf.data.Dataset, (inputs, targets), or (inputs, targets, sample_weights).
y: target data.
batch_size: integer or None.
verbose: 0 or 1. Verbosity mode: 0 = silent, 1 = progress bar.
sample_weight: optional weights for the test samples.
steps: integer or None.
callbacks: list of keras.callbacks.Callback instances.
max_queue_size: integer.
workers: integer.
use_multiprocessing: boolean.
return_dict: if True, returns the metric results as a dict.
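With return_dict=True, the results come back keyed by metric name instead of positionally (a small sketch):

results = model.evaluate(test_generator, steps=test_steps, return_dict=True)
print(results)  # e.g. {'loss': ..., 'accuracy': ...}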

⑦ Get just the predicted probabilities for an image with model.predict

predict(
    x, batch_size=None, verbose=0, steps=None, callbacks=None, max_queue_size=10,
    workers=1, use_multiprocessing=False
)
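A minimal usage sketch: predict returns the per-class probabilities from the softmax output, and the predicted class index can be taken with argmax. test_generator and test_steps are assumed to be set up the same way as in the evaluate example above.

import numpy as np

probs = model.predict(test_generator, steps=test_steps)  # shape: (num_samples, n_classes)
predicted_classes = np.argmax(probs, axis=1)              # index of the most probable class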

<< About the files created when the model is saved >>

**checkpoint**: Only one of these files is created. It records which checkpoint file is the latest. It is needed for training, for example when resuming training from saved data, but not at test time.

**XXXXX.data-0000-of-00001**: A dedicated format that stores the mapping from variable names to tensor values.

**XXXXX.index**: A binary file. When data is saved over multiple steps, it identifies which ".data-0000-of-00001" file of the same name corresponds to which step.
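To resume training from these files, the saved weights can be loaded back into a model with the same architecture. A minimal sketch, assuming the weights were saved in this TensorFlow checkpoint format (i.e. a ModelCheckpoint filepath without an .h5/.hdf5 extension) into a hypothetical checkpoint_dir:

# Restore the most recent checkpoint from a directory of saved weights
latest = tf.train.latest_checkpoint(checkpoint_dir)
model.load_weights(latest)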

In closing

This was a good opportunity to read through the detailed settings.
