Hello, everyone. This article is a line-by-line explanation of mnist_mlp.py, intended for people who are interested in AI but have not yet tried it. By the end, you should understand the basic training flow of deep learning. (It was originally created as in-house training material.)
This is planned as a three-part series.
Since MNIST consists of images, a GPU makes this code much faster to run (it's a bit painful on a CPU). The recommended approach is Google Colaboratory. There are only two things to do:
· Open a new notebook in Python 3
· Enable the GPU from the Runtime menu
You can now use the GPU. Just paste the code into a cell and run it (shortcut: Ctrl + Enter) and it will work.
A dataset of handwritten digit images, often used in machine learning tutorials.
Content: handwritten digits from 0 to 9
Image size: 28 x 28 pixels
Color: grayscale
Data size: 70,000 samples (60,000 training, 10,000 test; both images and labels are provided)
MLP stands for multilayer perceptron. MNIST is image data, but it can be fed to an MLP by reshaping each image from (28, 28) to (784,). (A CNN, which we will cover in Part 2, achieves higher accuracy.)
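As a quick illustration (a minimal NumPy sketch; the dummy array here just stands in for one MNIST image):

import numpy as np
# A dummy 28x28 "image" standing in for one MNIST sample
img = np.zeros((28, 28))
flat = img.reshape(784,)  # the 784 pixels laid out in a single row
print(img.shape, '->', flat.shape)  # (28, 28) -> (784,)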
This code builds a model that recognizes MNIST handwritten digits using Keras and TensorFlow. The model receives a handwritten digit image as input and classifies it into one of the 10 classes, 0 through 9.
'''Trains a simple deep NN on the MNIST dataset.
Gets to 98.40% test accuracy after 20 epochs
(there is *a lot* of margin for parameter tuning).
2 seconds per epoch on a K520 GPU.
'''
#Not needed in Python 3, but required if the code also has to run under Python 2
from __future__ import print_function
#Import the required libraries
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import RMSprop
#Define the constants up front
batch_size = 128 #Batch size: the number of samples processed per weight update
num_classes = 10 #Number of classes to classify into; here, the 10 digits 0 to 9
epochs = 20 #Number of epochs: how many times to iterate over the entire training set
#Load the MNIST data and split it into training data (60,000 samples) and test data (10,000 samples)
(x_train, y_train), (x_test, y_test) = mnist.load_data()
'''Reshape the data so it can be used as MLP input
x_train: (60000, 28, 28) -> (60000, 784)  each 28x28 image flattened into a row
x_test:  (10000, 28, 28) -> (10000, 784)  each 28x28 image flattened into a row'''
x_train = x_train.reshape(60000, 784)
x_test = x_test.reshape(10000, 784)
#Pixel values range from 0 to 255, so normalize the data by dividing by 255
#Convert the data type with .astype('float32') first (otherwise the in-place division below raises an error)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
#Print the number of samples as a check
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
#Convert the label data to one-hot vectors
'''A one-hot vector looks like this:
label 0 1 2 3 4 5 6 7 8 9
0: [1,0,0,0,0,0,0,0,0,0]
8: [0,0,0,0,0,0,0,0,1,0]'''
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
About normalization: each pixel in the image takes a value from 0 to 255. The idea is to convert this to the range 0 to 1. When doing machine learning with images, the values are commonly normalized by dividing by 255.
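As a small sketch of what this does (the pixel values below are made up):

import numpy as np
pixels = np.array([0, 128, 255], dtype='float32')  # made-up pixel values
print(pixels / 255)  # [0. 0.5019608 1.] -- every value now lies between 0 and 1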
About one-hot vectors: there are 10 label classes, 0 through 9, each represented by a number. However, the numeric value of the label itself is meaningless here, since we only want to classify into 10 categories. Converting to one-hot vectors therefore represents each label using only 0s and a single 1.
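For example (a minimal sketch; the labels below are made up), keras.utils.to_categorical performs this conversion:

from keras.utils import to_categorical
labels = [0, 8]  # made-up example labels
print(to_categorical(labels, 10))
# [[1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
#  [0. 0. 0. 0. 0. 0. 0. 0. 1. 0.]]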
#Instantiate the Sequential class
model = Sequential()
#Hidden layers
#Add a fully connected layer (512 units, activation: ReLU, input size: 784)
model.add(Dense(512, activation='relu', input_shape=(784,)))
#Randomly drop 20% of the units during training
model.add(Dropout(0.2))
#Add a fully connected layer (512 units, activation: ReLU, input size inferred automatically)
model.add(Dense(512, activation='relu'))
#Randomly drop 20% of the units during training
model.add(Dropout(0.2))
#Output layer
#Add a fully connected layer (10 units, activation: softmax, input size inferred automatically)
model.add(Dense(num_classes, activation='softmax'))
#Visualize the structure of the model
model.summary()
A Sequential model is built by stacking DNN layers one after another. You only need to specify input_shape for the very first layer. Since this is a multi-class classification problem, softmax is used as the activation function of the output layer.
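For reference, the parameter counts that model.summary() prints can be worked out by hand: a Dense layer has (inputs x units) weights plus one bias per unit, and Dropout adds no parameters.

# Dense(512) fed 784 inputs: 784*512 + 512 = 401,920 parameters
# Dense(512) fed 512 inputs: 512*512 + 512 = 262,656 parameters
# Dense(10)  fed 512 inputs: 512*10  + 10  =   5,130 parameters
# Total: 669,706 trainable parameters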
#Set up the learning process
model.compile(
#Set the loss function. Since this is a classification problem, use categorical_crossentropy
loss='categorical_crossentropy',
#Specify the optimization algorithm (you can also tweak the learning rate and other settings here)
optimizer=RMSprop(),
#Specify the evaluation function
metrics=['accuracy'])
#Train the model
history = model.fit(
#Training data, labels
x_train, y_train,
#Batch size (128)
batch_size=batch_size,
#Number of epochs (20)
epochs=epochs,
#Show training progress as a real-time progress bar (0 hides it)
verbose=1,
#Validation data (evaluated after each epoch to compute the error)
validation_data=(x_test, y_test))
After defining the model, specify the loss function and the optimization algorithm, then compile it. Then pass the data to the model to train it. To build a better model, you need to experiment with different settings: the optimization algorithm, batch size, number of epochs, and so on.
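For example, here is a hedged sketch (not part of the original script) of swapping in a different optimizer and batch size:

from keras.optimizers import Adam
# Recompile the same model with Adam, another common optimizer
model.compile(loss='categorical_crossentropy',
              optimizer=Adam(),
              metrics=['accuracy'])
# A larger batch size means fewer (but less noisy) weight updates per epoch
history = model.fit(x_train, y_train,
                    batch_size=256,
                    epochs=epochs,
                    verbose=1,
                    validation_data=(x_test, y_test))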
#Evaluate on the test data (verbose=0 suppresses progress messages)
score = model.evaluate(x_test, y_test, verbose=0)
#Print the test loss (generalization error)
print('Test loss:', score[0])
#Print the test accuracy (generalization performance)
print('Test accuracy:', score[1])
After training, use the test data to evaluate how well the model performs. The lower the loss and the higher the accuracy, the better the model.
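You can also inspect individual predictions (a small sketch; np.argmax picks the class with the highest softmax probability):

import numpy as np
# Predict class probabilities for the first test image
probs = model.predict(x_test[:1])
print('predicted digit:', np.argmax(probs))
# y_test is one-hot encoded, so argmax recovers the original label
print('true digit:', np.argmax(y_test[:1]))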