This is my first post, and I'm a beginner at machine learning. There are probably parts that are hard to follow, or places where I say something strange out of a lack of understanding, but I'd appreciate it if you could watch over me warmly!
visualize.py
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Model, model_from_json
from keras.layers import Input, Conv2D, MaxPooling2D, Dense, Flatten, Dropout
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import os
class Visualize_CNN():
    def __init__(self):
        # Hyperparameters for each layer of the network
        self.conv1_filter_num = 32
        self.conv1_filter_size = (3, 3)
        self.conv1_strides = 1
        self.pool1_filter_size = (2, 2)
        self.conv2_filter_num = 64
        self.conv2_filter_size = (5, 5)
        self.conv2_strides = 1
        self.pool2_filter_size = (2, 2)
        self.dense1_output = 1024
        self.dense2_output = 10
        self.epochs = 1
        self.batch_size = 128
        self.figsize = (10, 10)
        self.save_file_path = "../data/model"
visualize.py
    def load_data(self):
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        # Min-Max normalization: pixel values are known to lie in [0, 255]
        x_train = x_train.astype("float32") / 255
        x_train = x_train.reshape((-1, 28, 28, 1))
        x_test = x_test.astype("float32") / 255
        x_test = x_test.reshape((-1, 28, 28, 1))
        # Convert integer labels to one-hot vectors
        y_train = to_categorical(y_train)
        y_test = to_categorical(y_test)
        return x_train, y_train, x_test, y_test
Preprocessing is done on the MNIST data loaded from keras.datasets. What I did to x_train and x_test was type casting and normalization. The normalization performed this time is **Min-Max normalization**. Written as a formula:

y = \frac{x - x_{min}}{x_{max} - x_{min}}

where x_{max} is the maximum value in the given data and x_{min} is the minimum value. Subtracting the minimum and dividing by the spread between the maximum and minimum scales the data so that the maximum becomes 1 and the minimum becomes 0; that is what makes it **Min-Max normalization**. The data handled this time is MNIST, where each grayscale pixel is known to take a value from 0 to 255, so 0 goes into the minimum part of the formula and 255 into the maximum part. y_train and y_test are one-hot label data; passing them to to_categorical from keras.utils converts them automatically.
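As an illustration, a generic Min-Max normalization could be written like this (a sketch of the formula above, not code from the script):

```python
import numpy as np

def min_max_normalize(x):
    """Scale x linearly so that min(x) maps to 0 and max(x) maps to 1."""
    x = x.astype("float32")
    return (x - x.min()) / (x.max() - x.min())

# For MNIST the range is known to be 0..255, so x / 255 gives the same result.
```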
build_model.py
    def create_model(self):
        input_model = Input(shape=(28, 28, 1))
        conv1 = Conv2D(self.conv1_filter_num,
                       self.conv1_filter_size,
                       padding="same",
                       activation="relu")(input_model)
        pool1 = MaxPooling2D(self.pool1_filter_size)(conv1)
        conv2 = Conv2D(self.conv2_filter_num,
                       self.conv2_filter_size,
                       padding="same",
                       activation="relu")(pool1)
        pool2 = MaxPooling2D(self.pool2_filter_size)(conv2)
        flat = Flatten()(pool2)
        dense1 = Dense(self.dense1_output,
                       activation="relu")(flat)
        dropout = Dropout(0.25)(dense1)
        dense2 = Dense(self.dense2_output,
                       activation="softmax")(dropout)
        model = Model(inputs=input_model, outputs=dense2)
        return model
The model built this time is simple: two convolution (+ pooling) blocks followed by fully connected layers. The number and size of the filters in each layer are defined in the **class initialization** part.
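As a back-of-the-envelope check (my own arithmetic, not part of the original script), the parameter counts of the two convolution layers work out as follows:

```python
# Conv2D parameters = kernel_h * kernel_w * in_channels * out_channels + out_channels (biases)
conv1_params = 3 * 3 * 1 * 32 + 32    # = 320   (input has 1 channel)
conv2_params = 5 * 5 * 32 * 64 + 64   # = 51264 (pool1 output has 32 channels)
print(conv1_params, conv2_params)     # should agree with model.summary()
```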
visualize.py
    def train_and_save(self):
        x_train, y_train, x_test, y_test = self.load_data()
        model = self.create_model()
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        # model.summary()
        history = model.fit(x_train, y_train,
                            batch_size=self.batch_size,
                            epochs=self.epochs,
                            verbose=2,
                            validation_data=(x_test, y_test))
        # Save the architecture (JSON) and the trained weights separately
        json_string = model.to_json()
        with open(os.path.join(self.save_file_path, "model.json"), "w") as f:
            f.write(json_string)
        model.save_weights(os.path.join(self.save_file_path, "model_weights.h5"))
        print("saving successful")
Train the built model and save both the model and the trained weights. Apparently model.save(save_file_path) saves the model and the weights at the same time, but I didn't know that when I wrote the code, so I save them separately.
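For reference, a minimal sketch of that single-call alternative (assuming the same save_file_path; this is not the version I actually ran):

```python
from keras.models import load_model

# model.save() stores architecture, weights, and optimizer state in one file
model.save(os.path.join(self.save_file_path, "model.h5"))

# ...and load_model() restores all of it in one call
model = load_model(os.path.join(self.save_file_path, "model.h5"))
```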
visualize.py
    def visualize(self):
        x_train, _, _, _ = self.load_data()
        # Load the saved architecture and weights
        with open(os.path.join(self.save_file_path, "model.json")) as f:
            json_string = f.read()
        model = model_from_json(json_string)
        model.load_weights(os.path.join(self.save_file_path, "model_weights.h5"))
        # Layers 1-4: conv1, pool1, conv2, pool2 (everything before Flatten)
        layers = model.layers[1:5]
        outputs = [layer.output for layer in layers]
        activation_model = Model(inputs=model.input, outputs=outputs)
        activation_model.summary()
        image = x_train[1].reshape(-1, 28, 28, 1)  # To change the input image, change j in x_train[j]!
        plt.imshow(image.reshape(28, 28))
        activations = activation_model.predict(image)
        x_axis = 8
        y_axis = 8
        for j in range(len(activations)):
            channel_num = activations[j].shape[3]
            act = activations[j]
            plt.figure(figsize=self.figsize)
            # Draw each channel of this layer's output as one cell of a grid
            for i in range(channel_num):
                plt.subplot(x_axis, y_axis, i + 1)
                sns.heatmap(act[0, :, :, i])
            plt.show()
Finally, the saved model and weights are loaded, a new model is defined that outputs the activations of every layer before the fully connected part, and those outputs are drawn as heat maps.
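As a possible tweak, the layers could be selected by type instead of hard-coding model.layers[1:5]; a sketch of that idea (my own untested variation):

```python
from keras.layers import Conv2D, MaxPooling2D

# Collect the outputs of every convolution/pooling layer, however many there are
outputs = [layer.output for layer in model.layers
           if isinstance(layer, (Conv2D, MaxPooling2D))]
activation_model = Model(inputs=model.input, outputs=outputs)
```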
The results are as follows. [Figures: **input image**, **output of convolution layer 1**, **output of convolution layer 2**]
That's it for my first post and my attempt at visualizing the intermediate layers of a neural network! Advice like "if you do it this way, it's easier to read" is very welcome! Thank you for reading to the end m(_ _)m