Visualize the middle layers of a neural network

This is my first post, and I am a beginner in machine learning. There are probably rough spots, including things that are just plain wrong due to gaps in my understanding, so I would appreciate your patience and any feedback!

Class initialization

visualize.py


from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Model, model_from_json
from keras.layers import Input, Conv2D, MaxPooling2D, Dense, Flatten, Dropout
import numpy as np
import matplotlib.pyplot as plt  # needed for the plotting in visualize()
import seaborn as sns
import os


class Visualize_CNN():
    def __init__(self):
        # Convolution block 1: 32 filters of size 3x3, then 2x2 max pooling
        self.conv1_filter_num = 32
        self.conv1_filter_size = (3,3)
        self.conv1_strides = 1
        self.pool1_filter_size = (2,2)

        # Convolution block 2: 64 filters of size 5x5, then 2x2 max pooling
        self.conv2_filter_num = 64
        self.conv2_filter_size = (5,5)
        self.conv2_strides = 1
        self.pool2_filter_size = (2,2)

        # Fully connected layers: 1024 hidden units, 10 output classes
        self.dense1_output = 1024
        self.dense2_output = 10

        # Training settings
        self.epochs = 1
        self.batch_size = 128

        # Plotting and model-saving settings
        self.figsize = (10,10)
        self.save_file_path = "../data/model"

Read data

visualize.py


    def load_data(self):
        # Load MNIST, cast to float32, and apply Min-Max normalization (pixels are 0-255)
        (x_train, y_train), (x_test, y_test) = mnist.load_data()
        x_train = x_train.astype("float32") / 255
        x_train = x_train.reshape((-1, 28, 28, 1))
        x_test = x_test.astype("float32") / 255
        x_test = x_test.reshape((-1, 28, 28, 1))
        # Convert integer labels to one-hot vectors
        y_train = to_categorical(y_train)
        y_test = to_categorical(y_test)

        return x_train, y_train, x_test, y_test

We read the MNIST data from keras.datasets and preprocess it. For x_train and x_test, this means casting the type and normalizing the values. The normalization used here is **Min-Max normalization**. Written as a formula:

y = \frac{x - x_{min}}{x_{max} - x_{min}}

x_{max}: the maximum value in the given data
x_{min}: the minimum value in the given data

Dividing each value's distance from the minimum by the range between the maximum and minimum scales the data so that the maximum becomes 1 and the minimum becomes 0, which is why it is called **Min-Max normalization**. The data handled this time is MNIST, where each grayscale pixel value is known to lie between 0 and 255, so 0 goes into the minimum part of the formula and 255 into the maximum part. y_train and y_test are the one-hot label data; passing them to to_categorical from keras.utils converts them automatically.
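As a quick illustration (a minimal standalone sketch, not part of the original code), here is the same formula applied to a few pixel-like values with NumPy:

import numpy as np

# Min-Max normalization on a toy array. For MNIST the range is known
# in advance, so x_min = 0 and x_max = 255.
x = np.array([0, 64, 128, 255], dtype="float32")
x_min, x_max = 0.0, 255.0
y = (x - x_min) / (x_max - x_min)
print(y)  # [0.         0.2509804  0.5019608  1.        ]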

Build a model

visualize.py


    def create_model(self):
        # Functional API: two convolution + pooling blocks, then fully connected layers
        input_model = Input(shape=(28, 28, 1))
        conv1 = Conv2D(self.conv1_filter_num,
                       self.conv1_filter_size,
                       padding="same",
                       activation="relu")(input_model)
        pool1 = MaxPooling2D(self.pool1_filter_size)(conv1)
        conv2 = Conv2D(self.conv2_filter_num,
                       self.conv2_filter_size,
                       padding="same",
                       activation="relu")(pool1)
        pool2 = MaxPooling2D(self.pool2_filter_size)(conv2)
        flat = Flatten()(pool2)
        dense1 = Dense(self.dense1_output,
                       activation="relu")(flat)
        dropout = Dropout(0.25)(dense1)
        dense2 = Dense(self.dense2_output,
                       activation="softmax")(dropout)

        model = Model(inputs=input_model, outputs=dense2)
        return model

The model built this time is simple: two convolutions, each followed by pooling, and then fully connected layers. The number of filters and the size of each layer are defined in the **class initialization** section above.
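For reference, here is how the tensor shapes flow through the network with the hyperparameters above (a sanity-check sketch assuming the class as written; model.summary() prints the same information):

# Shape flow with the hyperparameters defined in __init__:
#   Input            (None, 28, 28, 1)
#   Conv2D 32@3x3    (None, 28, 28, 32)   padding="same" keeps the 28x28 size
#   MaxPooling 2x2   (None, 14, 14, 32)
#   Conv2D 64@5x5    (None, 14, 14, 64)
#   MaxPooling 2x2   (None, 7, 7, 64)
#   Flatten          (None, 3136)         7 * 7 * 64 = 3136
#   Dense            (None, 1024)
#   Dropout          (None, 1024)
#   Dense (softmax)  (None, 10)
model = Visualize_CNN().create_model()
model.summary()  # prints the layer-by-layer output shapes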

Model training and storage

visualize.py


    
    def train_and_save(self):
        x_train, y_train, x_test, y_test = self.load_data()
        model = self.create_model()
        model.compile(optimizer="adam",
                      loss="categorical_crossentropy",
                      metrics=["accuracy"])
        #model.summary()
        history = model.fit(x_train, y_train,
                            batch_size=self.batch_size,
                            epochs=self.epochs,
                            verbose=2,
                            validation_data=(x_test, y_test))
        # Save the architecture (JSON) and the trained weights (HDF5) separately
        json_string = model.to_json()
        with open(os.path.join(self.save_file_path, "model.json"), "w") as f:
            f.write(json_string)
        model.save_weights(os.path.join(self.save_file_path, "model_weights.h5"))
        print("saving successful")

Here we train the built model and save both the architecture and the trained weights. Apparently model.save(save_file_path) saves the model and the weights at the same time, but I didn't know that when I wrote this code, so they are saved separately.
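For comparison, a minimal sketch of that single-file approach (here model and save_path stand in for the trained model and save directory from the method above):

from keras.models import load_model

# Alternative: save the architecture and weights together in one HDF5 file...
model.save(os.path.join(save_path, "model.h5"))
# ...and restore both with a single call later
model = load_model(os.path.join(save_path, "model.h5"))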

Visualization of the middle layer

visualize.py



    def visualize(self):
        x_train, _, _, _ = self.load_data()
        # Restore the saved architecture and weights
        with open(os.path.join(self.save_file_path, "model.json")) as f:
            json_string = f.read()
        model = model_from_json(json_string)
        model.load_weights(os.path.join(self.save_file_path, "model_weights.h5"))
        # Take the two convolution and two pooling layers (skipping the Input layer)
        layers = model.layers[1:5]
        outputs = [layer.output for layer in layers]
        activation_model = Model(inputs=model.input, outputs=outputs)
        activation_model.summary()

        image = x_train[1].reshape(-1, 28, 28, 1)  # To change the input image, change the index j in x_train[j]
        plt.imshow(image.reshape(28, 28))
        activation = activation_model.predict(image)
        x_axis = 8
        y_axis = 8
        for j in range(len(activation)):
            channel_num = activation[j].shape[3]  # number of filters in this layer
            act = activation[j]
            plt.figure(figsize=self.figsize)

            # Draw each channel's activation map as a heat map in an 8x8 grid
            for i in range(channel_num):
                plt.subplot(x_axis, y_axis, i + 1)
                sns.heatmap(act[0, :, :, i])
        plt.show()

The model and weights saved earlier are loaded, a new model is defined whose outputs are the outputs of the layers other than the fully connected ones, and each of those outputs is rendered as a heat map.
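Putting it all together, here is a minimal sketch of how the class would be run (this entry point is my own addition, not part of the original script):

if __name__ == "__main__":
    viz = Visualize_CNN()
    viz.train_and_save()  # train on MNIST and save model.json / model_weights.h5
    viz.visualize()       # reload the model and plot the intermediate activations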

Result

The result is as follows.

**Input image**
(Figure_1.png)

**Output of convolution layer 1**
(Figure_2.png)

**Output of convolution layer 2**
(Figure_4.png)

Summary

That wraps up my attempt at visualizing the middle layers of a neural network, and my first post! Advice like "doing it this way here would be easier to read" is very welcome! Thank you for reading to the end m(_ _)m
