Let's play with Amedas data-Part 3

This is a continuation of the previous article.

In the last part I managed to run something like a regression analysis on the AMeDAS data with a hand-rolled neural network. This time I will try the same thing with Keras. This article is a memo of what I learned while studying Keras.

The first thing I did was install Keras. I got stuck for a while on errors caused by mismatched versions, but eventually it worked with the following combination.

python:3.6.19 tensorflow:1.14.0 keras:2.2.0
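As a quick way to confirm what is actually installed (a trivial check of my own, not from the original setup notes):

import tensorflow as tf
import keras

# confirm the installed versions match the combination above
print(tf.__version__)     # expect 1.14.0
print(keras.__version__)  # expect 2.2.0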

Then I looked into how to use it and wrote some code with Keras. I used the Sequential model; the official documentation here explains the detailed specification, and it looks quite handy.

For how to use it, I referred to the following article: Example of extremely simple deep learning by Keras.

According to that article, you can build a neural network with the following short code (for a network of x (1 node) -> 32 hidden nodes -> y (1 node)).

from keras.models import Sequential
from keras.layers import Activation, Dense

# build a model for training
model = Sequential()
# fully connected layer (1 node -> 32 nodes)
model.add(Dense(units=32, input_dim=1, use_bias=True))
# activation function (sigmoid)
model.add(Activation("sigmoid"))

# fully connected layer (32 nodes -> 1 node)
model.add(Dense(units=1))
# compile the model
model.compile(loss="mean_squared_error", optimizer="sgd", metrics=["accuracy"])
# train (x: inputs, y: targets)
model.fit(x, y, epochs=1000, batch_size=32)

Is that all it takes!? Wonderful. Apparently, once you define the network model and compile it, the parameter initialization and the training routine are set up for you automatically. Training then just means passing the training data (inputs and targets) to the fit method.
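Just to convince myself, here is a tiny self-contained run on made-up data (my own toy example, not from the referenced article; the noisy line is arbitrary):

import numpy as np
from keras.models import Sequential
from keras.layers import Activation, Dense

# hypothetical toy data: a noisy line on [0,1]
x_toy = np.random.rand(100, 1)
y_toy = 0.5 * x_toy + 0.05 * np.random.randn(100, 1)

model_toy = Sequential()
model_toy.add(Dense(units=32, input_dim=1))
model_toy.add(Activation("sigmoid"))
model_toy.add(Dense(units=1))
model_toy.compile(loss="mean_squared_error", optimizer="sgd")
model_toy.fit(x_toy, y_toy, epochs=100, batch_size=32, verbose=0)

# the final mean squared error should approach the noise floor
print(model_toy.evaluate(x_toy, y_toy, verbose=0))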

So, let's apply this to the AMeDAS data analysis. I'll paste all the code for now; it's needlessly long...

import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from keras.models import Sequential
from keras.layers import Activation, Dense


# load the AMeDAS data from CSV
csv_input = pd.read_csv(filepath_or_buffer="data_out.csv",
                        encoding="ms932",
                        sep=",")

# size returns the number of elements (rows * columns)
print(csv_input.size)

# extract the specified columns as numpy arrays
x = np.array(csv_input[["hour"]])
y = np.array(csv_input[["wind"]])

# num of records
N = len(x)

# min-max normalization to [0,1]
x_max = np.max(x,axis=0)
x_min = np.min(x,axis=0)
y_max = np.max(y,axis=0)
y_min = np.min(y,axis=0)
x = (x - x_min)/(x_max - x_min)
y = (y - y_min)/(y_max - y_min)

# build a model for training
model = Sequential()
# fully connected layer (1 node -> 32 nodes)
model.add(Dense(units=32, input_dim=1, use_bias=True))
# activation function (sigmoid)
model.add(Activation("sigmoid"))

# fully connected layer (32 nodes -> 1 node)
model.add(Dense(units=1))
# compile the model
model.compile(loss="mean_squared_error", optimizer="sgd", metrics=["accuracy"])
# train
model.fit(x, y, epochs=1000, batch_size=32)

# plot the true values
plt.plot(x,y,marker='x',label="true")
# run inference with the trained Keras model
y_predict = model.predict(x)
# plot the Keras predictions
plt.plot(x,y_predict,marker='x',label="predict")
# show the legend
plt.legend()
# display the figure (needed when run as a plain script)
plt.show()
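One note on the normalization in the script above: since x_max, x_min, y_max and y_min are kept around, the normalized predictions can be mapped back to physical units for plotting. A small sketch (the helper name is my own):

# invert the min-max scaling back to the original units
def denormalize(v_norm, v_min, v_max):
    return v_norm * (v_max - v_min) + v_min

# e.g. wind speed in its original units rather than [0,1]
wind_predict = denormalize(y_predict, y_min, y_max)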

Looking at the console output, training proceeds steadily. (The accuracy metric is not meaningful for a regression problem, so only the loss matters here.)

(earlier epochs omitted)
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 994/1000
23/23 [==============================] - 0s 435us/step - loss: 0.0563 - acc: 0.0435
Epoch 995/1000
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 996/1000
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 997/1000
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 998/1000
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 999/1000
23/23 [==============================] - 0s 0us/step - loss: 0.0563 - acc: 0.0435
Epoch 1000/1000
23/23 [==============================] - 0s 435us/step - loss: 0.0563 - acc: 0.0435

So it seems to train, but what does the result graph look like?

[Figure 2020-04-30 203427.png: the prediction flattens into a simple trend line]

Hmmm, I've settled on a mere trendline like I did before. I tried various things such as playing with the number of middle layers, but there was no big change. Looking at the previous results, it seemed that we could catch a little more if we could set the initial values of the coefficients in the neural network well. Therefore, I designed a Class for setting the initial value of the neural network. The coefficient calculation part uses the previous one (more comments etc.) It's like making one step function with two nodes in the middle layer.

# initial-value information for keras layers and models
# (note: the initializer methods below use the keras backend, imported
#  as "from keras import backend as K" in the full script further down)
class InitInfo:
    
    # constructor
    #  x: input, y: output
    def __init__(self,x,y):
        self.x = x
        self.y = y
        
    # calc coefficients for the keras model's 1st layer
    # input  s: changing point in [0,1]
    #        sign: >0 for a rising step, otherwise falling
    # return b: bias coefficient
    #        w: coefficient of x
    # note - the returned (b, w) make the node behave like a step function at x = s
    def calc_b_w(self,s,sign):
    
        N = 1000 # provisional magnitude (controls the steepness of the step)
        # s = -b/w
        if sign > 0:
            b = -N
        else:
            b = N
        if s != 0:
            w = -b/s
        else:
            w = 1
        return b,w
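    # (added note) e.g. calc_b_w(0.5, 1) returns b=-1000, w=2000, and
    # sigmoid(2000*x - 1000) jumps from ~0 to ~1 around x = 0.5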
    
    # calc coefficients for the keras model's 1st and 2nd layers
    def calc_w_h(self):
    
        K = len(self.x)  # number of data points (local name, not the keras backend)
        # coefficients of the 1st layer (w, b)
        w_array = np.zeros([K*2,2])
        # coefficients of the 2nd layer
        h_array = np.zeros([K*2,1])
        
        w_idx = 0
        for k in range(K):
            # x[k] , y[k]
            # make one step function
            # startX : the rise point in [0,1]
            if k > 0:
                startX = self.x[k] +  (self.x[k-1] - self.x[k])/2
            else:
                startX = 0
    
            # endX : the fall point in [0,1]
            if k < K-1:
                endX = self.x[k] + (self.x[k+1] - self.x[k])/2
            else:
                endX = 1
    
            # calc b,w
            if k > 0:
                b,w = self.calc_b_w(startX,1)
            else:
                # first point: the step is already 'up' at the left edge (sigmoid(x+100) ~ 1)
                b = 100
                w = 1
    
            # stepfunction 1stHalf
            #            __________
            # 0 ________|
            #        
            w_array[w_idx,0] = w
            w_array[w_idx,1] = b
            h_array[w_idx,0] = self.y[k]
            w_idx += 1
            
            # stepfunction 2ndHalf
            #        
            # 0 __________
            #             |________
            b,w = self.calc_b_w(endX,1)
            w_array[w_idx,0] = w
            w_array[w_idx,1] = b
            h_array[w_idx,0] = self.y[k]*-1
            
            # the 1st and 2nd halves together form one bump:
            #            _
            # 0 ________| |________
            #
            
            w_idx += 1
        
        # record param
        self.w = w_array
        self.h = h_array
        self.w_init = w_array[:,0]
        self.b_init = w_array[:,1]
        self.paramN = len(h_array)
        return
    
    # for bias coefficients setting
    def initB(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.randn(L).reshape(shape)*5
        value = self.b_init
        value = value.reshape(shape)
        return K.variable(value, name=name)

    # for w coefficients (x) setting
    def initW(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.random(shape)
        #value = np.random.randn(L).reshape(shape)*5
        value = self.w_init
        value = value.reshape(shape)
        return K.variable(value, name=name)
    
    # for h coefficients setting
    def initH(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.randn(L).reshape(shape)*1
        value = self.h
        value = value.reshape(shape)
        return K.variable(value, name=name)

The three methods initB(self, shape, name=None), initW(self, shape, name=None), and initH(self, shape, name=None) are meant to be passed as initializer functions when setting the coefficients of the Dense layers. The commented-out lines are what I used when filling in purely random values instead (kept for experimentation and debugging).
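As a quick sanity check (with made-up values of my own, not the AMeDAS data), you can feed InitInfo a few points and confirm the shapes it produces:

x_chk = np.array([0.0, 0.5, 1.0])   # hypothetical normalized inputs
y_chk = np.array([0.2, 0.8, 0.4])   # hypothetical normalized targets
obj_chk = InitInfo(x_chk, y_chk)
obj_chk.calc_w_h()
print(obj_chk.paramN)                    # 6 = two hidden nodes per data point
print(obj_chk.w.shape, obj_chk.h.shape)  # (6, 2) (6, 1)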

Taking the number of hidden-layer nodes from the members of this InitInfo object, the training part works once it is changed to the following code.

# create InitInfo object
objInitInfo = InitInfo(x,y)
# calc initial values of w and h (and bias)
objInitInfo.calc_w_h()

# build a model for training
model = Sequential()
# fully connected layer (1 node -> paramN nodes)
model.add(Dense(units=objInitInfo.paramN, input_dim=1,
                use_bias=True,
                kernel_initializer=objInitInfo.initW,
                bias_initializer=objInitInfo.initB))
# activation function (sigmoid)
model.add(Activation("sigmoid"))

# fully connected layer (paramN nodes -> 1 node)
model.add(Dense(units=1, kernel_initializer=objInitInfo.initH))
# compile the model
model.compile(loss="mean_squared_error", optimizer="sgd", metrics=["accuracy"])
# train
model.fit(x, y, epochs=1000, batch_size=32)
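To verify the initializers really took effect, one check I'd suggest (my own idea; run it right after building the model, before fit) is to compare the fresh layer weights with the InitInfo members:

# kernel and bias of the first Dense layer, straight after construction
w_set, b_set = model.layers[0].get_weights()
print(np.allclose(w_set.flatten(), objInitInfo.w_init))  # expect True
print(np.allclose(b_set, objInitInfo.b_init))            # expect True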

For the names of the initializer options and so on, I referred to the official Keras documentation. The entire script using this class is pasted below.

sample_KerasNewral.py


import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from keras.models import Sequential
from keras.layers import Activation, Dense
from keras import backend as K


# initial-value information for keras layers and models
class InitInfo:
    
    # constructor
    #  x: input, y: output
    def __init__(self,x,y):
        self.x = x
        self.y = y
        
    # calc coefficients for the keras model's 1st layer
    # input  s: changing point in [0,1]
    #        sign: >0 for a rising step, otherwise falling
    # return b: bias coefficient
    #        w: coefficient of x
    # note - the returned (b, w) make the node behave like a step function at x = s
    def calc_b_w(self,s,sign):
    
        N = 1000 # provisional magnitude (controls the steepness of the step)
        # s = -b/w
        if sign > 0:
            b = -N
        else:
            b = N
        if s != 0:
            w = -b/s
        else:
            w = 1
        return b,w
    
    # calc coefficients for the keras model's 1st and 2nd layers
    def calc_w_h(self):
    
        K = len(self.x)  # number of data points (local name, not the keras backend)
        # coefficients of the 1st layer (w, b)
        w_array = np.zeros([K*2,2])
        # coefficients of the 2nd layer
        h_array = np.zeros([K*2,1])
        
        w_idx = 0
        for k in range(K):
            # x[k] , y[k]
            # make one step function
            # startX : the rise point in [0,1]
            if k > 0:
                startX = self.x[k] +  (self.x[k-1] - self.x[k])/2
            else:
                startX = 0
    
            # endX : the fall point in [0,1]
            if k < K-1:
                endX = self.x[k] + (self.x[k+1] - self.x[k])/2
            else:
                endX = 1
    
            # calc b,w
            if k > 0:
                b,w = self.calc_b_w(startX,1)
            else:
                # first point: the step is already 'up' at the left edge (sigmoid(x+100) ~ 1)
                b = 100
                w = 1
    
            # stepfunction 1stHalf
            #            __________
            # 0 ________|
            #        
            w_array[w_idx,0] = w
            w_array[w_idx,1] = b
            h_array[w_idx,0] = self.y[k]
            w_idx += 1
            
            # stepfunction 2ndHalf
            #        
            # 0 __________
            #             |________
            b,w = self.calc_b_w(endX,1)
            w_array[w_idx,0] = w
            w_array[w_idx,1] = b
            h_array[w_idx,0] = self.y[k]*-1
            
            # the 1st and 2nd halves together form one bump:
            #            _
            # 0 ________| |________
            #
            
            w_idx += 1
        
        # record param
        self.w = w_array
        self.h = h_array
        self.w_init = w_array[:,0]
        self.b_init = w_array[:,1]
        self.paramN = len(h_array)
        return
    
    # for bias coefficients setting
    def initB(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.randn(L).reshape(shape)*5
        value = self.b_init
        value = value.reshape(shape)
        return K.variable(value, name=name)

    # for w coefficients (x) setting
    def initW(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.random(shape)
        #value = np.random.randn(L).reshape(shape)*5
        value = self.w_init
        value = value.reshape(shape)
        return K.variable(value, name=name)
    
    # for h coefficients setting
    def initH(self, shape, name=None):
        #L = np.prod(shape)
        #value = np.random.randn(L).reshape(shape)*1
        value = self.h
        value = value.reshape(shape)
        return K.variable(value, name=name)

 
# load the AMeDAS data from CSV
csv_input = pd.read_csv(filepath_or_buffer="data_out.csv",
                        encoding="ms932",
                        sep=",")

# size returns the number of elements (rows * columns)
print(csv_input.size)

# extract the specified columns as numpy arrays
x = np.array(csv_input[["hour"]])
y = np.array(csv_input[["wind"]])

# num of records
N = len(x)

# min-max normalization to [0,1]
x_max = np.max(x,axis=0)
x_min = np.min(x,axis=0)
y_max = np.max(y,axis=0)
y_min = np.min(y,axis=0)
x = (x - x_min)/(x_max - x_min)
y = (y - y_min)/(y_max - y_min)

# create InitInfo object
objInitInfo = InitInfo(x,y)
# calc initial values of w and h (and bias)
objInitInfo.calc_w_h()

# build a model for training
model = Sequential()
# fully connected layer (1 node -> paramN nodes)
model.add(Dense(units=objInitInfo.paramN, input_dim=1,
                use_bias=True,
                kernel_initializer=objInitInfo.initW,
                bias_initializer=objInitInfo.initB))
# activation function (sigmoid)
model.add(Activation("sigmoid"))

# fully connected layer (paramN nodes -> 1 node)
model.add(Dense(units=1, kernel_initializer=objInitInfo.initH))
# compile the model
model.compile(loss="mean_squared_error", optimizer="sgd", metrics=["accuracy"])
# train
model.fit(x, y, epochs=1000, batch_size=32)

# plot the true values
plt.plot(x,y,marker='x',label="true")
# run inference with the trained Keras model
y_predict = model.predict(x)
# plot the Keras predictions
plt.plot(x,y_predict,marker='x',label="predict")
# show the legend
plt.legend()
# display the figure (needed when run as a plain script)
plt.show()

And the result graph?

[Figure 2020-04-30 204804.png: prediction and true values almost overlap]

They almost match! As last time, this is presumably overfitting, but reproducing the data closely was exactly what I wanted, so it seems to have worked.

So what happens when the initial values of h are made completely random (normal distribution)?

[Figure 2020-04-30 205037.png: result with random initial values for h]

With N = 1000 the convergence was not quite there, so I increased it a little and got the result above. It looks pretty good. On the other hand, what if w and b are also set to completely random numbers?

[Figure 2020-04-30 205237.png: result with random initial values for w and b]

The curve comes out rather rough. And since the initial values are random, the result will differ slightly on every run.
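Incidentally, the random-value runs above just switch to the commented-out lines inside initW / initH / initB. An equivalent, slightly tidier variant (my own; stddev=5 mirrors the *5 in the commented-out code) would replace the hidden-layer line with Keras' built-in RandomNormal initializer:

from keras.initializers import RandomNormal

# hidden layer with random-normal initial values instead of InitInfo
model.add(Dense(units=objInitInfo.paramN, input_dim=1, use_bias=True,
                kernel_initializer=RandomNormal(stddev=5.0),
                bias_initializer=RandomNormal(stddev=5.0)))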

So even with Keras you can choose the initial values quite freely, and the behavior changes accordingly. Interesting stuff.

This time I wrote this up as a memo of what I learned about using Keras. There are still plenty of features that look fun, such as callbacks (see the sketch below), so I will keep exploring, and next time I'll look at classification.
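A minimal EarlyStopping sketch (my guesses for reasonable parameter values; not something this article tested):

from keras.callbacks import EarlyStopping

# stop once the training loss has not improved for 50 consecutive epochs
early_stop = EarlyStopping(monitor="loss", patience=50)
model.fit(x, y, epochs=1000, batch_size=32, callbacks=[early_stop])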
