I recently needed to apply weights trained with Keras to a neural network I had written myself, so I investigated how those weights are laid out.
The Keras Documentation says:
model.get_weights(): Returns a list of all the model's weight tensors as NumPy arrays.
So this function returns a list of NumPy arrays representing the trained weights. However, the documentation does not say which weights in the network each element of that list corresponds to.
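For concreteness, here is a minimal sketch (my own example, not from the Keras docs) that builds a small 2-2-1 fully connected model and prints the shape of each array returned by get_weights(). The layer sizes and sigmoid activations are assumptions chosen to match the network discussed below.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Assumed example: a 2-2-1 fully connected network
# (2 inputs, 2 hidden units, 1 output).
model = keras.Sequential([
    keras.Input(shape=(2,)),
    layers.Dense(2, activation="sigmoid"),   # intermediate (hidden) layer
    layers.Dense(1, activation="sigmoid"),   # output layer
])

weights = model.get_weights()
for i, w in enumerate(weights):
    print(f"list[{i}]: shape {w.shape}")
# list[0]: shape (2, 2)
# list[1]: shape (2,)
# list[2]: shape (2, 1)
# list[3]: shape (1,)
```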
After some trial and error, such as plugging the obtained weights into my own network, and some searching, I found that the list has the following structure. The meaning of the symbols and the structure of the network follow the figure in the references below.
## list[0]
[[w11(1) w12(1)]
[w21(1) w22(1)]]
## list[1]
[w10(1) w20(1)]
## list[2]
[[w11(2)]
[w12(2)]]
## list[3]
[w10(2)]
For a three-layer fully connected neural network, then, the elements appear to be, in order: list[0] holds the input-to-intermediate-layer weights, list[1] the biases of the intermediate layer, list[2] the intermediate-to-output-layer weights, and list[3] the bias of the output layer.
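To apply these weights in a hand-written network, the forward pass can be reproduced directly with NumPy. The sketch below assumes the 2-2-1 sigmoid model from the earlier example; the input values are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Unpack the four arrays in the order described above
# (assumes the 2-2-1 sigmoid model from the earlier sketch).
W1, b1, W2, b2 = model.get_weights()   # shapes: (2, 2), (2,), (2, 1), (1,)

x = np.array([[0.5, -1.2]])            # one arbitrary input sample, shape (1, 2)
hidden = sigmoid(x @ W1 + b1)          # input layer -> intermediate layer
output = sigmoid(hidden @ W2 + b2)     # intermediate layer -> output layer

print(output)                          # should match model.predict(x)
```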
References:
- http://ni4muraano.hatenablog.com/entry/2017/02/01/000000
- https://keras.io/ja/models/about-keras-models/
- https://groups.google.com/forum/#!topic/keras-users/kLm-bTvDTaw