To understand convolutional neural networks, I implemented one in Python using only NumPy, without relying on any deep learning library. In the future I would like to make it GPU-compatible. Also, since I have not yet verified that the error is back-propagated correctly, what I am posting is still a work in progress.
GitHub https://github.com/NaotoMasuzawa/Deep_Learning/tree/master/Python_CNN
In implementing the code, I referred to Yusuke Sugomori's GitHub repository and his technical book. I am very grateful.
GitHub https://github.com/yusugomori
Technical book https://www.amazon.co.jp/Java-Learning-Essentials-Yusuke-Sugomori-ebook/dp/B01956B5RQ
In addition to an ordinary convolutional neural network, dropout, which stochastically sets some of the neurons to 0, is implemented in the hidden layer. ReLU is used as the activation function.
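As a rough NumPy sketch of the idea (not the repository code itself; the function name and the `p_drop` parameter are mine for this example), training-time dropout can be written as:

```python
import numpy as np

def dropout(activations, p_drop, rng=None):
    """Zero each unit with probability p_drop (training-time dropout)."""
    rng = rng or np.random.default_rng()
    mask = (rng.random(activations.shape) >= p_drop).astype(activations.dtype)
    return activations * mask

# Example: a batch of 4 samples with 10 hidden units each.
hidden = np.ones((4, 10))
print(dropout(hidden, p_drop=0.5))  # roughly half of the entries become 0
```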
Build.py: The layer configuration and parameters are set here. To run the code, set the parameters in this file and execute `python Build.py`; an illustrative snippet follows below.
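The actual names in Build.py may differ, but the kind of configuration meant here might look roughly like this hypothetical snippet (all variable names are my own, not taken from the file):

```python
# Hypothetical configuration -- the real Build.py may use different names.
image_size    = (28, 28)          # input image height and width
kernel_sizes  = [(5, 5), (5, 5)]  # one kernel shape per convolution layer
pool_sizes    = [(2, 2), (2, 2)]  # pooling window per conv/pool block
n_hidden      = 500               # units in the fully connected layer
n_outputs     = 10                # number of classes
epochs        = 50
learning_rate = 0.1
```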
Convolutional_Neural_Network.py: Each class is instantiated here according to the layer structure.
Conv_Pool_Layer.py: A convolution layer and a pooling layer are implemented. The convolution layer uses many for loops, and the relationships between the indices get complicated, so I gave each index a meaningful name. As a result, some lines of code are rather long; a sketch of the idea follows below.
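For illustration, here is a simplified, naive convolution of a single-channel image with descriptively named loop indices. This is a sketch in the same spirit, not the file's exact code (strictly speaking it computes cross-correlation, as CNN implementations usually do):

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' convolution of one single-channel image with one kernel."""
    image_h, image_w = image.shape
    kernel_h, kernel_w = kernel.shape
    out_h = image_h - kernel_h + 1
    out_w = image_w - kernel_w + 1
    output = np.zeros((out_h, out_w))
    for out_row in range(out_h):                # row of the output feature map
        for out_col in range(out_w):            # column of the output feature map
            for k_row in range(kernel_h):       # row within the kernel
                for k_col in range(kernel_w):   # column within the kernel
                    output[out_row, out_col] += (
                        kernel[k_row, k_col]
                        * image[out_row + k_row, out_col + k_col]
                    )
    return output
```

Naming the indices `out_row`/`k_row` instead of `i`/`j` makes the lines longer but keeps the image/kernel offset arithmetic readable.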
Hidden_Layer.py: A regular multilayer perceptron with dropout, used for the fully connected layers, is implemented here.
Logistic_Regression.py: Logistic regression is implemented; a minimal sketch of the computation follows below.
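As an illustration of what a logistic regression output layer computes, here is a minimal softmax-based forward pass in NumPy; the names `logistic_forward`, `W`, and `b` are my own for this sketch, not necessarily those in the file:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def logistic_forward(x, W, b):
    """Class probabilities for a batch of flattened feature vectors x."""
    return softmax(x @ W + b)

# Example: 4 flattened feature vectors mapped to 10 class probabilities.
rng = np.random.default_rng(0)
x = rng.random((4, 100))
W = rng.standard_normal((100, 10)) * 0.01
b = np.zeros(10)
probs = logistic_forward(x, W, b)  # shape (4, 10); each row sums to 1
```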
utils.py: The activation function (ReLU) is implemented. If you want to compare it with other activation functions (such as the sigmoid function), add them here, for example as sketched below.
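ReLU itself is a one-liner in NumPy; a sigmoid for comparison, and a ReLU derivative that backpropagation would need, could be added like this (the helper names here are illustrative, not the file's):

```python
import numpy as np

def relu(x):
    """Rectified linear unit: max(0, x), applied elementwise."""
    return np.maximum(0.0, x)

def d_relu(x):
    """Derivative of ReLU (1 where x > 0, else 0), used in backprop."""
    return (x > 0).astype(x.dtype)

def sigmoid(x):
    """Logistic sigmoid, for comparison with ReLU."""
    return 1.0 / (1.0 + np.exp(-x))
```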
This code has not yet been exercised on any heavy workloads, so it may contain bugs. If you find one, I would appreciate a report.