This article is an easy-to-understand summary of **Deep Learning from Scratch, Chapter 6: Backpropagation**. I was able to understand the chapter myself, so I hope you find this a comfortable read. I would also be very happy if you used it as a reference while studying the book.
To implement backpropagation in a neural network, the neurons need to be implemented as layers. It is easiest to think of a layer as a unit that implements both forward propagation and backpropagation.
The minimum set of layers required to implement a neural network is the **Sigmoid layer, ReLU layer, Affine layer, and the output layer through to the loss function**.
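For reference, here is a minimal sketch of what one of these layers (the ReLU layer) might look like, following the same forward/backward interface used below. This is just an illustration assuming NumPy array inputs, not the book's exact code:

```python
import numpy as np

class Relu:  # ReLU layer (illustrative sketch)
    def __init__(self):
        self.mask = None  # remembers where the input was <= 0

    def forward(self, x):
        self.mask = (x <= 0)   # boolean mask of non-positive inputs
        out = x.copy()
        out[self.mask] = 0     # zero out non-positive elements
        return out

    def backward(self, dout):
        dout[self.mask] = 0    # no gradient flows where the forward output was 0
        dx = dout
        return dx
```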
By implementing each layer as a class, the layers become reusable parts, which makes it easy to rearrange a neural network into various structures.
So this time, I will implement the addition node and the multiplication node introduced in the previous article as layers.
```python
class AddLayer:  # addition layer
    def __init__(self):
        pass  # nothing to store

    def forward(self, x, y):
        out = x + y
        return out

    def backward(self, dout):
        dx = dout * 1
        dy = dout * 1  # for addition, both inputs inherit the upstream derivative as-is
        return dx, dy
```
In the addition layer, forward propagation returns the sum of the two inputs, and backpropagation simply passes the upstream derivative through unchanged to both inputs.
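As a quick check, a forward and backward pass through the addition layer might look like this (the numbers are arbitrary):

```python
add_layer = AddLayer()
out = add_layer.forward(3.0, 5.0)   # 8.0
dx, dy = add_layer.backward(1.0)    # (1.0, 1.0): the upstream derivative passes straight through
print(out, dx, dy)
```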
```python
class MulLayer:  # multiplication layer
    def __init__(self):
        self.x = None
        self.y = None  # hold x and y as instance variables

    def forward(self, x, y):
        self.x = x
        self.y = y  # save the inputs, since they are needed in the backward pass
        out = x * y
        return out

    def backward(self, dout):
        dx = dout * self.y  # derivative w.r.t. x is the upstream derivative times y
        dy = dout * self.x  # derivative w.r.t. y is the upstream derivative times x
        return dx, dy
```
The multiplication layer returns the product of x and y in forward propagation; in backpropagation, it computes each input's derivative by multiplying the upstream derivative by the value of the *other* input.
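To see both layers working together, here is a small example (the values are arbitrary) that computes (x * y) + z and then backpropagates through the graph in reverse order:

```python
mul_layer = MulLayer()
add_layer = AddLayer()

# forward pass
x, y, z = 2.0, 3.0, 4.0
xy = mul_layer.forward(x, y)     # 6.0
out = add_layer.forward(xy, z)   # 10.0

# backward pass: start from d(out)/d(out) = 1 and go through the layers in reverse
dxy, dz = add_layer.backward(1.0)   # (1.0, 1.0)
dx, dy = mul_layer.backward(dxy)    # dx = 3.0, dy = 2.0
print(out, dx, dy, dz)
```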