With just an input layer, a hidden layer, and an output layer, the network could sometimes learn XOR and sometimes could not, so I tried to improve the accuracy.

By the way, here is the output from a run where it fails:
```
#input  [(0,0), (0,1), (1,0), (1,1)]
#output [0.01..., 0.63..., 0.62..., 0.66...]
#answer [0, 1, 1, 0]
```
As you can see, the output for the input (1,1) deviates badly: it should be near 0 but comes out around 0.66. The goal is to do something about this.
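To make the setup concrete, here is a minimal sketch of the kind of 2-2-1 network described above. The post does not show its actual implementation, so the sigmoid activations, squared-error loss, and plain gradient descent below are assumptions, not the author's exact code:

```python
import numpy as np

# Minimal sketch of a 2-2-1 XOR network (assumed details:
# sigmoid activations, squared-error loss, plain gradient descent).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng()
W1 = rng.normal(size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(size=(2, 1))   # hidden -> output weights
b2 = np.zeros(1)

lr = 0.5
for epoch in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (squared-error loss, sigmoid derivative)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Sometimes prints values near [0, 1, 1, 0]; sometimes gets stuck,
# like the failing output shown above.
print(out.ravel())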
- Increase the number of hidden-layer nodes from 2 to 3
- Decrease the learning rate as training progresses
- Increase the number of networks (individuals) being trained
The learning-rate decay was done by adding a simple expression (sketched below). I tried about 10 runs, but the results barely changed. With more trials the outcome might differ, but since the way the numbers fluctuated showed little change, I stopped after about 10 runs.
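The post only says the decay was "a simple expression" without giving it; one common form is inverse-time decay, shown here as an assumption (the constants `lr0` and `decay` are hypothetical values, not the author's):

```python
lr0 = 0.5      # initial learning rate (hypothetical value)
decay = 0.001  # decay constant (hypothetical value)

def learning_rate(epoch):
    # learning rate shrinks smoothly as training progresses
    return lr0 / (1.0 + decay * epoch)
```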
I later realized that these tweaks only affect the precision at the end of training, so of course they didn't change much. When a run goes wrong, it goes wrong badly right from the start, so the tweaks are pointless if you can't influence how the network behaves in its initial phase.
I'm not sure what to call this, but the idea is: since a bad start dooms a run, first train many candidates for a reduced number of epochs, keep only the ones that are doing well, and then let those continue learning. In other words, form a pool of candidates, filter it down to the excellent ones, and train only among those.
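Here is a sketch of that selection idea (it resembles random restarts with early filtering). The population size, epoch counts, and keep fraction are hypothetical choices; the network code mirrors the assumed sketch earlier in the post:

```python
import numpy as np

# Selection sketch: train many randomly initialized candidates briefly,
# keep the lowest-loss ones, then train only those for much longer.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.default_rng()

def new_net():
    # fresh random 2-2-1 network: [W1, b1, W2, b2]
    return [rng.normal(size=(2, 2)), np.zeros(2),
            rng.normal(size=(2, 1)), np.zeros(1)]

def train(net, epochs, lr=0.5):
    W1, b1, W2, b2 = net   # arrays are updated in place
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)
    return net

def loss(net):
    W1, b1, W2, b2 = net
    out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
    return float(((out - y) ** 2).sum())

# Phase 1: short training run for every candidate (hypothetical sizes).
population = [train(new_net(), epochs=500) for _ in range(20)]
# Phase 2: keep the best quarter and let only those keep learning.
survivors = sorted(population, key=loss)[:5]
for net in survivors:
    train(net, epochs=10000)

best = min(survivors, key=loss)
W1, b1, W2, b2 = best
print(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel())
```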
Previously the outputs landed close to the correct answers in only about 1 run in 5, but after this change they were close in about 1 run in 2! When I increased the number of candidates, the gap widened even further.
I don't know what to call this technique, but raising a pool of learners and then letting only the excellent ones keep learning felt great! ~~So in the end it's a world where only the excellent survive... scary...~~ AI and humans are the same!