[Conclusion] If you change the weight, the graph changes. ▷ Increasing the weight value makes the graph of the activation function steeper. ▷ Decreasing the weight value makes the graph of the activation function flatter.
[Explanation] We considered a simple example in which a single input neuron propagates forward, using the sigmoid function as the activation function.
In the first calculation, the output y is a linear (straight line) graph. ▷ y=wx+b
When that result is passed through the activation function, it becomes a non-linear (curved) graph. ▷ y' = 1/(1 + e^(-y))
You can see that the shape of the graph changes as you change the weight value.
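The effect of the weight can be checked numerically. This is a minimal sketch (plain Python; the weight values are arbitrary choices for illustration) that evaluates y' = 1/(1 + e^(-wx)) at the same inputs for several weights:

```python
import math

def sigmoid(z):
    """Activation function: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

# Same inputs, different weights (bias fixed at 0 for illustration).
xs = [-1.0, -0.5, 0.0, 0.5, 1.0]
for w in (0.5, 1.0, 5.0):
    ys = [round(sigmoid(w * x), 3) for x in xs]  # y = wx, then the activation
    print(f"w={w}: {ys}")
```

At x = 1, for example, the outputs grow with w (roughly 0.62, 0.73, 0.99): the larger the weight, the faster the curve climbs toward 1, which is the "steeper" behaviour described above.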
[Conclusion] If you change the bias, the graph translates. ▷ Increasing the bias value shifts the graph of the activation function to the left (negative direction). ▷ Decreasing the bias value shifts the graph of the activation function to the right (positive direction).
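The shift can be made precise: with y' = sigmoid(wx + b), the curve is centred where wx + b = 0, i.e. at x = -b/w, so changing b slides the whole curve along the x-axis. A quick sketch (the bias values are arbitrary choices for illustration):

```python
import math

def sigmoid(z):
    """Activation function: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

w = 1.0
for b in (-2.0, 0.0, 2.0):
    centre = -b / w  # x where wx + b = 0, so the output is exactly 0.5
    print(f"b={b:+.1f}: curve centred at x={centre:+.1f},",
          f"sigmoid there = {sigmoid(w * centre + b):.2f}")
```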
You can freely change the shape of the graph by adjusting the values of the weight and bias (the parameters).
Machine learning does this adjustment automatically, bringing the graph closer to the ideal one.
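As a toy illustration of that automatic adjustment, the sketch below fits the w and b of a single sigmoid neuron to a target curve by gradient descent on the mean squared error. The target parameters (w = 4, b = -2), the learning rate, and the iteration count are all arbitrary choices for this example, not values from the article.

```python
import math

def sigmoid(z):
    """Activation function: 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

# "Ideal graph": a sigmoid with hypothetical target parameters w=4, b=-2.
xs = [i / 10.0 - 2.0 for i in range(41)]          # 41 points in [-2, 2]
target = [sigmoid(4.0 * x - 2.0) for x in xs]

w, b = 0.5, 0.0   # arbitrary starting parameters
lr = 1.0          # learning rate (an arbitrary choice for this toy example)
for _ in range(20000):
    ys = [sigmoid(w * x + b) for x in xs]                       # forward pass
    grad_z = [(y - t) * y * (1.0 - y) for y, t in zip(ys, target)]
    w -= lr * sum(g * x for g, x in zip(grad_z, xs)) / len(xs)  # dz/dw = x
    b -= lr * sum(grad_z) / len(xs)                             # dz/db = 1

print(f"learned w={w:.2f}, b={b:.2f}")
```

After training, the learned parameters land near the target ones: the steepness (weight) and the horizontal position (bias) of the graph were tuned automatically, exactly the two effects described above.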
**"AI for cat allergies"** https://t.co/4ltE8gzBVv?amp=1
We publish machine learning videos on YouTube. If you have time, please take a look.