When I was wondering what to study next, I decided I should acquire AI-related knowledge. I will keep a record of my learning here on Qiita, and I hope it also helps others who are trying to learn machine learning.
Environment: Windows 10, Python 3
A perceptron receives multiple signals as inputs and outputs a single signal. The image is that of a signal flowing through the network and carrying information to the output. A perceptron's signal is binary: it either flows ("1") or does not flow ("0"). The figure below shows an example of a two-input, one-output perceptron.
x_1, x_2 = \text{input signals}\\
w_1, w_2 = \text{weights}\\
y = \text{output signal}
Each input signal sent to the neuron is multiplied by its own unique weight. If the weighted sum exceeds a certain limit, the neuron outputs 1. This is called the **neuron firing**. From here on, this limit is called the threshold. (See the formula below.)
\theta = \text{threshold}
The above can be summarized and expressed in the formula below.
y = \left\{
\begin{array}{ll}
1 & (w_1 x_1 + w_2 x_2 \, > \, \theta) \\
0 & (w_1 x_1 + w_2 x_2 \, \leqq \, \theta)
\end{array}
\right.
Each of the perceptron's input signals has its own weight, and the larger a weight is, the more important the corresponding signal.
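The threshold formula above can be implemented directly. As a minimal sketch (the weights 0.5, 0.5 and threshold 0.7 are example values I chose, not from the original text), it realizes an AND gate:

```python
# Threshold-form perceptron acting as an AND gate.
# w1, w2, theta are example values: any choice with
# w1, w2 <= theta < w1 + w2 behaves the same way.
def AND(x1, x2):
    w1, w2, theta = 0.5, 0.5, 0.7
    tmp = x1 * w1 + x2 * w2
    if tmp > theta:  # the neuron fires only when the weighted sum exceeds theta
        return 1
    return 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"AND({x1}, {x2}) = {AND(x1, x2)}")
```

Only the input (1, 1) pushes the weighted sum (1.0) over the threshold 0.7, so only that case fires.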
Bias is a parameter that adjusts how easily the neuron fires (how readily it outputs "1"). The formulas are shown below.
Replacing the threshold \theta with -b (that is, setting b = -\theta):

y = \left\{
\begin{array}{ll}
1 & (w_1 x_1 + w_2 x_2 \, > \, -b) \\
0 & (w_1 x_1 + w_2 x_2 \, \leqq \, -b)
\end{array}
\right.

Moving -b to the left-hand side:

y = \left\{
\begin{array}{ll}
1 & (b + w_1 x_1 + w_2 x_2 \, > \, 0) \\
0 & (b + w_1 x_1 + w_2 x_2 \, \leqq \, 0)
\end{array}
\right.
As the equation above shows, the output is determined by whether the sum of the bias and the weighted input signals exceeds 0, so the output value can be controlled through the weights and the bias.
Let's set the weights and the bias to suitable values and run this in Python.
1-1perceptron_and_bias.py
# coding: utf-8
import numpy as np

# Input values
x = np.array([0, 1])
# Weights
w = np.array([0.5, 0.5])
# Bias
b = -0.7

print(x * w)              # element-wise product of inputs and weights
print(np.sum(x * w) + b)  # weighted sum plus bias
Execution result
[0. 0.5]
-0.19999999999999996
Thanks to the bias of -0.7, the weighted sum of 0.5 ends up below 0 (-0.2, printed as -0.19999999999999996 because of floating-point error), so this neuron does not fire.
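Wrapping this calculation in a function gives the bias-form perceptron as an AND gate (a sketch with the same example values, where b = -0.7 plays the role of -\theta):

```python
import numpy as np

# Bias-form perceptron: fires (returns 1) when b + w.x > 0.
# Weights (0.5, 0.5) and bias -0.7 are example values.
def AND(x1, x2):
    x = np.array([x1, x2])
    w = np.array([0.5, 0.5])
    b = -0.7
    tmp = np.sum(w * x) + b
    if tmp > 0:
        return 1
    return 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(f"AND({x1}, {x2}) = {AND(x1, x2)}")
```

This behaves identically to the threshold form; only the bookkeeping has moved from comparing against \theta to comparing against 0.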
I think the material sticks best if you run the code in Python as you learn. This is only the tip of the iceberg in machine learning, so I will keep updating this series.