With reference to the code in Chapter 4 of the book "Deep Learning from scratch" (Theory and Implementation of Deep Learning Learned with Python), the mean squared error and the cross entropy error, both used as loss functions, are implemented in Python and Ruby.
External libraries are used for the array calculations: NumPy for Python and Numo::NArray for Ruby.
If you need to set up an environment, see here: Python vs Ruby "Deep Learning from scratch" Chapter 1: Graph of sin and cos functions - Qiita
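For reference, the two loss functions implemented below can be written as follows, where y_k is the neural network output for class k and t_k is the one-hot teacher label (a standard formulation consistent with the code below; the factor 1/2 matches the 0.5 in both implementations):

E = \frac{1}{2} \sum_k (y_k - t_k)^2 \qquad \text{(mean squared error)}

E = -\sum_k t_k \log y_k \qquad \text{(cross entropy error)}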
Python
import numpy as np

# Sum of squared errors
def mean_squared_error(y, t):
    # Half the sum of the squared differences between the network
    # output and each element of the teacher data
    return 0.5 * np.sum((y - t)**2)

# Cross entropy error
def cross_entropy_error(y, t):
    delta = 1e-7  # add a small value so that log(0) = minus infinity never occurs
    return -np.sum(t * np.log(y + delta))

# Test
t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]  # the correct class (index 2) is 1, everything else is 0
y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0]  # index 2 has the highest probability (0.6)
y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0]  # index 7 has the highest probability (0.6)

print(mean_squared_error(np.array(y1), np.array(t)))
print(mean_squared_error(np.array(y2), np.array(t)))
print(cross_entropy_error(np.array(y1), np.array(t)))
print(cross_entropy_error(np.array(y2), np.array(t)))
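As a side note on the delta term (a minimal sketch, not part of the original code; the array y here is a hypothetical output that assigns probability 0 to the correct class): without delta, np.log(0) evaluates to minus infinity and the loss becomes unusable.

import numpy as np

t = np.array([0, 0, 1, 0, 0, 0, 0, 0, 0, 0])
# hypothetical output: the correct class (index 2) gets probability 0.0
y = np.array([0.1, 0.05, 0.0, 0.05, 0.05, 0.1, 0.05, 0.5, 0.05, 0.05])

print(-np.sum(t * np.log(y)))         # inf, with a divide-by-zero RuntimeWarning from NumPy
print(-np.sum(t * np.log(y + 1e-7)))  # about 16.12: a large but finite loss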
Ruby
require 'numo/narray'

# Sum of squared errors
def mean_squared_error(y, t)
  # Half the sum of the squared differences between the network
  # output and each element of the teacher data
  0.5 * ((y - t)**2).sum
end

# Cross entropy error
def cross_entropy_error(y, t)
  delta = 1e-7 # add a small value so that log(0) = minus infinity never occurs
  -(t * Numo::NMath.log(y + delta)).sum
end

# Test
t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0] # the correct class (index 2) is 1, everything else is 0
y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0] # index 2 has the highest probability (0.6)
y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0] # index 7 has the highest probability (0.6)

puts mean_squared_error(Numo::DFloat.asarray(y1), Numo::DFloat.asarray(t))
puts mean_squared_error(Numo::DFloat.asarray(y2), Numo::DFloat.asarray(t))
puts cross_entropy_error(Numo::DFloat.asarray(y1), Numo::DFloat.asarray(t))
puts cross_entropy_error(Numo::DFloat.asarray(y2), Numo::DFloat.asarray(t))
Execution results

For both loss functions, y1, which assigns the highest probability (0.6) to the correct class, yields a smaller loss than y2, which assigns it to the wrong class.

Python
0.0975
0.5975
0.510825457099
2.30258409299
Ruby
0.09750000000000003
0.5974999999999999
0.510825457099338
2.302584092994546
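The printed cross entropy values can also be checked by hand: with a one-hot t, only the correct class contributes to the sum, so the loss reduces to minus the log of the probability assigned to the correct class (a quick check using only NumPy, independent of the functions above):

import numpy as np

print(-np.log(0.6 + 1e-7))  # about 0.510825, matches cross_entropy_error(y1, t)
print(-np.log(0.1 + 1e-7))  # about 2.302584, matches cross_entropy_error(y2, t)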
Related: Python vs Ruby "Deep Learning from scratch" Summary - Qiita http://qiita.com/niwasawa/items/b8191f13d6dafbc2fede