Python vs Ruby "Deep Learning from scratch" Chapter 4 Implementation of loss function

Overview

With reference to the code in Chapter 4 of the book "Deep Learning from Scratch: The Theory and Implementation of Deep Learning Learned with Python", two loss functions, the mean squared error and the cross entropy error, are implemented in both Python and Ruby.

External libraries handle the array arithmetic: NumPy for Python and Numo::NArray for Ruby.

If you need to set up an environment, see: Python vs Ruby "Deep Learning from scratch" Chapter 1 Graph of sin and cos functions - Qiita

Implementation of mean squared error and cross entropy error
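
Both losses follow the standard definitions used in the book, where y is the network output and t is the one-hot teacher data:

E = \frac{1}{2} \sum_k (y_k - t_k)^2

E = -\sum_k t_k \log y_k

In the code, a small constant delta is added inside the logarithm so that log(0) never occurs.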

Python

import numpy as np

# Mean squared error (the book's sum-of-squares form: half the sum of squared differences)
def mean_squared_error(y, t):
  # Sum the squared differences between the network output and the teacher data
  return 0.5 * np.sum((y - t)**2)

# Cross entropy error
def cross_entropy_error(y, t):
  delta = 1e-7  # add a tiny value so that log(0) does not produce negative infinity
  return -np.sum(t * np.log(y + delta))

# Test
t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]  # one-hot teacher data: index 2 is the correct class
y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0]  # class 2 has the highest probability (0.6)
y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0]  # class 7 has the highest probability (0.6)
print(mean_squared_error(np.array(y1), np.array(t)))
print(mean_squared_error(np.array(y2), np.array(t)))
print(cross_entropy_error(np.array(y1), np.array(t)))
print(cross_entropy_error(np.array(y2), np.array(t)))
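
As a quick sanity check (a minimal sketch, not part of the original code): because t is one-hot, the cross entropy sum keeps only the correct-class term, so the error reduces to -log of the predicted probability of the correct class.

import numpy as np

y1 = np.array([0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0])

# Only the element where t = 1 (index 2) survives the sum, so:
print(-np.log(y1[2] + 1e-7))  # ~0.5108, matching cross_entropy_error(y1, t) above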

Ruby

require 'numo/narray'

# Mean squared error (the book's sum-of-squares form: half the sum of squared differences)
def mean_squared_error(y, t)
  # Sum the squared differences between the network output and the teacher data
  return 0.5 * ((y - t)**2).sum
end

# Cross entropy error
def cross_entropy_error(y, t)
  delta = 1e-7 # add a tiny value so that log(0) does not produce negative infinity
  return -(t * Numo::NMath.log(y + delta)).sum
end

# Test
t = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0] # one-hot teacher data: index 2 is the correct class
y1 = [0.1, 0.05, 0.6, 0.0, 0.05, 0.1, 0.0, 0.1, 0.0, 0.0] # class 2 has the highest probability (0.6)
y2 = [0.1, 0.05, 0.1, 0.0, 0.05, 0.1, 0.0, 0.6, 0.0, 0.0] # class 7 has the highest probability (0.6)
puts mean_squared_error(Numo::DFloat.asarray(y1), Numo::DFloat.asarray(t))
puts mean_squared_error(Numo::DFloat.asarray(y2), Numo::DFloat.asarray(t))
puts cross_entropy_error(Numo::DFloat.asarray(y1), Numo::DFloat.asarray(t))
puts cross_entropy_error(Numo::DFloat.asarray(y2), Numo::DFloat.asarray(t))

Execution result

Python

0.0975
0.5975
0.510825457099
2.30258409299

Ruby

0.09750000000000003
0.5974999999999999
0.510825457099338
2.302584092994546
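
The two implementations agree; the difference is only in how many digits are printed. The Python figures look like Python 2 output, where print uses str(), which rounds a float to 12 significant digits, while Ruby's puts prints the full shortest round-trip representation. A small illustration (the first comment assumes Python 2; Python 3's print shows the full value either way):

x = 0.510825457099338
print(x)        # Python 2: 0.510825457099 (str() rounds to 12 significant digits)
print(repr(x))  # 0.510825457099338 (full round-trip precision)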

Reference material

- Python vs Ruby "Deep Learning from scratch" Summary - Qiita http://qiita.com/niwasawa/items/b8191f13d6dafbc2fede
