After studying Chainer, one of the deep learning frameworks, I tried to approximate the sin function, but it didn't work well. The code is just the mnist example from Chainer rewritten for this task. I don't know whether the dataset is bad or there is a bug in the training process. I would be very grateful if you could point out the problem.
First, create a dataset
make_dataset.py
import numpy as np

def make_dataset():
    x_train = np.arange(0, 3.14 * 50.0, 0.05)
    y_train = np.sin(x_train).astype(np.float32)
    x_test = np.arange(3.14 * 50.0, 3.14 * 60.0, 0.05)
    y_test = np.sin(x_test).astype(np.float32)
    return x_train.astype(np.float32), y_train, x_test.astype(np.float32), y_test
x_train holds the numbers from 0 up to 3.14 * 50 in 0.05 increments as an np.array. ex) [0.0, 0.05, 0.10, ..., 156.95]
y_train holds the sin of each x_train value. ex) [0.0, 0.04997917, 0.09983342, ...]
x_test and y_test are built the same way; only the range differs.
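As a sanity check on the dataset, something like the following can be run (a minimal sketch; it assumes make_dataset() from the file above is importable, and the exact array lengths may differ slightly because np.arange is used with a float step):

import numpy as np
from make_dataset import make_dataset

x_train, y_train, x_test, y_test = make_dataset()
print(x_train.shape, y_train.shape)  # roughly (3140,) (3140,)
print(x_train[:3])                   # [0.   0.05 0.1 ]
print(y_train[:3])                   # [0.         0.04997917 0.09983342]
print(x_test[0], x_test[-1])         # roughly 157.0 and 188.35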
sin_test.py
import numpy as np
import six
import chainer
from chainer import computational_graph as c
from chainer import cuda
import chainer.functions as F
from chainer import optimizers
import matplotlib.pyplot as plt
import csv
def make_dataset():
    x_train = np.arange(0, 3.14 * 50.0, 0.05)
    y_train = np.sin(x_train).astype(np.float32)
    x_test = np.arange(3.14 * 50.0, 3.14 * 60.0, 0.05)
    y_test = np.sin(x_test).astype(np.float32)
    return x_train.astype(np.float32), y_train, x_test.astype(np.float32), y_test
def forward(x_data, y_data, train=True):
    x, t = chainer.Variable(x_data), chainer.Variable(y_data)
    h1 = F.dropout(F.relu(model.l1(x)), train=train)
    h2 = F.dropout(F.relu(model.l2(h1)), train=train)
    h3 = F.dropout(F.relu(model.l3(h2)), train=train)
    y = model.l4(h3)
    # Regression problem, so the loss is the mean squared error
    return F.mean_squared_error(y, t), y
if __name__ == "__main__":
    x_train, y_train, x_test, y_test = make_dataset()
    # Reshape to N x 1 so the data can be passed to chainer.Variable
    x_train, y_train = x_train.reshape(len(x_train), 1), y_train.reshape(len(y_train), 1)
    x_test, y_test = x_test.reshape(len(x_test), 1), y_test.reshape(len(y_test), 1)

    y_t, y_p, ll = [], [], []
    xp = np

    batchsize = 10
    N = len(x_train)
    N_test = len(x_test)
    n_epoch = 100
    n_units = 20
    pred_y = []

    # Network: 1 -> 20 -> 20 -> 20 -> 1
    model = chainer.FunctionSet(l1=F.Linear(1, n_units),
                                l2=F.Linear(n_units, n_units),
                                l3=F.Linear(n_units, n_units),
                                l4=F.Linear(n_units, 1))
    optimizer = optimizers.Adam()
    optimizer.setup(model.collect_parameters())

    x_t, y_t, y_p = [], [], []
    for epoch in six.moves.range(1, n_epoch + 1):
        print('epoch', epoch)

        perm = np.random.permutation(N)
        sum_loss = 0
        for i in six.moves.range(0, N, batchsize):
            x_batch = xp.asarray(x_train[perm[i:i + batchsize]])
            y_batch = xp.asarray(y_train[perm[i:i + batchsize]])

            optimizer.zero_grads()
            loss, y = forward(x_batch, y_batch)
            loss.backward()
            optimizer.update()

            sum_loss += float(cuda.to_cpu(loss.data)) * len(y_batch)
        print "train mean loss = ", sum_loss / N
        sum_loss = 0
        for i in six.moves.range(0, N_test, batchsize):
            x_batch = xp.asarray(x_test[i:i + batchsize])
            y_batch = xp.asarray(y_test[i:i + batchsize])

            loss, y = forward(x_batch, y_batch, train=False)

            # For debugging
            # y_t.append(y_batch[0])
            # y_p.append(y.data[0])
            # x_t.append(x_batch[0])

            sum_loss += float(cuda.to_cpu(loss.data)) * len(y_batch)
        print "test mean loss is ", sum_loss / N_test
    # For debugging
    # f = open('sin_pre.csv', 'ab')
    # csvWriter = csv.writer(f)
    # csvWriter.writerow(y_p)
    # f.close()
    # f = open('sin_ans.csv', 'ab')
    # csvWriter = csv.writer(f)
    # csvWriter.writerow(y_t)
    # f.close()
First, x_train, y_train, x_test, and y_test are reshaped into N x 1 matrices (so they can be passed to Chainer's Variable). The rest is almost the same as the mnist sample. The difference is that this is a regression problem, so the forward() function uses mean_squared_error (the mean squared error function) to compute the loss. The network configuration is 1-20-20-20-1.
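For reference, the reshape and the loss computation look roughly like this (a minimal sketch with toy values, using the same old Chainer API as the script above; a single F.Linear layer stands in for the real 1-20-20-20-1 network):

import numpy as np
import chainer
import chainer.functions as F

# Toy data: three x values and their sin values, as N x 1 float32 arrays
x = np.array([0.0, 0.05, 0.10], dtype=np.float32).reshape(3, 1)
t = np.sin(x).astype(np.float32)

l = F.Linear(1, 1)                 # stand-in for the full network
y = l(chainer.Variable(x))
loss = F.mean_squared_error(y, chainer.Variable(t))
print(loss.data)                   # scalar mean squared error over the batch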
Execution result (up to epoch 10)
('epoch', 1)
train mean loss = 2553.66754833
test mean loss is 127.272548827
('epoch', 2)
train mean loss = 401.413729346
test mean loss is 5.86524515122
('epoch', 3)
train mean loss = 138.270190761
test mean loss is 4.34996299998
('epoch', 4)
train mean loss = 68.4881465446
test mean loss is 0.659433874475
('epoch', 5)
train mean loss = 38.2469408746
test mean loss is 0.640729590383
('epoch', 6)
train mean loss = 24.6955423482
test mean loss is 0.529370371471
('epoch', 7)
train mean loss = 16.3685227446
test mean loss is 0.505678843091
('epoch', 8)
train mean loss = 11.0349840385
test mean loss is 0.542997811425
('epoch', 9)
train mean loss = 7.98288726631
test mean loss is 0.509733980175
('epoch', 10)
train mean loss = 5.89249175341
test mean loss is 0.502585373718
It seems the model is learning, but the test mean loss hovers around 0.5 once the epoch count exceeds about 20, and there is no sign of it dropping further. I can't tell whether it has fallen into a local minimum, whether the parameters simply aren't being updated, or whether there is just a mistake in the code.
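In case it is useful for debugging, this is roughly how the predictions could be compared with the ground truth using the already-imported matplotlib (a sketch that assumes the commented-out debug appends in the evaluation loop are enabled, e.g. for the last epoch only, so that x_t, y_t, and y_p hold one point per test batch):

# Compare the network output with sin(x) on the test range
plt.plot(x_t, y_t, label='sin(x) (ground truth)')
plt.plot(x_t, y_p, label='network prediction')
plt.xlabel('x')
plt.ylabel('y')
plt.legend()
plt.show()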
Reference: Chainer