Last time I tried CIFAR-10 image classification using Chainer's new trainer feature, but my machine didn't have enough power, so I ended up unable to confirm that it actually worked. So this time I will check how to use trainer by building an Autoencoder on MNIST.
For the Autoencoder itself, I referred to these articles:

- [Deep learning] Try an Autoencoder with Chainer and visualize the result
- Try making a Deep Autoencoder with Chainer
I will create a network that takes 1,000 MNIST handwritten digits as input and passes them through a single hidden layer to produce an output equal to the input. The entire code is listed [here](https://github.com/trtd56/Autoencoder).
The number of hidden units is set to 64. Also, when the model is called with hidden=True, it returns the hidden layer's output instead of the reconstruction.
```python
import chainer
import chainer.functions as F
import chainer.links as L

class Autoencoder(chainer.Chain):

    def __init__(self):
        super(Autoencoder, self).__init__(
            encoder=L.Linear(784, 64),
            decoder=L.Linear(64, 784))

    def __call__(self, x, hidden=False):
        h = F.relu(self.encoder(x))
        if hidden:
            return h
        else:
            return F.relu(self.decoder(h))
```
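As a small usage sketch (here x is assumed to be a float32 array of shape (n, 784), which is not part of the original code), the hidden flag lets you pull out the 64-dimensional representation instead of the reconstruction:

```python
# Usage sketch: x is assumed to be a float32 array of shape (n, 784).
ae = Autoencoder()
y = ae(x)                # reconstruction, shape (n, 784)
h = ae(x, hidden=True)   # 64-dimensional hidden representation
```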
Next, read the MNIST data and create the training and test data. The training data doesn't need labels, and the target output is the same as the input, so the shape of the data is adjusted a little.
```python
from chainer.datasets import tuple_dataset

# Read the MNIST data
train, test = chainer.datasets.get_mnist()

# Training data: drop the labels and pair each input with itself
train = train[0:1000]
train = [i[0] for i in train]
train = tuple_dataset.TupleDataset(train, train)
train_iter = chainer.iterators.SerialIterator(train, 100)

# Test data
test = test[0:25]
```
```python
model = L.Classifier(Autoencoder(), lossfun=F.mean_squared_error)
model.compute_accuracy = False
optimizer = chainer.optimizers.Adam()
optimizer.setup(model)
```
There are two points to note here.

- Defining the loss function: when a model is wrapped with L.Classifier, the loss function defaults to softmax_cross_entropy. This time I want to use mean_squared_error, so it has to be specified with lossfun (a rough sketch of what this does is shown right after this list).
- Not computing accuracy: since the training data has no labels this time, there is no accuracy to compute, so compute_accuracy needs to be set to False.
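For reference, here is a simplified sketch (not Chainer's actual implementation) of what the Classifier wrapper computes per minibatch when given lossfun=mean_squared_error; the helper name classifier_loss is just for illustration:

```python
# Simplified sketch of what L.Classifier(Autoencoder(), lossfun=F.mean_squared_error)
# computes for a minibatch (x, t); here t equals x because of TupleDataset(train, train).
def classifier_loss(predictor, x, t):
    y = predictor(x)                    # reconstruction from the Autoencoder
    return F.mean_squared_error(y, t)   # the lossfun we passed in
```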
This part hardly needs any explanation. Now that trainer is available, this part can be written this simply, which is a big help ^^
```python
from chainer import training
from chainer.training import extensions

# N_EPOCH: number of training epochs (varied between 1 and 100 below)
updater = training.StandardUpdater(train_iter, optimizer, device=-1)
trainer = training.Trainer(updater, (N_EPOCH, 'epoch'), out="result")

trainer.extend(extensions.LogReport())
trainer.extend(extensions.PrintReport(['epoch', 'main/loss']))
trainer.extend(extensions.ProgressBar())
trainer.run()
```
Now create a function to plot the results with matplotlib. The original label is printed in red above each image. Since I haven't adjusted the coordinates properly, some of the titles overlap the images... Incidentally, if you pass the test data into this function as-is, it plots the original images (see the usage example after the function).
```python
import matplotlib.pyplot as plt
import matplotlib.cm as cm

def plot_mnist_data(samples):
    for index, (data, label) in enumerate(samples):
        plt.subplot(5, 5, index + 1)
        plt.axis('off')
        plt.imshow(data.reshape(28, 28), cmap=cm.gray_r, interpolation='nearest')
        n = int(label)
        plt.title(n, color='red')
    plt.show()
```
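As mentioned above, each element of the test data is already a (data, label) tuple, so the original images can be plotted simply by passing it in:

```python
# Plot the original 25 test images for comparison.
plot_mnist_data(test)
```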
```python
import numpy as np

pred_list = []
for (data, label) in test:
    pred_data = model.predictor(np.array([data]).astype(np.float32)).data
    pred_list.append((pred_data, label))
plot_mnist_data(pred_list)
```
Let's see how the output changes as we increase the number of epochs. The 25 test images include all of the digits 0-9; let's look at how they change.
epoch = 1
At this point it looks like TV static, and you can't tell what it is.
epoch = 5
Something number-like is finally starting to appear, but it's still hard to tell what it is.
epoch = 10
The shapes of 0, 1, 3 and so on are gradually becoming visible. The 6 in the second row is still squashed and hard to make out.
epoch = 20
I can almost see the numbers.
epoch = 100
I jumped ahead to 100 epochs. The 6 in the second row, which had been nearly squashed, is now visible. With more epochs it would probably become even clearer, but I'll stop here this time.
It was fun to watch the network gradually come to recognize numbers as numbers. ~~trainer is convenient, but be careful, because various things such as the loss function are determined automatically.~~ (Fixed on 2016/08/10) It was the Classifier spec, not the trainer, that makes the loss function default to softmax_cross_entropy. The loss function can also be specified when defining the updater used by the trainer, but normally the model set in the optimizer (and its lossfun) is what gets used.
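To illustrate that last point, here is a hedged sketch based on my understanding of StandardUpdater's loss_func argument: when it is omitted, the updater falls back to optimizer.target, i.e. the Classifier we set up, which in turn applies the lossfun we gave it.

```python
# Sketch: these two updaters should behave the same, assuming StandardUpdater's
# loss_func argument defaults to None and falls back to optimizer.target
# (the L.Classifier wrapping the Autoencoder, with lossfun=mean_squared_error).
updater = training.StandardUpdater(train_iter, optimizer, device=-1)
updater = training.StandardUpdater(train_iter, optimizer, device=-1,
                                   loss_func=model)
```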