Record of the TensorFlow MNIST expert edition (visualization with TensorBoard)

Introduction

I started deep learning after seeing the news about Preferred Networks' distributed deep reinforcement learning and thinking "I want to make something like that!" This is ikki. The other day I went to the Robot Exhibition and saw Preferred Networks' exhibit and took part in their workshop; the speed of their technology and their way of thinking are wonderful, and I respect them. "Then use Chainer!" you might say, but that is that; I intend to keep studying hard anyway.

This time I will record (before I forget) what I did when I started with TensorFlow the other day. I first thought of writing about the installation, but I got tired of it partway through and stopped; it is well documented elsewhere. For reference, I used this article. Thank you very much. "I installed TensorFlow (GPU version) on Ubuntu"

MNIST Tutorial ~ Expert ~

I worked through Deep MNIST for Experts. The official tutorial is excellent, so this is mostly a set of notes, and the program I post here is almost a straight copy. This time I referred to this article. Thank you very much. "Build a classifier with a handwriting recognition rate of 99.2% with a TensorFlow convolutional neural network"

mnist_expert.py


#!/usr/bin/env python
# -*- coding: utf-8 -*-

####################################################################
# Implement MNIST with TensorFlow
# A little hard to read because the code is not split into functions
####################################################################

from __future__ import absolute_import,unicode_literals
import input_data
import tensorflow as tf

#Read mnist data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Implement cross entropy
sess = tf.InteractiveSession()      # Start an interactive session (not suited to long-running programs)
# Variables used in the expressions
x = tf.placeholder("float", shape=[None,784])       # input
y_ = tf.placeholder("float", shape=[None,10])       # true class distribution, used in the error function
W = tf.Variable(tf.zeros([784,10]))     # weights
b = tf.Variable(tf.zeros([10]))         # bias
sess.run(tf.initialize_all_variables())     # Initialize the variables (required before they are used)
y = tf.nn.softmax(tf.matmul(x,W)+b)     # y = softmax(Wx + b); differentiation is handled automatically
cross_entropy = -tf.reduce_sum(y_*tf.log(y))        # Define the cross entropy

# Learning algorithm and minimization problem
# Minimize with gradient descent (steepest descent); learning rate 0.01
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

# Train for 1000 iterations with batch size 50 (I have not fully studied what this loop does yet)
for i in range(1000):
    batch = mnist.train.next_batch(50)
    train_step.run(feed_dict={x:batch[0],y_:batch[1]})

# View the results
correct_prediction = tf.equal(tf.argmax(y,1),tf.argmax(y_,1))# Check whether the most likely class, argmax(y), equals the teacher label
accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))# Evaluate each sample and take the mean
print accuracy.eval(feed_dict={x: mnist.test.images,y_: mnist.test.labels})# Feed the test images into x and their labels into y_
# The accuracy that comes out at this point is around 91%

###############################################################
# Below, a deep convolutional neural network is constructed.
# By going deeper, it aims for 99% accuracy.
###############################################################

# Weight and bias initialization
# Functions that initialize the weights with a small amount of noise (because of the vanishing gradient problem? I am not sure what that means)
def weight_variable(shape):     #Weight initialization
    initial = tf.truncated_normal(shape,stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):       #Bias initialization
    initial = tf.constant(0.1,shape=shape)
    return tf.Variable(initial)


#Definition of convolution and pooling
def conv2d(x,W):
    return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1],padding='SAME')


# First layer: compute 32 features from 5x5 patches
# In [5,5,1,32], 5,5 is the patch size, 1 is the number of input channels, 32 is the number of output channels
W_conv1 = weight_variable([5,5,1,32])   #Variable definition
b_conv1 = bias_variable([32])           #Variable definition
x_image = tf.reshape(x,[-1,28,28,1])    # Reshape the input into 28*28 monochrome images

h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1)+b_conv1)   # Apply ReLU

h_pool1 = max_pool_2x2(h_conv1)#Creation of pooling layer 1


# Second layer: compute 64 features from 5x5 patches
W_conv2 = weight_variable([5,5,32,64])  #Variable definition
b_conv2 = bias_variable([64])           #Variable definition

h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)

h_pool2 = max_pool_2x2(h_conv2)#Creation of pooling layer 2


# Conversion to a fully connected layer
W_fc1 = weight_variable([7*7*64,1024])  # What kind of transformation is this?
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2,[-1,7*7*64])

h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)


# Apply dropout
keep_prob = tf.placeholder("float")

h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)


# Readout layer
W_fc2 = weight_variable([1024,10])
b_fc2 = bias_variable([10])

y_conv = tf.nn.softmax(tf.matmul(h_fc1_drop,W_fc2)+b_fc2)


#Model learning and evaluation
cross_entropy = -tf.reduce_sum(y_*tf.log(y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)#Uses the Adam method.

correct_prediction = tf.equal(tf.argmax(y_conv,1),tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction,"float"))  # cast?
sess.run(tf.initialize_all_variables())
for i in range(20000):
    batch = mnist.train.next_batch(50)
    if i % 100 == 0:
        train_accuracy = accuracy.eval(feed_dict={x: batch[0], y_: batch[1], keep_prob:1.0})
        print "step %d, training accuracy %g" % (i,train_accuracy)
    train_step.run(feed_dict={x:batch[0],y_:batch[1],keep_prob:0.5})# keep_prob is kept at 0.5 during training

# Display the result
print "test accuracy %g" % accuracy.eval(feed_dict={x:mnist.test.images,y_:mnist.test.labels,keep_prob:1.0})

Hmm, the program runs, but as it stands it seems hard to make use of TensorBoard, which is supposed to be TensorFlow's killer feature. Or rather, the program itself is hard to read.

MNIST Tutorial ~ Improved ~

So I tried to improve it. This time the following article helped me; its author is young but really excellent and always makes interesting things. Thank you very much. "Kivantium activity diary: Identify the production company of the anime Yuruyuri with TensorFlow"

mnist_expert_kai.py


#!/usr/bin/env python
# -*- coding: utf-8 -*-

####################################################################
# Implement MNIST with TensorFlow
# The code was hard to read because it was not split into functions (-> fixed)
####################################################################
import sys
import cv2
import numpy as np
import tensorflow as tf
import tensorflow.python.platform
import input_data

#Read mnist data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

NUM_CLASSES = 10    #Number of model classes

def inference(images_placeholder, keep_prob):
    ####################################################################
    # Function that builds the prediction model
    # Arguments:
    #   images_placeholder: placeholder for the images
    #   keep_prob: placeholder for the dropout keep probability
    # Returns:
    #   y_conv: probability of each class (roughly)
    ####################################################################

    # Initialize weights from a truncated normal distribution with standard deviation 0.1
    def weight_variable(shape):
      initial = tf.truncated_normal(shape, stddev=0.1)
      return tf.Variable(initial)

    # Initialize biases to the constant 0.1
    def bias_variable(shape):
      initial = tf.constant(0.1, shape=shape)
      return tf.Variable(initial)

    #Creating a convolution layer
    def conv2d(x, W):
      return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    #Creating a pooling layer
    def max_pool_2x2(x):
      return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1], padding='SAME')
    
    #Transform input to 28x28x1
    x_image = tf.reshape(images_placeholder, [-1, 28, 28, 1])

    #Creation of convolution layer 1
    with tf.name_scope('conv1') as scope:
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)

    #Creation of pooling layer 1
    with tf.name_scope('pool1') as scope:
        h_pool1 = max_pool_2x2(h_conv1)
    
    #Creation of convolution layer 2
    with tf.name_scope('conv2') as scope:
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

    #Creation of pooling layer 2
    with tf.name_scope('pool2') as scope:
        h_pool2 = max_pool_2x2(h_conv2)

    #Creation of fully connected layer 1
    with tf.name_scope('fc1') as scope:
        W_fc1 = weight_variable([7*7*64, 1024])
        b_fc1 = bias_variable([1024])
        h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
        #dropout settings
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    #Creation of fully connected layer 2
    with tf.name_scope('fc2') as scope:
        W_fc2 = weight_variable([1024, NUM_CLASSES])
        b_fc2 = bias_variable([NUM_CLASSES])

    #Normalization with softmax function
    with tf.name_scope('softmax') as scope:
        y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

    #Returns something like the probability of each label
    return y_conv


def loss(logits, labels):
    ####################################################################
    # Function that computes the loss
    # Arguments:
    #   logits: logit tensor, float - [batch_size, NUM_CLASSES]
    #   labels: label tensor, int32 - [batch_size, NUM_CLASSES]
    # Returns:
    #   cross_entropy: cross entropy tensor, float
    ####################################################################

    #Calculation of cross entropy
    cross_entropy = -tf.reduce_sum(labels*tf.log(logits))
    #Specify to display in TensorBoard
    tf.scalar_summary("cross_entropy", cross_entropy)
    return cross_entropy


def training(loss, learning_rate):
    ####################################################################
    # Function that defines the training op
    # Arguments:
    #   loss: loss tensor, the result of loss()
    #   learning_rate: learning rate
    # Returns:
    #   train_step: training op
    ####################################################################

    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    return train_step


def accuracy(logits, labels):
    ####################################################################
    # Function that computes the accuracy
    # Arguments:
    #   logits: the result of inference()
    #   labels: label tensor, int32 - [batch_size, NUM_CLASSES]
    # Returns:
    #   accuracy: accuracy (float)
    ####################################################################

    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    tf.scalar_summary("accuracy", accuracy)
    return accuracy


if __name__ == '__main__':
    # Variables used in the expressions
    x_image = tf.placeholder("float", shape=[None,784])       # input
    y_label = tf.placeholder("float", shape=[None,10])        # true class distribution, used in the error function
    W = tf.Variable(tf.zeros([784,10]))     # weights
    b = tf.Variable(tf.zeros([10]))         # bias
    #y_label = tf.nn.softmax(tf.matmul(x_image,W)+b)     # y = softmax(Wx + b); differentiation is handled automatically
    keep_prob = tf.placeholder("float")
    #init_op = tf.initialize_all_variables()    # Initialize the variables (defining this outside Session() caused an error, so it is done inside the Session below)

    with tf.Session() as sess:
        logits = inference(x_image,keep_prob)   # call inference() to build the model
        loss_value = loss(logits,y_label)       # call loss() to compute the loss
        train_op = training(loss_value,1e-4)    # call training() to get the training op (1e-4 is the learning rate)
        accur = accuracy(logits,y_label)        # call accuracy() to compute the accuracy
        init_op = tf.initialize_all_variables()
        sess.run(init_op)

        for step in range(20001):
            batch = mnist.train.next_batch(50)
            if step % 100 == 0:
                train_accur = accur.eval(feed_dict={x_image: batch[0], y_label: batch[1], keep_prob:1.0})
                print "step %d, training accuracy %g" % (step,train_accur)
            train_op.run(feed_dict={x_image:batch[0],y_label:batch[1],keep_prob:0.5})# keep_prob is kept at 0.5 during training

        # Display the result
        print "test accuracy %g" % accur.eval(feed_dict={x_image:mnist.test.images,y_label:mnist.test.labels,keep_prob:1.0})

Following the reference, I restructured the code into inference(), loss(), training(), and accuracy(), since dividing it that way is clearly better. Two problems came up. The first was init_op in main: defining init_op outside Session() raised an error, and since I had seen somewhere that it works when defined inside Session(), I moved the definition inside the Session. The second was the final result display, where I got the Out of GPU Memory error mentioned above; I solved it by rewriting convolutional.py as described here (GitHub TensorFlow #157). I still do not understand why either fix works... I wish my English were better...
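For my own reference, here is a minimal sketch of what I understand the two fixes to amount to in the TensorFlow 0.x API used in this post: build the graph and define init_op while the session is live, and cap the GPU memory a session may allocate. This is not the actual patch from the GitHub issue, and the memory fraction of 0.5 is just an illustrative value I chose.

import tensorflow as tf

# Minimal sketch (TF 0.x API); the memory fraction below is an arbitrary example value.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)

with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    w = tf.Variable(tf.zeros([10]))             # build the graph first
    init_op = tf.initialize_all_variables()     # define the init op inside the Session block
    sess.run(init_op)                           # then run it on the live session
    print sess.run(w)

Capping the per-process fraction keeps the session from grabbing all GPU memory up front, which is one common cause of that error.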

MNIST Tutorial ~ For TensorBoard ~

So, below is a version of the above program improved so that it can be viewed on TensorBoard.

mnist_expert_kai2.py


#!/usr/bin/env python
# -*- coding: utf-8 -*-

####################################################################
# Implement MNIST with TensorFlow
# The code was hard to read because it was not split into functions (-> fixed)
####################################################################
import sys
import cv2
import numpy as np
import tensorflow as tf
import tensorflow.python.platform
import input_data

#Read mnist data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

NUM_CLASSES = 10    #Number of model classes

def inference(images_placeholder, keep_prob):
    ####################################################################
    # Function that builds the prediction model
    # Arguments:
    #   images_placeholder: placeholder for the images
    #   keep_prob: placeholder for the dropout keep probability
    # Returns:
    #   y_conv: probability of each class (roughly)
    ####################################################################

    # Initialize weights from a truncated normal distribution with standard deviation 0.1
    def weight_variable(shape):
      initial = tf.truncated_normal(shape, stddev=0.1)
      return tf.Variable(initial)

    # Initialize biases to the constant 0.1
    def bias_variable(shape):
      initial = tf.constant(0.1, shape=shape)
      return tf.Variable(initial)

    #Creating a convolution layer
    def conv2d(x, W):
      return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

    #Creating a pooling layer
    def max_pool_2x2(x):
      return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1], padding='SAME')
    
    #Transform input to 28x28x1
    x_images = tf.reshape(images_placeholder, [-1, 28, 28, 1])

    #Creation of convolution layer 1
    with tf.name_scope('conv1') as scope:
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        h_conv1 = tf.nn.relu(conv2d(x_images, W_conv1) + b_conv1)

    #Creation of pooling layer 1
    with tf.name_scope('pool1') as scope:
        h_pool1 = max_pool_2x2(h_conv1)
    
    #Creation of convolution layer 2
    with tf.name_scope('conv2') as scope:
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)

    #Creation of pooling layer 2
    with tf.name_scope('pool2') as scope:
        h_pool2 = max_pool_2x2(h_conv2)

    #Creation of fully connected layer 1
    with tf.name_scope('fc1') as scope:
        W_fc1 = weight_variable([7*7*64, 1024])
        b_fc1 = bias_variable([1024])
        h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
        h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
        #dropout settings
        h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

    #Creation of fully connected layer 2
    with tf.name_scope('fc2') as scope:
        W_fc2 = weight_variable([1024, NUM_CLASSES])
        b_fc2 = bias_variable([NUM_CLASSES])

    #Normalization with softmax function
    with tf.name_scope('softmax') as scope:
        y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

    #Returns something like the probability of each label
    return y_conv


def loss(logits, labels):
    ####################################################################
    # Function that computes the loss
    # Arguments:
    #   logits: logit tensor, float - [batch_size, NUM_CLASSES]
    #   labels: label tensor, int32 - [batch_size, NUM_CLASSES]
    # Returns:
    #   cross_entropy: cross entropy tensor, float
    ####################################################################

    #Calculation of cross entropy
    cross_entropy = -tf.reduce_sum(labels*tf.log(logits))
    #Specify to display in TensorBoard
    tf.scalar_summary("cross_entropy", cross_entropy)
    return cross_entropy


def training(loss, learning_rate):
    ####################################################################
    # Function that defines the training op
    # Arguments:
    #   loss: loss tensor, the result of loss()
    #   learning_rate: learning rate
    # Returns:
    #   train_step: training op
    ####################################################################

    train_step = tf.train.AdamOptimizer(learning_rate).minimize(loss)
    return train_step


def accuracy(logits, labels):
    ####################################################################
    # Function that computes the accuracy
    # Arguments:
    #   logits: the result of inference()
    #   labels: label tensor, int32 - [batch_size, NUM_CLASSES]
    # Returns:
    #   accuracy: accuracy (float)
    ####################################################################

    correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    tf.scalar_summary("accuracy", accuracy)
    return accuracy


if __name__ == '__main__':
    with tf.Graph().as_default():
        x_image = tf.placeholder("float", shape=[None,784])       # input
        y_label = tf.placeholder("float", shape=[None,10])        # true class distribution, used in the error function
        W = tf.Variable(tf.zeros([784,10]))     # weights
        b = tf.Variable(tf.zeros([10]))         # bias
        #y_label = tf.nn.softmax(tf.matmul(x_image,W)+b)     # y = softmax(Wx + b); differentiation is handled automatically
        keep_prob = tf.placeholder("float")
        #init_op = tf.initialize_all_variables()    # Initialize the variables (done below, after the whole graph is built)
        logits = inference(x_image,keep_prob)   # call inference() to build the model
        loss_value = loss(logits,y_label)       # call loss() to compute the loss
        train_op = training(loss_value,1e-4)    # call training() to get the training op (1e-4 is the learning rate)
        accur = accuracy(logits,y_label)        # call accuracy() to compute the accuracy
        init_op = tf.initialize_all_variables()
        sess = tf.Session()
        sess.run(init_op)
        #Setting the value to be displayed on TensorBoard
        summary_op = tf.merge_all_summaries()
        summary_writer = tf.train.SummaryWriter('/tmp/mnist_data', sess.graph_def)
        
        #Execution of training
        for step in range(20001):
            batch = mnist.train.next_batch(50)
            if step % 100 == 0:
                train_accur = sess.run(accur,feed_dict={x_image: batch[0], y_label: batch[1], keep_prob:1.0})
                print "step %d, training accuracy %g" % (step,train_accur)
            sess.run(train_op,feed_dict={x_image:batch[0],y_label:batch[1],keep_prob:0.5})# keep_prob is kept at 0.5 during training
            # After each step, append the values to be displayed on TensorBoard
            summary_str = sess.run(summary_op, feed_dict={
                x_image: batch[0],
                y_label: batch[1],
                keep_prob: 1.0})
            summary_writer.add_summary(summary_str, step)

        # Display the result
        print "test accuracy %g"%sess.run(accur, feed_dict={
                x_image:mnist.test.images,
                y_label:mnist.test.labels,
                keep_prob:1.0})

The only change is the contents of main, which now starts with with tf.Graph().as_default():. There are also various changes related to the summaries, but I do not understand them well enough to explain them yet... Sorry. It works for the time being.
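As a note to myself, the summary-related pieces in main boil down to roughly the following pattern (a sketch in the same TF 0.x API as the script above; the constant loss tensor is just a stand-in so the snippet runs on its own):

import tensorflow as tf

with tf.Graph().as_default():
    loss_value = tf.constant(1.0)                     # stand-in for a real loss tensor
    tf.scalar_summary("cross_entropy", loss_value)    # 1. tag each scalar you want to plot

    summary_op = tf.merge_all_summaries()             # 2. merge all tagged summaries into one op
    sess = tf.Session()
    sess.run(tf.initialize_all_variables())
    summary_writer = tf.train.SummaryWriter('/tmp/mnist_data', sess.graph_def)  # 3. event file + graph

    for step in range(3):
        summary_str = sess.run(summary_op)            # 4. evaluate the merged summaries...
        summary_writer.add_summary(summary_str, step) # ...and append them to the event file

tensorboard then simply reads the event files that SummaryWriter wrote to /tmp/mnist_data.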

After that, run it from the terminal as follows (this time the graph data is written to /tmp/mnist_data):

tensorboard --logdir /tmp/mnist_data/

Then open http://localhost:6006/ in Google Chrome and you can see some cool graphs.

There are still many parts I have not caught up on, so if you notice any problems, please leave a comment.
