Convenient library of Tensorflow TF-Slim

I have written about TensorFlow wrapper libraries before:

- MNIST with Keras (TensorFlow backend)
- MNIST with skflow

~~Keras seems easy to use, but it lacked some layers such as Deconvolution (Transposed Convolution), so~~ (on closer inspection, you can get the same effect with UpSampling2D and Convolution2D) While going through the tutorials to study skflow and raw TensorFlow, I found a hidden but useful library.
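As an aside, here is a minimal sketch of that UpSampling2D + Convolution2D combination using the Keras 1.x API (the layer sizes and input shape are illustrative, not from the original post, and it assumes the TensorFlow channels-last dim ordering):

python

from keras.models import Sequential
from keras.layers import Convolution2D, UpSampling2D

# Upsample the feature map and then convolve, as a substitute for a
# transposed convolution (deconvolution) layer.
model = Sequential()
model.add(UpSampling2D(size=(2, 2), input_shape=(14, 14, 64)))
model.add(Convolution2D(32, 3, 3, border_mode='same', activation='relu'))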

TensorFlow-Slim

It seems to have been added to the GitHub repository starting with version 0.9.

Some of the things in contrib aren't announced very well but are quite nice, so I think it's worth taking a look. Note that contrib is not officially supported, and I don't know what will happen to it in the future.

tensorflow/tensorflow/contrib/

The following content may change significantly in the future.

Import of TF-Slim

python


import tensorflow as tf
from tensorflow.contrib import slim

Weight initialization

python


weights = slim.variables.variable('weights',
                                  shape=[10, 10, 3, 3],
                                  initializer=tf.truncated_normal_initializer(stddev=0.1),
                                  regularizer=slim.l2_regularizer(0.05),
                                  device='/CPU:0')

The README says this works, but the variable method doesn't seem to be implemented in contrib at the moment, so you may have to copy what is in the Inception repository and import that instead.

models/inception/inception/
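A minimal sketch of that workaround (the local directory layout is hypothetical: it assumes you copied models/inception/inception/slim into your project as ./slim):

python

import tensorflow as tf
# Hypothetical layout: inception's slim directory copied into the project
# as ./slim, so its modules can be imported directly.
from slim import variables

weights = variables.variable('weights',
                             shape=[10, 10, 3, 3],
                             initializer=tf.truncated_normal_initializer(stddev=0.1),
                             device='/cpu:0')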

Layer definition

~~(conv + pool) * 5 followed by fc * 3 can be written as follows~~ As stated in the README, this is VGG-16: (conv * 2, pool), (conv * 2, pool), (conv * 3, pool), (conv * 3, pool), (conv * 3, pool), fc * 3.

python


# VGG-16: (conv * 2, pool) x 2, (conv * 3, pool) x 3, fc * 3.
# Wrapped in a function so that the final return is valid.
def vgg16(inputs):
  with slim.arg_scope([slim.ops.conv2d, slim.ops.fc], stddev=0.01, weight_decay=0.0005):
    net = slim.ops.repeat_op(2, inputs, slim.ops.conv2d, 64, [3, 3], scope='conv1')
    net = slim.ops.max_pool(net, [2, 2], scope='pool1')
    net = slim.ops.repeat_op(2, net, slim.ops.conv2d, 128, [3, 3], scope='conv2')
    net = slim.ops.max_pool(net, [2, 2], scope='pool2')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
    net = slim.ops.max_pool(net, [2, 2], scope='pool3')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv4')
    net = slim.ops.max_pool(net, [2, 2], scope='pool4')
    net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 512, [3, 3], scope='conv5')
    net = slim.ops.max_pool(net, [2, 2], scope='pool5')
    net = slim.ops.flatten(net, scope='flatten5')
    net = slim.ops.fc(net, 4096, scope='fc6')
    net = slim.ops.dropout(net, 0.5, scope='dropout6')
    net = slim.ops.fc(net, 4096, scope='fc7')
    net = slim.ops.dropout(net, 0.5, scope='dropout7')
    net = slim.ops.fc(net, 1000, activation=None, scope='fc8')
  return net
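
A minimal usage sketch of the function above (the input shape is an assumption, not from the original post):

python

# 224x224 RGB input, the usual size for VGG-16.
inputs = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
logits = vgg16(inputs)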

You can also write conv * 3 + pool more compactly like this:

python


net = ...
for i in range(3):
  net = slim.ops.conv2d(net, 256, [3, 3], scope='conv3_%d' % (i + 1))
net = slim.ops.max_pool(net, [2, 2], scope='pool3')

Furthermore, if you use the repeat_op helper provided by slim:

python


net = slim.ops.repeat_op(3, net, slim.ops.conv2d, 256, [3, 3], scope='conv3')
net = slim.ops.max_pool(net, [2, 2], scope='pool3')

It seems this will automatically set the scopes to 'conv3/conv3_1', 'conv3/conv3_2', 'conv3/conv3_3'.
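
If you want to confirm the generated scopes, one simple way (just a sketch, not from the original post) is to list the variables in the graph:

python

# Print every variable name in the graph; the repeated conv layers should
# appear as conv3/conv3_1/..., conv3/conv3_2/..., conv3/conv3_3/...
for v in tf.all_variables():
  print(v.name)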

Similarly, fc * 3 like the following

python


x = slim.ops.fc(x, 32, scope='fc/fc_1')
x = slim.ops.fc(x, 64, scope='fc/fc_2')
x = slim.ops.fc(x, 128, scope='fc/fc_3')

can be written in one line using stack:

python


slim.stack(x, slim.fully_connected, [32, 64, 128], scope='fc')

Of course, the same goes for conv:

python


slim.stack(x, slim.ops.conv2d, [(32, [3, 3]), (32, [1, 1]), (64, [3, 3]), (64, [1, 1])], scope='core')

The contrib README says this is OK, but the implementation doesn't exist yet. (It is not mentioned in the Inception README.)
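Until stack is available, a plain loop (just a sketch, with illustrative sizes and scope names) gives the same effect:

python

x = ...
# Rough equivalent of the fc stack example above: each fc layer gets its
# own sub-scope under 'fc'.
for i, size in enumerate([32, 64, 128]):
  x = slim.ops.fc(x, size, scope='fc/fc_%d' % (i + 1))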

scope

For example, suppose you have three conv layers like this:

python


padding = 'SAME'
initializer = tf.truncated_normal_initializer(stddev=0.01)
regularizer = slim.losses.l2_regularizer(0.0005)
net = slim.ops.conv2d(inputs, 64, [11, 11], 4,
                      padding=padding,
                      weights_initializer=initializer,
                      weights_regularizer=regularizer,
                      scope='conv1')
net = slim.ops.conv2d(net, 128, [11, 11],
                      padding='VALID',
                      weights_initializer=initializer,
                      weights_regularizer=regularizer,
                      scope='conv2')
net = slim.ops.conv2d(net, 256, [11, 11],
                      padding=padding,
                      weights_initializer=initializer,
                      weights_regularizer=regularizer,
                      scope='conv3')

If you use the arg_scope provided by slim, you only need to write the arguments that differ and can omit the rest.

python


with slim.arg_scope([slim.ops.conv2d], padding='SAME',
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.losses.l2_regularizer(0.0005)):
  net = slim.ops.conv2d(inputs, 64, [11, 11], scope='conv1')
  net = slim.ops.conv2d(net, 128, [11, 11], padding='VALID', scope='conv2')
  net = slim.ops.conv2d(net, 256, [11, 11], scope='conv3')

You can also nest arg_scopes:

python


with slim.arg_scope([slim.ops.conv2d, slim.ops.fc],
                    activation_fn=tf.nn.relu,
                    weights_initializer=tf.truncated_normal_initializer(stddev=0.01),
                    weights_regularizer=slim.losses.l2_regularizer(0.0005)):
  with slim.arg_scope([slim.ops.conv2d], stride=1, padding='SAME'):
    net = slim.ops.conv2d(inputs, 64, [11, 11], 4, padding='VALID', scope='conv1')
    net = slim.ops.conv2d(net, 256, [5, 5],
                          weights_initializer=tf.truncated_normal_initializer(stddev=0.03),
                          scope='conv2')
    net = slim.ops.fc(net, 1000, activation_fn=None, scope='fc')

After defining what is common to both conv and fc, you can then define what applies only to conv.

Loss function

Defining the loss is just this:

python


loss = slim.losses.cross_entropy_loss(predictions, labels)
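
The loss created this way is registered in slim's loss collection, so the overall loss can be assembled roughly like this (a sketch assuming the LOSSES_COLLECTION key used by the Inception version of slim):

python

# Sum everything slim has registered in its loss collection (cross-entropy
# plus any regularization losses) into a single scalar.
losses = tf.get_collection(slim.losses.LOSSES_COLLECTION)
total_loss = tf.add_n(losses, name='total_loss')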

Training

slim.learning is not found in the Inception version, but it does exist in the contrib version of slim.

python


g = tf.Graph()

# Define the model and loss function
# ...

# Sum everything registered in slim's loss collection into a single scalar.
total_loss = tf.add_n(tf.get_collection(slim.losses.LOSSES_COLLECTION))
optimizer = tf.train.GradientDescentOptimizer(learning_rate)

train_op = slim.learning.create_train_op(total_loss, optimizer)
logdir = './stored_log/'

slim.learning.train(
    train_op,
    logdir,
    number_of_steps=1000,
    save_summaries_secs=300,
    save_interval_secs=600)

Impressions

I feel it will be quite convenient once it can actually be used. I'd be pretty happy if it becomes properly usable in v0.10.
