The usage of TensorBoard has changed slightly

Overview

It's rather belated, but I recently tried running TensorBoard a bit. At the time, the page The most basic usage of TensorBoard was very helpful. The best part was that its sample was very simple and easy to try.

However, as you can see in the comments there, behavior apparently differs from version to version, so this post covers what happened when I ran the sample in the latest environments. Please look at the original page first and treat this one as a supplementary reference; since the basics are just copied and pasted from the original page, you may not need to refer to this at all. Maybe. Reference: The most basic usage of TensorBoard

Environment


$ pip list | grep tens
tensorboard          1.14.0
tensorflow           1.14.0
tensorflow-estimator 1.14.0

and
tensorboard          2.0.1
tensorflow           2.0.0
tensorflow-estimator 2.0.1

v1 series (1.14)

# Import required libraries
import tensorflow as tf
import numpy as np

# Variable definitions
dim = 5
LOGDIR = './logs'
x = tf.compat.v1.placeholder(tf.float32, [None, dim + 1], name='X')
w = tf.Variable(tf.zeros([dim+1,1]), name='weight')
y = tf.matmul(x,w, name='Y')
t = tf.compat.v1.placeholder(tf.float32, [None, 1], name='TEST')
sess = tf.compat.v1.Session()

# Define the loss function and training method
loss = tf.reduce_sum(tf.square(y - t))
train_step = tf.compat.v1.train.AdamOptimizer().minimize(loss)

# Define the values to track with TensorBoard
with tf.name_scope('summary'):
	# Keep the return value of summary.scalar
	loss_summary = tf.compat.v1.summary.scalar('loss', loss)
	if tf.io.gfile.exists(LOGDIR):
		tf.io.gfile.rmtree(LOGDIR)  # delete ./logs if it already exists
	writer = tf.compat.v1.summary.FileWriter(LOGDIR, sess.graph)

# Initialize the session and prepare the input data
sess.run(tf.compat.v1.global_variables_initializer())

train_t = np.array([5.2, 5.7, 8.6, 14.9, 18.2, 20.4, 25.5, 26.4, 22.8, 17.5, 11.1, 6.6])
train_t = train_t.reshape([12,1])
train_x = np.zeros([12, dim+1])
for row, month in enumerate(range(1, 13)):
	for col, n in enumerate(range(0, dim+1)):
		train_x[row][col] = month**n

# Training

i = 0
for _ in range(100000):
	i += 1
	sess.run(train_step, feed_dict={x: train_x, t: train_t})
	if i % 10000 == 0:
		# Pass the loss_summary obtained above
		s, loss_val = sess.run([loss_summary, loss], feed_dict={x: train_x, t: train_t})
		print('Step: %d, Loss: %f' % (i, loss_val))
		# This produces the SCALARS output
		writer.add_summary(s, global_step=i)
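The logs written above can be viewed by launching TensorBoard from the shell and pointing it at the log directory (by default it serves on port 6006):

$ tensorboard --logdir ./logs

Opening http://localhost:6006 in a browser then shows the views below.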

[Image: the GRAPHS view]

The graph is easier to read if you give each op a name with the name= argument.

[Image: the SCALARS graph]

Incidentally, if you write the import as follows, it seems you hardly need to change existing code at all. This style is described in the TensorFlow documentation, so it is a legitimate way to write it.

import tensorflow.compat.v1 as tf
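Note that to run this v1-style code unchanged on TensorFlow 2.x, the migration guide pairs that import with disabling the v2 runtime behavior; a minimal sketch:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # disables eager execution and other v2 defaults, so Session/placeholder code runs as in v1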

v2 series (2.0.0)

So, let's change the writing style here. A key point of v2 is that the way you write Variables has changed. One thing I still don't really understand is the Eager Tensors part; for the time being, I wrapped the session in a with statement.


# Import required libraries
import tensorflow.compat.v1 as tf
import numpy as np

# Variable definitions
dim = 5
LOGDIR = './logs'
with tf.Session() as sess:
	x = tf.placeholder(tf.float32, [None, dim + 1], name='X')
	with tf.variable_scope('weight'):
		w = tf.get_variable("weight", shape=[dim+1,1], initializer=tf.zeros_initializer())
	y = tf.matmul(x,w, name='Y')
	t = tf.placeholder(tf.float32, [None, 1], name='TEST')

	# Define the loss function and training method
	loss = tf.reduce_sum(tf.square(y - t))
	train_step = tf.train.AdamOptimizer().minimize(loss)

	# Define the values to track with TensorBoard
	with tf.name_scope('summary'):
		loss_summary = tf.summary.scalar('loss', loss)
		if tf.io.gfile.exists(LOGDIR):
			tf.io.gfile.rmtree(LOGDIR)  # delete ./logs if it already exists
		writer = tf.summary.FileWriter(LOGDIR, sess.graph)

	# Initialize the session and prepare the input data
	sess.run(tf.global_variables_initializer())

	train_t = np.array([5.2, 5.7, 8.6, 14.9, 18.2, 20.4, 25.5, 26.4, 22.8, 17.5, 11.1, 6.6])
	train_t = train_t.reshape([12,1])
	train_x = np.zeros([12, dim+1])
	for row, month in enumerate(range(1, 13)):
		for col, n in enumerate(range(0, dim+1)):
			train_x[row][col] = month**n

	# Training

	i = 0
	for _ in range(100000):
		i += 1
		sess.run(train_step, feed_dict={x: train_x, t: train_t})
		if i % 10000 == 0:
			s, loss_val = sess.run([loss_summary, loss], feed_dict={x: train_x, t: train_t})
			print('Step: %d, Loss: %f' % (i, loss_val))
			writer.add_summary(s, global_step=i)

[Image: the SCALARS graph]

It seems that the Y part is different.

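For reference, the native (non-compat) summary API in TensorFlow 2 drops Session and FileWriter entirely and logs eagerly with tf.summary.create_file_writer. A minimal sketch of logging a scalar this way; the loop and loss values here are dummies for illustration, and the logs go to a separate ./logs_v2 directory so they don't mix with the compat logs:

import tensorflow as tf  # 2.x, eager execution by default

writer = tf.summary.create_file_writer('./logs_v2')
with writer.as_default():
	for step in range(1, 11):
		loss_value = 1.0 / step  # dummy loss value for illustration
		tf.summary.scalar('loss', loss_value, step=step)
writer.flush()  # make sure the events are written to disk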
