This is a memo I left for my own reference.
Concept of graphs, operation nodes, and tensors
(Figure: a single layer made up of multiple neurons, computing φ(X * W + b))
A graph consists of nodes and edges connecting them. Node types include operation nodes, variable nodes, placeholder nodes, etc.
A tensor is the quantity that flows through the graph: an n-dimensional array or list.
Variable: tf.Variable
Matrix multiplication: tf.matmul
Applying φ: tf.nn.relu
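To remind myself what "n-dimensional" means here, a minimal sketch (the values are made up for illustration): constants of every rank are tensors.

tensor.py
import tensorflow as tf

scalar = tf.constant(3.0)                       # rank-0 tensor
vector = tf.constant([1.0, 2.0, 3.0])           # rank-1 tensor
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank-2 tensor

with tf.Session() as sess:
    print(sess.run(matrix))  # evaluating the node yields its value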
graph.py
import tensorflow as tf

# Input that this layer receives (the shape [None, 784] is illustrative)
images = tf.placeholder(tf.float32, [None, 784])
# Definition of the variable that holds W
weights = tf.Variable(tf.random_normal([784, 200]))
# Definition of the variable that holds b
bias = tf.Variable(tf.zeros([200]))
# Define the layer function φ(X*W+b)
# Here φ is ReLU
# hidden1 is the output of this layer
# 1st layer
hidden1 = tf.nn.relu(tf.matmul(images, weights) + bias)
# 2nd layer (each layer needs its own W and b, or the shapes won't match)
weights2 = tf.Variable(tf.random_normal([200, 200]))
bias2 = tf.Variable(tf.zeros([200]))
hidden2 = tf.nn.relu(tf.matmul(hidden1, weights2) + bias2)
# images, weights, bias, and hidden1 are all tensors
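A usage sketch for the graph above (the random batch is just a stand-in for real input data): nothing flows until the graph is run in a Session, with the placeholder fed via feed_dict.

run_graph.py
import numpy as np

batch = np.random.rand(32, 784).astype("float32")  # stand-in input batch
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    out = sess.run(hidden2, feed_dict={images: batch})
    print(out.shape)  # (32, 200)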
init.py
w = tf.Variable(tf.random_normal([784, 200], stddev=0.35), name="weights")
b = tf.Variable(tf.zeros([200]), name="biases")
# Operation that initializes these variables
# Caution! Nothing has been executed yet; this only adds a node to the graph.
init_op = tf.initialize_all_variables()
# (in newer TF 1.x releases this is spelled tf.global_variables_initializer())
# Call this initialization op after launching the model:
# the defined model only actually runs inside a Session, via sess.run
with tf.Session() as sess:
    # Run the init operation.
    sess.run(init_op)
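A related pattern worth remembering (w2 is a name I made up for the sketch): one variable can be initialized from another's initial value via initialized_value(), which avoids initialization-order problems.

init_from.py
# w2 starts at twice w's initial value
w2 = tf.Variable(w.initialized_value() * 2.0, name="w2")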
** Saving and restoring variables
save.py
# Create variables
v1 = tf.Variable(..., name="v1")
v2 = tf.Variable(..., name="v2")
# init_op node that initializes the variables
init_op = tf.initialize_all_variables()
# Add a save node that saves and restores all variables
saver = tf.train.Saver()
# After launching the model, initialize the variables, do some work,
# then save the variables to disk
with tf.Session() as sess:
    sess.run(init_op)
    # Do something with the model
    ##########
    # Save the variables to disk
    save_path = saver.save(sess, "/tmp/model.ckpt")
    print("Model saved in file: %s" % save_path)
# Restoring variables: restore assigns the saved values,
# so there is no need to run init_op first
saver.restore(sess, "/tmp/model.ckpt")
print("Model restored")
GradientDescentOptimizer(): an operation for parameter optimization. Minimize the loss function with this optimizer.
opt.py
###For numerical prediction###
loss = tf.reduce_mean(tf.square(y - y_data))
# Gradient descent with learning rate 0.5
optimizer = tf.train.GradientDescentOptimizer(0.5)
train = optimizer.minimize(loss)
###In case of classification###
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(cross_entropy)
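How train is actually driven (a sketch: here x is assumed to be the input placeholder feeding the model output y, and batch_xs/batch_ys stand in for real data batches). Each sess.run(train) performs one gradient-descent update.

train_loop.py
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for step in range(1000):
        # one parameter update per call
        sess.run(train, feed_dict={x: batch_xs, y_: batch_ys})
        if step % 100 == 0:
            print(sess.run(cross_entropy, feed_dict={x: batch_xs, y_: batch_ys}))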