I tried the TensorFlow official tutorial. It covers just a simple computation graph. Make sure you have TensorFlow installed.
Run `python` in the terminal (everything below runs in the interpreter).
>>>import tensorflow as tf
>>>node1 = tf.constant(3.0, dtype=tf.float32) # set the constant 3.0
>>>node2 = tf.constant(4.0)
>>>print(node1, node2)
The nodes produce 3.0 and 4.0 when evaluated. To actually evaluate a node, you must run the computation graph within a session.
print output
Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
Create a Session object and evaluate the computation graph (node1, node2) by calling its run method.
>>>sess = tf.Session()
>>>print(sess.run([node1, node2]))
print output
[3.0, 4.0]
A more complex computation (a new computation graph) is built by combining these nodes (node1, node2). For now, let's create an addition node.
>>>node3 = tf.add(node1, node2) # node1 + node2
>>>print("node3:", node3)
>>>print("sess.run(node3):", sess.run(node3))
print output
node3: Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3): 7.0
Since this graph as it stands only computes fixed constants, use a `placeholder` to accept external input.
>>>a = tf.placeholder(tf.float32)
>>>b = tf.placeholder(tf.float32)
>>>adder_node = a + b # + provides a shortcut for tf.add(a, b)
>>>print(sess.run(adder_node, {a: 3, b: 4.5}))
>>>print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
print output
7.5
[ 3. 7.]
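The second call shows that feeding lists makes the addition element-wise. As a rough plain-Python sketch (illustrative only, not part of the tutorial), the element-wise behavior looks like this:

```python
# Plain-Python sketch of the element-wise addition the placeholder
# graph performs when fed lists (illustrative, no TensorFlow needed).
a = [1, 3]
b = [2, 4]
result = [x + y for x, y in zip(a, b)]
print(result)  # → [3, 7]
```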
Let's make the calculation graph more complicated.
>>>add_and_triple = adder_node * 3.
>>>print(sess.run(add_and_triple, {a: 3, b: 4.5}))
Expressed as a mathematical formula, this computes (a + b) * 3.
print output
22.5
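The arithmetic can be checked outside the session as well. A plain-Python sketch of (a + b) * 3 with a = 3 and b = 4.5 (independent of TensorFlow):

```python
# Verify (a + b) * 3 by hand, matching the session output above.
a, b = 3, 4.5
add_and_triple = (a + b) * 3
print(add_and_triple)  # → 22.5
```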
Machine learning needs to be able to modify a graph so that the same input produces new output. Variables let you add trainable parameters to a graph.
When `tf.constant` is called, the constant is initialized immediately and its value can never change. A `tf.Variable`, on the other hand, is not initialized when `tf.Variable` is called; it must be initialized explicitly, and its value can be updated afterwards.
>>>W = tf.Variable([.3], dtype=tf.float32)
>>>b = tf.Variable([-.3], dtype=tf.float32)
>>>x = tf.placeholder(tf.float32)
>>>linear_model = W * x + b
>>>init = tf.global_variables_initializer()
>>>sess.run(init)
>>>print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
`init = tf.global_variables_initializer()` creates an op that initializes all variables in the TensorFlow program. The variables are not initialized until you call `sess.run(init)`.
print output
[ 0. 0.30000001 0.60000002 0.90000004]
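The output can be checked by hand: with W = 0.3 and b = -0.3, the model computes 0.3 * x - 0.3 for each x. A plain-Python sketch of the same arithmetic (not part of the tutorial; the float32 noise such as 0.30000001 in the session output is rounded away here):

```python
# Evaluate W * x + b for each x by hand; rounding hides float noise.
W, b = 0.3, -0.3
xs = [1, 2, 3, 4]
linear_model = [round(W * x + b, 1) for x in xs]
print(linear_model)  # → [0.0, 0.3, 0.6, 0.9]
```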
We need a placeholder to provide the desired values (the teacher data), and we also need a loss function, which measures how far the current model's output is from that teacher data. We use the standard loss for linear regression on the model's output and the teacher data: `linear_model - y` produces a vector in which each element is the corresponding error, `tf.square` squares each error, and `tf.reduce_sum` sums them into a single scalar that aggregates all the errors.
>>>y = tf.placeholder(tf.float32)
>>>squared_deltas = tf.square(linear_model - y)
>>>loss = tf.reduce_sum(squared_deltas)
>>>print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
The print shows the value of the loss function.
print output
23.66
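The value 23.66 can be reproduced by hand. With W = 0.3 and b = -0.3 the model outputs [0.0, 0.3, 0.6, 0.9]; subtracting y = [0, -1, -2, -3] gives errors [0.0, 1.3, 2.6, 3.9], whose squares sum to 23.66. A plain-Python sketch of the same loss (illustrative, no TensorFlow needed):

```python
# Sum of squared errors for the linear model, matching the 23.66 above.
W, b = 0.3, -0.3
xs = [1, 2, 3, 4]
ys = [0, -1, -2, -3]
loss = sum((W * x + b - y) ** 2 for x, y in zip(xs, ys))
print(round(loss, 2))  # → 23.66
```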
Looking at the input values `{x: [1, 2, 3, 4], y: [0, -1, -2, -3]}`, the loss should be zero when W = -1 and b = 1.
You can use `tf.assign` to change the weights and bias.
>>>fixW = tf.assign(W, [-1.])
>>>fixb = tf.assign(b, [1.])
>>>sess.run([fixW, fixb])
>>>print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
print output
0.0
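The zero loss can again be checked by hand: with W = -1 and b = 1 the model yields [0, -1, -2, -3], exactly matching y, so every squared error is zero. A plain-Python sketch (illustrative, no TensorFlow needed):

```python
# With W = -1 and b = 1 the model reproduces y exactly, so loss is 0.
W, b = -1, 1
xs = [1, 2, 3, 4]
ys = [0, -1, -2, -3]
loss = sum((W * x + b - y) ** 2 for x, y in zip(xs, ys))
print(loss)  # → 0
```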