I will walk through Getting Started With TensorFlow on the official page and explain the key points. Once you understand this content, the tutorial in "[Explanation for beginners] TensorFlow tutorial MNIST (for beginners)" should be quick to follow.
- Installing TensorFlow on Windows was easy even for Python beginners
- [Explanation for beginners] TensorFlow tutorial MNIST (for beginners)
- Visualize TensorFlow tutorial MNIST (for beginners) with TensorBoard
- TensorFlow API memo
- [Introduction to TensorBoard] Visualize TensorFlow processing to deepen understanding
- [Introduction to TensorBoard: image] Visualize TensorFlow image processing to deepen understanding
- [Introduction to TensorBoard: Projector] Make TensorFlow processing look cool
- [Explanation for beginners] TensorFlow Tutorial Deep MNIST
- Yuki Kashiwagi's facial features to understand TensorFlow [Part 1]
TensorFlow is a well-known open source library for machine learning created by Google. In TensorFlow, a **"Tensor" is simply a multidimensional array** (a general term, not something Google-specific). For more on tensors, the article "Tensors you should know before starting TensorFlow (Addition: To more general topics)" is excellent. TensorFlow is a good library for working with Tensors, and it uses the CPU and GPU as effectively as possible to optimize machine learning. For example, if you run the MNIST deep learning tutorial on a 2-core PC, it uses up the CPU almost completely, as shown in the figure below.
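Returning to the point that a Tensor is just a multidimensional array, here is a minimal sketch of my own (using the same TensorFlow 1.x API as the rest of this article; the variable names are arbitrary):

import tensorflow as tf

# Tensors are multidimensional arrays: each has a rank (number of dimensions) and a shape.
scalar = tf.constant(3.0)                       # rank 0, shape ()
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1, shape (3,)
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2, shape (2, 2)

print(scalar.shape, vector.shape, matrix.shape)  # () (3,) (2, 2)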
I will explain the concept while looking at the basic syntax of TensorFlow.
Computational Graph

TensorFlow is based on the concept of a Computational Graph, and its commands fall into two categories: building the Computational Graph and running it.
For example, defining two constants and printing them looks roughly like this in C (my C is a little rusty, so take it as the general idea):
const double node1 = 3.0;
const double node2 = 4.0;
printf("%f, %f", node1, node2);
If you write the equivalent in TensorFlow as if it were an ordinary language, it looks like this:
import tensorflow as tf

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
print(node1, node2)
However, running the code above produces the output below; the constants 3.0 and 4.0 are not printed. That is because the syntax above only **builds the Computational Graph**. **Think of the Computational Graph as a processing plan that takes parallelism and processing order into account.**
Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
To actually output the constant values, you need to **run the Computational Graph** with the following syntax.
sess = tf.Session()
print(sess.run([node1, node2]))
Now 3.0 and 4.0 are finally output, as shown below.
[3.0, 4.0]
It looks like this when illustrated.
Compared with processing in an ordinary language, it looks like the figure below. The distinguishing feature is the Computational Graph's two steps: construction (planning) and execution. For a process that simply registers records in a DB, ordinary processing on a single CPU is simpler, but for workloads that are complex and benefit greatly from parallelization, such as machine learning and deep learning, TensorFlow is the better fit. In my experience, Spark, Hadoop, SAP HANA, and similar systems also parallelize work in a form resembling a Computational Graph.
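To tie the two steps together, here is a minimal sketch of my own that extends the example above with an addition operation (tf.add is a standard TensorFlow 1.x operation; the variable names are mine):

import tensorflow as tf

# Step 1: build the Computational Graph. Nothing is computed yet;
# node3 is just a handle to a planned addition.
node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)
node3 = tf.add(node1, node2)
print(node3)  # Tensor("Add:0", shape=(), dtype=float32)

# Step 2: run the Computational Graph. Only now is the value actually computed.
sess = tf.Session()
print(sess.run(node3))  # 7.0

Printing node3 during the build step again shows only a Tensor handle; the value 7.0 appears only after sess.run executes the plan.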