Beginners read "Introduction to TensorFlow 2.0 for Experts"

What to do 👆

Read the TensorFlow 2.0 tutorial. As a memorandum, I investigate and supplement the parts of the tutorial that I did not understand, so it may be best to read this side by side with the tutorial itself.

Background

I had been using Chainer until now. By the time I got around to learning TensorFlow, the mainstream had already moved to 2.0 before I knew it. I don't know the TensorFlow 1.x series either, but since I'm starting now, I think it makes sense to begin with the 2.x series.

Environment

Windows 10, in an Anaconda virtual environment. The environment was built as follows:

# It seemed that only Python versions up to 3.6 were supported, so the version is 3.6
conda create -n tensorflow2.0 python=3.6 anaconda
conda install tensorflow==2.0.0
conda install jupyter
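
To check that the install worked (a quick sanity check I added; not part of the tutorial), the version can be printed from the activated environment:

# Should print 2.0.0 if the conda install above succeeded
import tensorflow as tf
print(tf.__version__)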

Commentary

First, import the TensorFlow library into your program.

from __future__ import absolute_import, division, print_function, unicode_literals

!pip install -q tensorflow-gpu==2.0.0-rc1
import tensorflow as tf

from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model

Load and prepare the MNIST dataset.

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()

# Normalize the pixel values (0 to 255) into the range 0 to 1
x_train, x_test = x_train / 255.0, x_test / 255.0

# tf.newaxis seems to add a new dimension (axis)
# Add a channel dimension to each piece of data (each image)
# CNNs seem to need this channel information; conversely, it may be a step you should not do for fully connected layers
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
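
Just to see what tf.newaxis did here (a small check of my own, using the standard MNIST shapes, not part of the tutorial), printing the shapes shows the added channel dimension:

print(x_train.shape)  # (60000, 28, 28, 1): a channel axis has been added at the end
print(x_test.shape)   # (10000, 28, 28, 1)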

Use tf.data to shuffle and batch the dataset.

# 10000 is the shuffle buffer size. Is 10,000 enough here??
# 32 is the batch size.
train_ds = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
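
To see what shuffle and batch actually produce (my own peek, not in the tutorial), you can take one batch from the dataset:

# Each element of train_ds is one shuffled batch of 32 images and 32 labels
for image, label in train_ds.take(1):
  print(image.shape)  # (32, 28, 28, 1)
  print(label.shape)  # (32,)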

Create a tf.keras model using Keras' model subclassing API.

# Inherit from the Model class
class MyModel(Model):
  # Define the layers in __init__
  def __init__(self):
    super(MyModel, self).__init__()
    # 32 filters, each 3x3, with ReLU as the activation function
    self.conv1 = Conv2D(32, 3, activation='relu')
    # Flatten the (2D feature map x number of filters) output into one dimension
    self.flatten = Flatten()
    self.d1 = Dense(128, activation='relu')
    # Since this is the output layer, the activation function is softmax
    self.d2 = Dense(10, activation='softmax')

  # Is x the batch of input images??
  # -> It seems to be tf.Tensor data.
  def call(self, x):
    x = self.conv1(x)
    x = self.flatten(x)
    x = self.d1(x)
    # Return the result computed by the network
    return self.d2(x)

model = MyModel()
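
As a quick sanity check (a sketch of mine, not in the tutorial), calling the model on one batch shows that the output has shape (batch_size, 10), one softmax probability per class:

for image, label in train_ds.take(1):
  predictions = model(image)
  print(predictions.shape)  # (32, 10)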

Choose an optimizer and loss function for training.

# The tried-and-true cross-entropy loss
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
# Adam, the much-talked-about (a while ago now?) optimizer
optimizer = tf.keras.optimizers.Adam()
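
SparseCategoricalCrossentropy takes integer class labels directly, so the labels do not need to be one-hot encoded. A tiny example I tried to convince myself (not from the tutorial):

# The label is an integer, the prediction is a softmax probability distribution
y_true = [1]
y_pred = [[0.1, 0.8, 0.1]]
print(loss_object(y_true, y_pred).numpy())  # about 0.223, i.e. -log(0.8)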

Select metrics to measure the model's loss and accuracy. These metrics accumulate values over the epochs and then output the overall result.

# Instance that keeps a running average of the values it is given
train_loss = tf.keras.metrics.Mean(name='train_loss')
# Instance that computes the accuracy (rate of correct answers)
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')

test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
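
The Mean metric just keeps a running average of every value passed to it, and result() returns the current average. A minimal check of my own (not from the tutorial):

m = tf.keras.metrics.Mean()
m(2.0)
m(4.0)
print(m.result().numpy())  # 3.0: the average of all values so far
m.reset_states()           # clears the accumulated values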

Train your model with tf.GradientTape.

@tf.function
def train_step(image, label):
  # Computations inside the with block are recorded; they are pulled out later via tape.gradient()
  with tf.GradientTape() as tape:
    # Make predictions from the images and compute the loss
    predictions = model(image)
    loss = loss_object(label, predictions)
  # Pass the weights via model.trainable_variables
  gradients = tape.gradient(loss, model.trainable_variables)
  # Update the weights with the optimizer
  optimizer.apply_gradients(zip(gradients, model.trainable_variables))

  # Accumulate the loss and accuracy with the metric instances defined earlier
  train_loss(loss)
  train_accuracy(label, predictions)
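
To get a feel for what tf.GradientTape records, here is a minimal example separate from the tutorial: it differentiates y = x * x and recovers dy/dx = 2x.

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
  y = x * x              # operations inside the block are recorded on the tape
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())     # 6.0 = 2 * 3.0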

Reference: Introduction to TensorFlow 2.0 for low-level API users

About @tf.function: there are two main ways to run a neural network. With define-and-run, you define the computation graph first and then push data through it to get results. With define-by-run, the graph is built and executed at the same time, just like ordinary Python code (evaluating x + y both defines and runs it), and the result is obtained immediately (source). TensorFlow 2.0 adopts define-by-run because it is easier to write, but define-by-run is slow, so adding @tf.function apparently makes the code run in the same way as define-and-run. For details, see verification articles and the like.
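
As a rough illustration (my own sketch, not from the tutorial or the reference article), the same computation can run eagerly (define-by-run) or be traced into a graph with @tf.function (closer to define-and-run):

def add_eager(x, y):
  return x + y           # executed step by step, like ordinary Python code

@tf.function
def add_graph(x, y):
  return x + y           # traced into a graph once, then the graph is executed

print(add_eager(tf.constant(1), tf.constant(2)))  # tf.Tensor(3, shape=(), dtype=int32)
print(add_graph(tf.constant(1), tf.constant(2)))  # same result, but run as a graph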

Then test the model.

@tf.function
def test_step(image, label):
  predictions = model(image)
  t_loss = loss_object(label, predictions)

  test_loss(t_loss)
  test_accuracy(label, predictions)

EPOCHS = 5

for epoch in range(EPOCHS):
  # Run the training step on all images, one batch at a time
  for image, label in train_ds:
    train_step(image, label)

  # Run the test step on all images, one batch at a time
  for test_image, test_label in test_ds:
    test_step(test_image, test_label)

  # Display the results
  template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
  print(template.format(epoch+1,
                        train_loss.result(),
                        train_accuracy.result()*100,
                        test_loss.result(),
                        test_accuracy.result()*100))

Result

WARNING:tensorflow:Layer my_model is casting an input tensor from dtype float64 to the layer's dtype of float32, which is new behavior in TensorFlow 2.  The layer has dtype float32 because it's dtype defaults to floatx.

If you intended to run this layer in float32, you can safely ignore this warning. If in doubt, this warning is likely only an issue if you are porting a TensorFlow 1.X model to TensorFlow 2.

To change all layers to have dtype float64 by default, call `tf.keras.backend.set_floatx('float64')`. To change just this layer, pass dtype='float64' to the layer constructor. If you are the author of this layer, you can disable autocasting by passing autocast=False to the base Layer constructor.

Epoch 1, Loss: 0.14364087581634521, Accuracy: 95.62000274658203, Test Loss: 0.06367728859186172, Test Accuracy: 97.88999938964844
Epoch 2, Loss: 0.09373863786458969, Accuracy: 97.1483383178711, Test Loss: 0.056961096823215485, Test Accuracy: 98.07500457763672
Epoch 3, Loss: 0.07041392475366592, Accuracy: 97.84444427490234, Test Loss: 0.05455232039093971, Test Accuracy: 98.17666625976562
Epoch 4, Loss: 0.05662970244884491, Accuracy: 98.25749969482422, Test Loss: 0.05664524435997009, Test Accuracy: 98.19499969482422
Epoch 5, Loss: 0.047065384685993195, Accuracy: 98.54966735839844, Test Loss: 0.057572390884160995, Test Accuracy: 98.23799896240234

P.S. It was convenient to register ``` in my IME dictionary (for typing code blocks).
