Until I got a DNN to learn true/false image classification using Colab

Introduction

I wanted to train a model on Colaboratory to classify images as true or false, but the information I needed was scattered across many places, so I decided to **just paste the finished product as is**. This is aimed at people who more or less understand Python.

Here it is: https://colab.research.google.com/drive/1ETjxVKCA3zv391tEAY5RHM_cyipIA9D-?hl=ja

I don't think anyone can use it exactly as is, but I hope it helps someone. **I put it together without really knowing Jupyter Notebook or Python, so I'd appreciate any pointers.**

What I wanted to do

This is what I wanted to do.

- Use Colaboratory
- I want to classify images as true or false
- Deep learning
- The training data (images) is on Google Drive
- Each image is a PNG, 100px wide and 150px tall
- The images are compressed into a ZIP file with the folder structure below
- I want to see the training history as a graph, and save both the graph and the model to Google Drive

ZIP file folder structure

What the notebook does

Here is a brief explanation of what each cell does. Visit the Colaboratory notebook to see the actual code.

First cell (Google Drive mount)

Mount Google Drive. It does exactly what it says. It seems that many people use GitHub for this instead; with Google Drive, the authentication step is a hassle every time.
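For reference, mounting Drive in a Colab cell uses the standard google.colab API:

```python
# Mount Google Drive at /content/drive; Colab prompts for authorization each session.
from google.colab import drive

drive.mount('/content/drive')
```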

Second cell (extracting the training data)

Extract the training-data ZIP file from Google Drive to local disk. It is possible to read each training image directly from Google Drive, but it is slow. It is much faster to keep the images zipped, extract them all to disk at once, and then read them from disk, which is why I do it this way.
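A minimal sketch of this step, assuming hypothetical paths for the ZIP on Drive and the extraction directory (the real paths are in the notebook):

```python
import zipfile

# Hypothetical paths: adjust to wherever the ZIP actually lives on your Drive.
zip_path = '/content/drive/MyDrive/train_images.zip'
extract_dir = '/content/train_images'

# Extract everything to local disk once; reading from local disk afterwards
# is much faster than reading each image over the Drive mount.
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall(extract_dir)
```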

Third cell (reading training data)

I wanted the training data and the test data to each contain the same balance of true and false images, so I wrote a generator, generate_paths, that shuffles the image paths and yields false and true paths alternately. For example, given these 10 images:

false: 0-1, 0-2, 0-3, 0-4, 0-5
true: 1-1, 1-2, 1-3, 1-4, 1-5

it returns them like this:

0-3, 1-5, 0-1, 1-1, 0-2, 1-3, 0-4, 1-4, 0-5, 1-2

It might have been easier to understand if I had simply shuffled a list instead of using a generator.
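A rough sketch of such a generator, assuming hypothetical folder names 0/ (false) and 1/ (true); the actual generate_paths in the notebook may differ:

```python
import random
from pathlib import Path

def generate_paths(false_dir, true_dir, seed=None):
    """Shuffle both classes, then yield paths alternating false, true, false, true, ..."""
    false_paths = list(Path(false_dir).glob('*.png'))
    true_paths = list(Path(true_dir).glob('*.png'))
    rng = random.Random(seed)
    rng.shuffle(false_paths)
    rng.shuffle(true_paths)
    for false_path, true_path in zip(false_paths, true_paths):
        yield false_path
        yield true_path
```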

The load_data method actually reads the images and turns them into arrays. Most of the images I'm dealing with this time have a white background, so I invert them with 1 - X / 255 to get floating-point values between 0 and 1; most values end up near 0, with the occasional 1. There are also color images, so the number of input channels is 3.
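A sketch of what load_data might look like under those assumptions (PIL + NumPy; the label_fn argument is hypothetical, e.g. returning 1 if the path sits under the "true" folder):

```python
import numpy as np
from PIL import Image

def load_data(paths, label_fn):
    """Read images into float arrays, inverted so the white background becomes ~0."""
    images, labels = [], []
    for path in paths:
        img = Image.open(path).convert('RGB')   # 3 input channels
        x = np.asarray(img, dtype=np.float32)
        images.append(1.0 - x / 255.0)          # invert: white -> 0, dark -> up to 1
        labels.append(label_fn(path))
    return np.stack(images), np.asarray(labels, dtype=np.float32)
```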

Fourth cell (learning)

This is where the actual training happens. Don't forget to record the time when it finishes, so you know when training completed.
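The model itself isn't shown here (see the notebook), so the following is only a minimal sketch of what this cell might contain, assuming Keras and a small CNN with a sigmoid output for true/false classification; the input shape of 150x100x3 follows from the image size above:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal CNN for true/false classification; the real architecture is in the notebook.
model = models.Sequential([
    layers.Input(shape=(150, 100, 3)),
    layers.Conv2D(32, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation='relu'),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

# x_train, y_train, x_test, y_test are the arrays produced by load_data above.
history = model.fit(x_train, y_train, epochs=20, validation_data=(x_test, y_test))
```

After fit() returns, the snippet below just records when training finished.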

import datetime

# Record the finish time in JST so it's clear when training completed.
tz_jst = datetime.timezone(datetime.timedelta(hours=9), name="JST")
now = datetime.datetime.now(tz_jst)
str_now = now.strftime("%Y/%m/%d %H:%M:%S")
print(f"Training finished: {str_now}")

Save the entire model and its architecture.
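Assuming Keras, saving the whole model (architecture plus weights) to Drive can be a single call; the output path here is hypothetical:

```python
# Saves the architecture, weights and optimizer state in one file on Drive.
model.save('/content/drive/MyDrive/model.h5')
```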

Fifth cell (drawing the graph)

Display the graph inline with %matplotlib inline and save it to Google Drive.
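A sketch of that cell, assuming the Keras History object from the training cell and a hypothetical output path on Drive (older Keras versions use the keys 'acc'/'val_acc' instead of 'accuracy'/'val_accuracy'):

```python
%matplotlib inline
import matplotlib.pyplot as plt

# Plot training / validation accuracy from the Keras History object.
plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()

# Save the figure to Google Drive, then display it inline.
plt.savefig('/content/drive/MyDrive/history.png')
plt.show()
```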

Conclusion

Colaboratory is very convenient: GPUs and so on can be used for free, and it's nice to be able to access it from anywhere, even from an iPhone. It's a shame, though, that the GPU and TPU can't be used during the daytime.

As for the results: accuracy was just over 0.7... I went through a lot of trial and error, but it's a bit underwhelming... I can't say what the training data actually is, but I choose to believe this result is down to the training data being bad (or rather, poor quality).
