This is the day-23 article of the Tech-Circle Hands on Advent Calendar 2016. (Posting was delayed.)
The day-22 article was [@morio36](http://qiita.com/morio36)'s ["Automatic Theta shooting when approaching a specific location using Wikitude"](http://qiita.com/morio36/items/3a461e2f26f82e23d059), a very interesting article about AR using Theta!
Do you like Pokemon? New titles have been released recently, and there now seem to be 801 species in total.
As the number grows, more and more Pokemon look alike. Isn't it getting hard to tell them apart correctly? Images quoted from: http://www.pokemon.co.jp/ex/sun_moon/ http://www.pokemon.jp/
So in this article I try to identify Pokemon using deep learning, which is all the rage these days.
The purpose of this article is to implement and run deep learning for the first time. There is no detailed explanation of the methods; instead, I hope that the things I discovered and the points I stumbled on while actually getting it to run will spark your interest.
The goal is to train on images of "Pikachu" and "Dedenne" and correctly distinguish the two at test time.
I used the following environment:
- Mac (OS X El Capitan 10.11.6)
- Python 3.5.0
I use TensorFlow, published by Google, as the deep learning library, and OpenCV as the image recognition library.
First, install Anaconda, a package management system for Python. pip is another common Python package manager, but Anaconda is recommended here because of how OpenCV will be installed. (I carelessly lost a few hours over this choice.) The reason is described later.
For the installation procedure I referred to the article here; install the Python 3 version of Anaconda.
Homebrew and pyenv also need to be installed, but I was able to proceed without any problems.
For an introduction to Python itself, I recommend the article here, so please take a look!
You could implement deep learning from scratch, but here I use the TensorFlow library. I referred to [here](http://pythondatascience.plavox.info/tensorflow%E3%81%A7%E3%83%87%E3%82%A3%E3%83%BC%E3%83%97%E3%83%A9%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0/ubuntu-linux%E3%81%AB%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB/).
One small stumbling point was activating the environment. In my environment the command in the article did not work, so I activated it as follows:
source ~/.pyenv/versions/<Anaconda version>/bin/activate <virtual environment name>
Also, since that article targets Linux, you need to substitute the Mac instructions when installing TensorFlow.
You are OK if you can import tensorflow in Python's interactive mode.
$ python
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>>
This was the part I struggled with most. For Python 3 you need to install OpenCV 3 (OpenCV 2 for Python 2).
OpenCV 3 can supposedly also be installed with the following Homebrew command, but I just couldn't get it to work. (From what I found on the internet, it does seem to work for some people.)
brew install opencv3 --with-python3
This is where Anaconda comes in. OpenCV 3 is already provided as an Anaconda package, so you can solve this quickly by installing it with the following command!
conda install -c https://conda.anaconda.org/menpo opencv3
Checking after installation, it seems to be working. (Both OpenCV 2 and OpenCV 3 are imported as cv2.)
$ python
Python 3.5.2 |Continuum Analytics, Inc.| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import cv2
>>> cv2.__version__
'3.1.0'
>>>
The environment is now ready!
When it comes to images in deep learning, the Convolutional Neural Network (CNN) is the orthodox method, so I use a CNN for classification this time as well. In addition, OpenCV is used to unify the size of the training images.
This time, for convenience, I mostly reused the code from "Identify the production company of the anime Yuruyuri with TensorFlow".
Originally, working out the implementation myself was supposed to be the fun part, but I made simply getting it to run my first goal. The accuracy changes depending on the filter settings and layer configuration, which are the heart of a CNN. I hope to deepen my understanding by studying further and comparing against this implementation.
I fixed the parts written for Python 2 and the runtime paths:
・ Lines 156 and 175: the training and test data were assumed to be images placed in the same directory as the script, so I changed these to arbitrary locations.
・ Line 214: when computing the number of training iterations per batch, the value passed to range becomes a float and raises an error, so I cast it to int.
・ Lines 228 and 238: print became a function in Python 3, so parentheses were added.
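The int cast comes from Python 3 changing `/` to true division. A minimal sketch of the kind of fix involved (the variable names and counts here are made up for illustration, not taken from the actual script):

```python
num_examples = 110   # hypothetical number of training images
batch_size = 10

# Python 2: num_examples / batch_size was integer division, so range() accepted it.
# Python 3: / returns a float (11.0), and range(11.0) raises TypeError.
steps_per_epoch = int(num_examples / batch_size)   # the applied fix
# equivalently, floor division avoids the cast: num_examples // batch_size

for i in range(steps_per_epoch):
    batch_start = i * batch_size
    # ... feed the slice [batch_start : batch_start + batch_size] to the model ...

print(steps_per_epoch)  # 11
```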
The training image data is placed in the directory specified by the fixes above. I collected 100 images in total of the two Pokemon (Pikachu and Dedenne) via Google image search. I excluded images in the search results that were irrelevant to the keywords, which was quite tedious.
In addition to the training images, I prepared 100 test images in total.
[What I could not do this time]
・ The amount of training data is overwhelmingly insufficient. Data can be inflated with image-processing augmentation (flipping, brightness adjustment, etc.), but I did not do this.
・ Image resizing is done inside the implementation, but preprocessing such as normalization, which is usually recommended, is not performed.
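For reference, the augmentation mentioned above (flipping and brightness adjustment) can be sketched in a few lines of NumPy. This is not part of this article's actual pipeline, just an illustration of how each collected image could yield several extra training samples:

```python
import numpy as np

def augment(img):
    """Return simple variants of an image: a horizontal flip and brightness shifts."""
    flipped = np.fliplr(img)  # mirror the image left-right
    # shift brightness up/down, clipping so values stay in the valid 0-255 range
    brighter = np.clip(img.astype(np.int16) + 30, 0, 255).astype(np.uint8)
    darker = np.clip(img.astype(np.int16) - 30, 0, 255).astype(np.uint8)
    return [flipped, brighter, darker]

img = np.zeros((28, 28, 3), dtype=np.uint8)
variants = augment(img)
print(len(variants))  # 3: each original image yields three extra samples
```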
The image lists are placed in the same directory as the executable file, as train.txt and test.txt, in the following format:
File name Label
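A hypothetical train.txt might look like the lines below (the file names and the label assignment of 0 for Pikachu and 1 for Dedenne are my own invention), and parsing it takes only a few lines:

```python
# Hypothetical contents of train.txt: "file name" then "label", space-separated.
sample = """pikachu_001.jpg 0
dedenne_001.jpg 1"""

pairs = []
for line in sample.splitlines():
    path, label = line.split()      # split on whitespace into file name and label
    pairs.append((path, int(label)))  # labels are used as integer class indices

print(pairs)  # [('pikachu_001.jpg', 0), ('dedenne_001.jpg', 1)]
```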
Let's run it!
$ python pika_pre.py
step 0, training accuracy 0.590909
step 1, training accuracy 0.409091
step 2, training accuracy 0.654545
step 3, training accuracy 0.7
step 4, training accuracy 0.745455
step 5, training accuracy 0.772727
step 6, training accuracy 0.827273
step 7, training accuracy 0.890909
step 8, training accuracy 0.918182
step 9, training accuracy 0.845455
step 10, training accuracy 0.881818
step 11, training accuracy 0.954545
step 12, training accuracy 0.954545
step 13, training accuracy 0.954545
step 14, training accuracy 0.972727
step 15, training accuracy 0.972727
step 16, training accuracy 0.972727
step 17, training accuracy 0.963636
step 18, training accuracy 0.963636
step 19, training accuracy 0.972727
step 20, training accuracy 0.972727
step 21, training accuracy 0.981818
step 22, training accuracy 0.990909
step 23, training accuracy 1
step 24, training accuracy 1
(output omitted)
test accuracy 0.838095
The fraction of correct answers during training is output as training accuracy. It is 59% at step 0, but reaches 100% from step 23 onward.
The correct answer rate when judging the test data after training finished was 83%.
This feels a little suspicious. Given the quantity and quality of the training data, I did not expect such high accuracy so quickly.
In TensorFlow, you can easily visualize the learning process and data flow using a feature called TensorBoard, so let's check that as well. Run the following command, then open the displayed URL in your browser.
tensorboard --logdir /tmp/data
Starting TensorBoard b'29' on port 6006
(You can navigate to http://192.168.11.7:6006)
Hmm, is it really this accurate? I need to examine whether it is working as intended.
Trying this for the first time, I learned the following:
・ Environment setup (Anaconda, TensorFlow, OpenCV)
・ The difficulty of preparing training data
・ How to use TensorBoard
Next, I would like to try:
・ Implementing from scratch
・ Understanding TensorFlow
・ Understanding and experimenting with CNNs
・ Preprocessing of training data
"You have now taken your first step toward implementing deep learning!" In that spirit, I will continue studying machine learning.
http://qiita.com/icoxfog417/items/02a80b93b5f1e95f2795
http://qiita.com/icoxfog417/items/5fd55fad152231d706c2
http://qiita.com/bohemian916/items/9630661cd5292240f8c7
http://qiita.com/icoxfog417/items/65e800c3a2094457c3a0
http://qiita.com/shim0mura/items/b0ec437206ed3d19d878
http://qiita.com/icoxfog417/items/fb5c24e35a849f8e2c5d
Tomorrow, day 24, is @shiraco's "Natural Language Processing Technology that Supports Dialogue Systems"! Looking forward to it!