3. AI programming with Python

Introduction

If you arrived at this page directly, please refer to the parent page first.

Purpose here

Use PyCharm to write an AI program in Python. You can simply copy and paste the code, and you can follow along even if you don't yet understand what the source code means.

Launching the development environment

--Launch PyCharm (PyCharm Community Edition) to create a Python program. The project created last time is displayed, so click "mnist" Pycharm019.png

--PyCharm startup screen (the mnist project is open) Pycharm021.png

--The sample source code has nothing to do with AI, so delete all of it (you can do this quickly with Ctrl + A followed by the Delete key) Pycharm022.png

AI programming with Python

--The source code is below.

main.py


# ------------------------------------------------------------------------------------------------------------
# Try MNIST with a CNN (Convolutional Neural Network)
# ------------------------------------------------------------------------------------------------------------
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix
from keras.datasets import mnist
from keras import backend as ke
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D


# ------------------------------------------------------------------------------------------------------------
#Hyperparameters
# ------------------------------------------------------------------------------------------------------------
# Hyperparameters: batch size and number of epochs.
# For example, with 60,000 training samples and a batch_size of 6,000,
# using all of the training data takes 60,000 / 6,000 = 10 parameter updates.
# This is called 1 epoch. With 10 epochs, the parameters are updated 10 x 10 = 100 times.
# Set the number of epochs so that the value of the loss (cost) function has almost converged.
batch_size = 6000           # batch size
epochs = 5                  # number of epochs
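# Sanity check of the settings above (added note, not in the original code):
# 60,000 training samples / batch_size 6,000 = 10 parameter updates per epoch;
# with epochs = 5 that makes 10 x 5 = 50 updates in total.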


# ------------------------------------------------------------------------------------------------------------
# Function to display predictions and mark recognition errors
# ------------------------------------------------------------------------------------------------------------
def show_prediction():
    n_show = 100                                 # showing all is impractical, so show only the first 100
    y = model.predict(X_test)
    plt.figure(2, figsize=(10, 10))
    plt.gray()
    for i in range(n_show):
        plt.subplot(10, 10, (i+1))               # subplot(rows, columns, plot number)
        x = X_test[i, :]
        x = x.reshape(28, 28)
        plt.pcolor(1 - x)
        wk = y[i, :]
        prediction = np.argmax(wk)
        plt.text(22, 25.5, "%d" % prediction, fontsize=12)
        if prediction != np.argmax(y_test[i, :]):
            plt.plot([0, 27], [1, 1], color='red', linewidth=10)
        plt.xlim(0, 27)
        plt.ylim(27, 0)
        plt.xticks([], "")
        plt.yticks([], "")


# ------------------------------------------------------------------------------------------------------------
# Display the Keras backend (uncomment to check)
# ------------------------------------------------------------------------------------------------------------
# print(ke.backend())
# print(ke.floatx())


# ------------------------------------------------------------------------------------------------------------
# Load the MNIST data
# ------------------------------------------------------------------------------------------------------------
# The first run takes a while because the dataset is downloaded.
# The dataset holds 60,000 training images and 10,000 test images of the
# 10 digits, each a 28 x 28 pixel grayscale image.
# Download location: '~/.keras/datasets/'
# * If the MNIST download fails, review your PROXY settings.
#
# MNIST data
# ├ Training data (60,000 images)
# │ ├ image data
# │ └ label data
# │
# └ Test data (10,000 images)
#   ├ image data
#   └ label data

#        ↓ training data           ↓ test data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
#  ↑ images   ↑ labels      ↑ images  ↑ labels
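# For reference (added note; these are the standard MNIST array shapes):
#   X_train.shape == (60000, 28, 28), y_train.shape == (60000,)
#   X_test.shape  == (10000, 28, 28), y_test.shape  == (10000,)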


# ------------------------------------------------------------------------------------------------------------
# Reshape the image data (training data, test data)
# ------------------------------------------------------------------------------------------------------------
img_rows, img_cols = 28, 28
if ke.image_data_format() == 'channels_last':
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)
else:
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)

# Convert the arrays to float32 and scale pixel values from 0-255 to 0-1
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255


# ------------------------------------------------------------------------------------------------------------
# One-hot encoding of the label data (training data, test data)
# ------------------------------------------------------------------------------------------------------------
y_train = np_utils.to_categorical(y_train)      # one-hot encode the training labels
y_test = np_utils.to_categorical(y_test)        # one-hot encode the test labels
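# Example (added note): a label of 3 becomes the one-hot vector
#   [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]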


# ------------------------------------------------------------------------------------------------------------
#Network definition(keras)
# ------------------------------------------------------------------------------------------------------------
print("")
print("● Network definition")
model = Sequential()

# input layer: 28 x 28 x 1
model.add(Conv2D(16, kernel_size=(3, 3), activation='relu', input_shape=input_shape, padding='same'))     # layer 01: convolution, 16 filters
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))                                          # layer 02: convolution, 32 filters
model.add(MaxPooling2D(pool_size=(2, 2)))                                                                 # layer 03: max pooling
model.add(Dropout(0.25))                                                                                  # layer 04: dropout
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))                                          # layer 05: convolution, 32 filters
model.add(MaxPooling2D(pool_size=(2, 2)))                                                                 # layer 06: max pooling
model.add(Flatten())                                                                                      # layer 07: flatten to a 1D vector
model.add(Dense(128, activation='relu'))                                                                  # layer 08: fully connected, 128 outputs
model.add(Dense(10, activation='softmax'))                                                                # layer 09: fully connected, 10 outputs

#model display
model.summary()

#compile
# loss function: categorical_crossentropy (cross entropy)
# optimizer: Adam
model.compile(loss='categorical_crossentropy',
              optimizer='Adam',
              metrics=['accuracy'])
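# For reference (added note, not in the original code): with a one-hot target t and
# softmax output y over the 10 classes, categorical crossentropy is
#     L = -sum_k t_k * log(y_k)
# i.e. the negative log-probability the model assigns to the correct digit.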

print("")
print("● Start learning")
f_verbose = 1  # 0: silent, 1: progress bar, 2: one line per epoch
hist = model.fit(X_train, y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 validation_data=(X_test, y_test),
                 verbose=f_verbose)


# ------------------------------------------------------------------------------------------------------------
# Plot the learning curves (accuracy and loss)
# ------------------------------------------------------------------------------------------------------------
# accuracy (correct answer rate)
plt.plot(range(epochs), hist.history['accuracy'], marker='.')
plt.plot(range(epochs), hist.history['val_accuracy'], marker='.')
plt.title('Accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='lower right')
plt.show()

# loss (Loss function)
plt.plot(range(epochs), hist.history['loss'], marker='.')
plt.plot(range(epochs), hist.history['val_loss'], marker='.')
plt.title('Loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper right')
plt.show()


# ------------------------------------------------------------------------------------------------------------
# Evaluate on the test data
# ------------------------------------------------------------------------------------------------------------
print("")
print("● Verification result")
t_verbose = 1  # 0: silent, 1: progress bar, 2: one line per epoch
score = model.evaluate(X_test, y_test, verbose=t_verbose)

print("")
print("batch_size = ", batch_size)
print("epochs = ", epochs)

print('Test loss:', score[0])
print('Test accuracy:', score[1])


print("")
print("● Confusion matrix (horizontal: predicted class, vertical: true class)")
predict_classes = model.predict_classes(X_test, batch_size=batch_size)   # predicted digit for each test image
true_classes = np.argmax(y_test, axis=1)                                 # true digit from the one-hot labels
print(confusion_matrix(true_classes, predict_classes))


# ------------------------------------------------------------------------------------------------------------
# Display predictions and mark recognition errors
# ------------------------------------------------------------------------------------------------------------
show_prediction()
plt.show()

--Copy this source code into the editor pane shown earlier (inside the red frame). prog001.png

--Click the [Problems] tab in the lower pane to check its contents. If a red circle with "!" appears, as in the red frame below, a required library is missing and an error has occurred. prog009.png

prog001+.png

--You can also spot missing libraries directly in the source code: wherever a library is missing, the code is underlined in red. prog002.png

--The missing libraries are summarized below. Each library is provided by a package of the same name, except that the sklearn library comes from a package called scikit-learn.

No.  Missing library  Required package
1    keras            keras
2    numpy            numpy
3    matplotlib       matplotlib
4    sklearn          scikit-learn

Package installation

● Add packages from Anaconda

  1. Start Anaconda and click [Environments] -> [python37]

● Install the keras package

  1. Change [Installed] to [All]
  2. Enter [keras] in the search box to search for the keras package
  3. Turn ON the [keras] check box
  4. Click [Apply] at the bottom right prog003.png

--When the Install Packages message box appears, click [Apply] to install the keras package. prog004.png

● Install the numpy package

  1. Enter [numpy] in the search box to search for the numpy package
  2. Confirm that the numpy package is already installed

Because keras depends on numpy, numpy was installed automatically along with the keras package, so the numpy installation step could be skipped. prog005.png

● Install the matplotlib package

prog006.png

  1. Enter [matplotlib] in the search box to search for the matplotlib package
  2. Turn ON the [matplotlib] check box
  3. Click [Apply] at the bottom right
  4. When the Install Packages message box appears, click [Apply] to install the matplotlib package

● Install the scikit-learn package

  1. Enter [scikit-learn] in the search box to search for the scikit-learn package
  2. Turn ON the [scikit-learn] check box
  3. Click [Apply] at the bottom right
  4. When the Install Packages message box appears, click [Apply] to install the scikit-learn package
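
After installation, you can verify that all four packages import correctly before running the program. The snippet below is a minimal sketch (not part of the original article); run it in the python37 environment and it should print each package's version without raising an ImportError.

check_packages.py

# A small check script (an added example, not in the original article)
import importlib

for name in ['keras', 'numpy', 'matplotlib', 'sklearn']:
    module = importlib.import_module(name)                 # raises ImportError if the package is missing
    print(name, getattr(module, '__version__', 'unknown'))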

Program execution

--Confirm that all errors are gone.

  1. Click [Problems]
  2. Confirm that no red circle with "!" appears in Problems

prog009.png prog010.png

--Click the Run button at the upper right to execute the program. prog011.png

Output result

--If everything has been done properly, you will get results like the following.
--This time the result was "Test accuracy: 0.9359999895095825", so the recognition rate was about 93.6%.

C:\Users\xxxx\anaconda3\envs\python37\python.exe C:/Users/xxxx/PycharmProjects/mnist_sample/qiita.py
Using TensorFlow backend.

● Network definition
2020-08-06 11:36:11.346263: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 28, 28, 16)        160       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 28, 28, 32)        4640      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 14, 14, 32)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 14, 14, 32)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 14, 14, 32)        9248      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 7, 7, 32)          0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 1568)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 128)               200832    
_________________________________________________________________
dense_2 (Dense)              (None, 10)                1290      
=================================================================
Total params: 216,170
Trainable params: 216,170
Non-trainable params: 0
_________________________________________________________________

● Start learning
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
2020-08-06 11:36:12.480915: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 602112000 exceeds 10% of system memory.
2020-08-06 11:36:14.075159: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 602112000 exceeds 10% of system memory.

 6000/60000 [==>...........................] - ETA: 36s - loss: 2.3063 - accuracy: 0.0653
12000/60000 [=====>........................] - ETA: 32s - loss: 2.2858 - accuracy: 0.1563
18000/60000 [========>.....................] - ETA: 29s - loss: 2.2630 - accuracy: 0.2346
24000/60000 [===========>..................] - ETA: 24s - loss: 2.2374 - accuracy: 0.2971
30000/60000 [==============>...............] - ETA: 20s - loss: 2.2083 - accuracy: 0.3415
36000/60000 [=================>............] - ETA: 16s - loss: 2.1742 - accuracy: 0.3779
42000/60000 [====================>.........] - ETA: 12s - loss: 2.1342 - accuracy: 0.4095
48000/60000 [=======================>......] - ETA: 8s - loss: 2.0883 - accuracy: 0.4363 
54000/60000 [==========================>...] - ETA: 4s - loss: 2.0373 - accuracy: 0.4610
60000/60000 [==============================] - 44s 733us/step - loss: 1.9787 - accuracy: 0.4864 - val_loss: 1.3384 - val_accuracy: 0.7674
Epoch 2/5

 6000/60000 [==>...........................] - ETA: 39s - loss: 1.3002 - accuracy: 0.7305
12000/60000 [=====>........................] - ETA: 37s - loss: 1.2238 - accuracy: 0.7381
18000/60000 [========>.....................] - ETA: 33s - loss: 1.1505 - accuracy: 0.7432
24000/60000 [===========>..................] - ETA: 27s - loss: 1.0788 - accuracy: 0.7513
30000/60000 [==============>...............] - ETA: 23s - loss: 1.0145 - accuracy: 0.7597
36000/60000 [=================>............] - ETA: 18s - loss: 0.9617 - accuracy: 0.7652
42000/60000 [====================>.........] - ETA: 14s - loss: 0.9165 - accuracy: 0.7698
48000/60000 [=======================>......] - ETA: 9s - loss: 0.8742 - accuracy: 0.7754 
54000/60000 [==========================>...] - ETA: 4s - loss: 0.8390 - accuracy: 0.7804
60000/60000 [==============================] - 50s 831us/step - loss: 0.8084 - accuracy: 0.7856 - val_loss: 0.4861 - val_accuracy: 0.8541
Epoch 3/5

 6000/60000 [==>...........................] - ETA: 41s - loss: 0.4924 - accuracy: 0.8445
12000/60000 [=====>........................] - ETA: 36s - loss: 0.4970 - accuracy: 0.8453
18000/60000 [========>.....................] - ETA: 32s - loss: 0.5020 - accuracy: 0.8486
24000/60000 [===========>..................] - ETA: 28s - loss: 0.5005 - accuracy: 0.8508
30000/60000 [==============>...............] - ETA: 23s - loss: 0.4866 - accuracy: 0.8547
36000/60000 [=================>............] - ETA: 19s - loss: 0.4774 - accuracy: 0.8578
42000/60000 [====================>.........] - ETA: 14s - loss: 0.4730 - accuracy: 0.8603
48000/60000 [=======================>......] - ETA: 9s - loss: 0.4721 - accuracy: 0.8622 
54000/60000 [==========================>...] - ETA: 4s - loss: 0.4641 - accuracy: 0.8648
60000/60000 [==============================] - 52s 862us/step - loss: 0.4574 - accuracy: 0.8666 - val_loss: 0.3624 - val_accuracy: 0.9004
Epoch 4/5

 6000/60000 [==>...........................] - ETA: 44s - loss: 0.3941 - accuracy: 0.8850
12000/60000 [=====>........................] - ETA: 40s - loss: 0.3863 - accuracy: 0.8882
18000/60000 [========>.....................] - ETA: 34s - loss: 0.3731 - accuracy: 0.8912
24000/60000 [===========>..................] - ETA: 29s - loss: 0.3659 - accuracy: 0.8943
30000/60000 [==============>...............] - ETA: 25s - loss: 0.3545 - accuracy: 0.8971
36000/60000 [=================>............] - ETA: 20s - loss: 0.3461 - accuracy: 0.8987
42000/60000 [====================>.........] - ETA: 15s - loss: 0.3417 - accuracy: 0.9001
48000/60000 [=======================>......] - ETA: 10s - loss: 0.3421 - accuracy: 0.9008
54000/60000 [==========================>...] - ETA: 5s - loss: 0.3367 - accuracy: 0.9023 
60000/60000 [==============================] - 52s 874us/step - loss: 0.3332 - accuracy: 0.9033 - val_loss: 0.2740 - val_accuracy: 0.9225
Epoch 5/5

 6000/60000 [==>...........................] - ETA: 44s - loss: 0.2830 - accuracy: 0.9168
12000/60000 [=====>........................] - ETA: 39s - loss: 0.2939 - accuracy: 0.9151
18000/60000 [========>.....................] - ETA: 35s - loss: 0.2872 - accuracy: 0.9168
24000/60000 [===========>..................] - ETA: 30s - loss: 0.2782 - accuracy: 0.9193
30000/60000 [==============>...............] - ETA: 25s - loss: 0.2782 - accuracy: 0.9188
36000/60000 [=================>............] - ETA: 20s - loss: 0.2733 - accuracy: 0.9200
42000/60000 [====================>.........] - ETA: 15s - loss: 0.2686 - accuracy: 0.9217
48000/60000 [=======================>......] - ETA: 10s - loss: 0.2684 - accuracy: 0.9222
54000/60000 [==========================>...] - ETA: 4s - loss: 0.2654 - accuracy: 0.9233 
60000/60000 [==============================] - 52s 872us/step - loss: 0.2634 - accuracy: 0.9236 - val_loss: 0.2180 - val_accuracy: 0.9360

● Verification result

   32/10000 [..............................] - ETA: 5s
  320/10000 [..............................] - ETA: 2s
  608/10000 [>.............................] - ETA: 2s
  928/10000 [=>............................] - ETA: 1s
 1248/10000 [==>...........................] - ETA: 1s
 1568/10000 [===>..........................] - ETA: 1s
 1920/10000 [====>.........................] - ETA: 1s
 2272/10000 [=====>........................] - ETA: 1s
 2624/10000 [======>.......................] - ETA: 1s
 2976/10000 [=======>......................] - ETA: 1s
 3328/10000 [========>.....................] - ETA: 1s
 3680/10000 [==========>...................] - ETA: 1s
 4032/10000 [===========>..................] - ETA: 1s
 4384/10000 [============>.................] - ETA: 1s
 4736/10000 [=============>................] - ETA: 0s
 5088/10000 [==============>...............] - ETA: 0s
 5408/10000 [===============>..............] - ETA: 0s
 5728/10000 [================>.............] - ETA: 0s
 6048/10000 [=================>............] - ETA: 0s
 6368/10000 [==================>...........] - ETA: 0s
 6560/10000 [==================>...........] - ETA: 0s
 6816/10000 [===================>..........] - ETA: 0s
 7104/10000 [====================>.........] - ETA: 0s
 7392/10000 [=====================>........] - ETA: 0s
 7680/10000 [======================>.......] - ETA: 0s
 8000/10000 [=======================>......] - ETA: 0s
 8320/10000 [=======================>......] - ETA: 0s
 8640/10000 [========================>.....] - ETA: 0s
 8960/10000 [=========================>....] - ETA: 0s
 9280/10000 [==========================>...] - ETA: 0s
 9600/10000 [===========================>..] - ETA: 0s
 9920/10000 [============================>.] - ETA: 0s
10000/10000 [==============================] - 2s 196us/step

batch_size =  6000
epochs =  5
Test loss: 0.21799209741055967
Test accuracy: 0.9359999895095825

● Confusion matrix (horizontal: predicted class, vertical: true class)
[[ 966    0    1    1    0    1    6    1    4    0]
 [   0 1108    4    2    0    0    3    1   17    0]
 [  12    2  954   18    7    0    7    8   21    3]
 [   2    2    7  938    0   24    0   11   19    7]
 [   1    2    4    1  908    0   10    3    5   48]
 [   5    1    3   18    0  834    9    2   14    6]
 [  18    4    2    2    6   14  906    2    4    0]
 [   1    5   26    7    7    1    0  916    4   60]
 [  10    0    5   23    9   18    8    4  878   19]
 [  10    5    3   13    8    6    0    7    6  951]]

Process finished with exit code 0
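
In the confusion matrix above, rows are the true digits and columns are the predicted digits, so the diagonal holds the correctly recognized counts. For example, row 4 shows 908 fours recognized correctly while 48 were mistaken for 9. As a minimal sketch (not part of the original article, and assuming the true_classes and predict_classes variables from main.py are still in scope), per-class accuracy could be computed like this:

# Per-class accuracy from the confusion matrix (added example)
import numpy as np
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(true_classes, predict_classes)   # rows: true class, columns: predicted class
per_class_accuracy = np.diag(cm) / cm.sum(axis=1)      # correct counts / row totals
for digit, acc in enumerate(per_class_accuracy):
    print("digit %d: %.3f" % (digit, acc))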

Correct answer rate

--You can see that the accuracy increases steadily as the number of training epochs increases. exe001.PNG

Loss function result

--You can see that the loss decreases steadily as the number of training epochs increases. exe002.PNG

Errata

--There are 10,000 validation images, but displaying all of them is impractical, so only the results for the first 100 are shown.
--Images marked with a red line are the ones the AI misrecognized.
--The digit recognized by the AI is shown in the lower right corner of each frame. exe003.PNG

That's all.
