Deep learning image analysis starting with Kaggle and Keras

Introduction

This article introduces Kaggle's development environment (the Kernel notebook) and the classification of the MNIST dataset using Keras, as an introduction to image analysis. The Kernel is publicly available on Kaggle, so if you want to run it yourself, please have a look.

If you find any mistakes, or have questions or comments, please let me know. An LGTM would be very encouraging!

What is Kaggle

Kaggle is the world's largest online platform for data analysis competitions. It also provides an online development environment called the Kernel notebook that lets you start working on an analysis immediately, which is ideal for getting started with data analysis. This article is intended for those who have already registered with Kaggle and know how to use Kernels. (There are many helpful articles for those who don't, so I think you can catch up quickly.)

See below for more details.
- Articles useful for those who are new to Kaggle

What is Keras

Keras is a deep learning library that runs on top of TensorFlow, and is known for letting you build deep learning models very quickly.

See below for more details.
- [[Even beginners can understand] What is Keras? Carefully explained from the basics!](https://udemy.benesse.co.jp/ai/keras.html)
- [Deep Learning with Python and Keras](https://www.amazon.co.jp/Python%E3%81%A8Keras%E3%81%AB%E3%82%88%E3%82%8B%E3%83%87%E3%82%A3%E3%83%BC%E3%83%97%E3%83%A9%E3%83%BC%E3%83%8B%E3%83%B3%E3%82%B0-Francois-Chollet/dp/4839964262)
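
To get a feel for how concise this is, here is a minimal sketch of defining and compiling a small classifier with the Sequential API (illustrative only; the actual models used in this article appear later):

#A minimal Keras model sketch (illustrative, not used below)
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(784,)))  #hidden layer
model.add(Dense(10, activation='softmax'))                   #output layer for 10 classes
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()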

What is MNIST

It is a very famous image dataset consisting of handwritten digits (0-9) together with the digits they actually represent (the correct labels).

See below for more details.
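
As a quick illustration of the dataset's structure, MNIST can also be loaded directly via keras.datasets (a minimal sketch; this article instead uses the CSV version distributed on Kaggle):

#Sketch: loading MNIST directly from Keras (not used in this article)
from keras.datasets import mnist

(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)  #(60000, 28, 28): 60000 grayscale images of 28x28 pixels
print(y_train[:10])   #correct labels as integers 0-9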

Main subject

Again, this article is also posted as a Kernel on Kaggle, so if you want to fork it, please have a look there as well.

Data loading and checking

You can see that the training data consists of 42000 rows. Also, although there are 785 columns, the first column is the correct label, so the features used for training consist of the remaining 784 columns. You can see that the labels are integers and that most elements of the features are 0.

#Imports used throughout this article
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

#Load the training data
train = pd.read_csv("../input/train.csv")
print(train.shape)
train.head()


There are 28000 rows of test data. Since there is no correct label, it consists of 784 columns.

#Load the test data
test = pd.read_csv("../input/test.csv")
print(test.shape)
test.head()


We perform type conversion and convert the data to easy-to-use NumPy arrays.


#Extract the feature columns of the training data, excluding the correct label
X_train = train.iloc[:,1:].values.astype('float32')
#Extract only the correct label from the training data
y_train = train.iloc[:,0].values.astype('int32')
#The test data as-is
X_test = test.values.astype('float32')

Let's look at the percentage of zeros in the data. About 80% of the elements are 0. In this dataset, 0 means blank space, i.e., an area where nothing is written.


print(f"The percentage of non-zero elements{round((X_train > 0).sum()/(X_train >= 0 ).sum()*100)}%is")
print(f"The percentage of 0 elements{round((X_train == 0).sum()/(X_train >= 0 ).sum()*100)}%is")


Looking at the distribution of correct labels, we can see that every label from 0 to 9 makes up about 10% of the data. Therefore, no adjustment for class imbalance seems necessary.



#Display the percentage of each correct label as a pie chart
f, ax = plt.subplots(1, figsize=(6,6))
y_count = pd.Series(y_train).value_counts().sort_index()
ax.pie(y_count.values,
       labels=y_count.index,
       autopct='%1.1f%%',
       counterclock=False,
       startangle=90)
plt.show()


Next, we transform the data into a form that is easy to handle as an image. As we saw earlier, each image consists of 784 elements, which is actually a flattened version of a square image of 28 x 28 pixels. We reshape all of the training data (42000 = X_train.shape[0]) to 28x28.


#Data transformation
X_train = X_train.reshape(X_train.shape[0], 28, 28)

Let's visualize the transformed raw data. Here, to think about what the raw data means, we render it as a character string rather than using an image library, printing # wherever a non-zero element exists. You will get output that looks like handwritten digits.

The original data is a list of numbers that is hard for humans to read, but it actually encodes the presence of ink (whether the value is non-zero) and its density (the numeric value).


#Convert data to character string and display
def visualize_str(d):
    d = d.astype("int32").astype("str")    
    d[d != "0"] = "# "
    d[d == "0"] = ". "
    d = pd.DataFrame(d)
    for i in range(d.shape[0]):
        print("".join(d.iloc[i,:]))
    print("") 
        
for i in range(1):
    visualize_str(X_train[i])


Displaying the data as images gives the same picture as the character-string output above.


#Visualize with image
f, ax = plt.subplots(1,3)
for i in range(3):   
    ax[i].imshow(X_train[i], cmap=plt.get_cmap('gray'))


Since colored images are often used in image analysis, the model input is assumed to have a color-channel dimension that holds the color intensities (typically the three primary colors, RGB). Since this data is grayscale, the channel adds no new information, but we reshape the tensor to follow this convention. We also apply one-hot encoding to the labels and fix the random seed in preparation for training.


#Add color channel
X_train = X_train.reshape(X_train.shape[0], 28, 28,1)
X_test = X_test.reshape(X_test.shape[0], 28, 28,1)

#One hot encoding
from keras.utils.np_utils import to_categorical
y_train= to_categorical(y_train)

#Fix the random seed for reproducibility
seed = 0
np.random.seed(seed)

Model 1: Linear model

It's finally time for modeling. First, let's try a very simple linear model.

Before that, we define the required module imports and the standardization function.

from keras.models import  Sequential
from keras.layers.core import  Lambda , Dense, Flatten, Dropout
#from keras.callbacks import EarlyStopping
from keras.layers import BatchNormalization, Convolution2D , MaxPooling2D

#Define the standardization function
mean_X = X_train.mean().astype(np.float32)
std_X = X_train.std().astype(np.float32)

def standardize(x): 
    return (x-mean_X)/std_X

Now let's create the model.


#Define the linear model

model_linear= Sequential()
#Standardization
model_linear.add(Lambda(standardize,input_shape=(28,28,1)))
#Flatten to one dimension before the fully connected layer
model_linear.add(Flatten())
#Fully connected layer
model_linear.add(Dense(10, activation='softmax'))
#Model visualization
print("model_linear")
model_linear.summary()


In Keras, you compile a model after defining it. In compile, you specify the loss, the quantity that training actually optimizes, and the metrics you want to monitor.

#Compile the model
#Specify the loss to optimize and the metrics to monitor
from keras.optimizers import Adam ,RMSprop

model_linear.compile(optimizer=RMSprop(lr=0.001),
                     loss='categorical_crossentropy',
                     metrics=['accuracy'])

Now that the model is ready, we want to feed it data, but the data needs some preparation first. We prepare a generator and split the data for cross-validation. (Strictly speaking, this is the holdout method, but on Kaggle it is customarily called cross-validation, so, aware of the misnomer, I will use that term.)


#Prepare the input data

#Define the generator
from keras.preprocessing import image
generator = image.ImageDataGenerator()

#Keep the full training data for use at submission time
X = X_train
y = y_train

#Cross-validation
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.15, random_state=seed)
train_generator = generator.flow(X_train, y_train, batch_size=64)
val_generator = generator.flow(X_val, y_val, batch_size=64)
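
For reference, true k-fold cross-validation repeats the split k times so that every sample is used for validation exactly once. A minimal sketch with scikit-learn (not used in this article; X and y are the full arrays set aside above):

#Sketch: actual k-fold cross-validation, for contrast with the holdout above
from sklearn.model_selection import KFold

kf = KFold(n_splits=5, shuffle=True, random_state=seed)
for fold, (train_idx, val_idx) in enumerate(kf.split(X)):
    X_tr, X_vl = X[train_idx], X[val_idx]
    y_tr, y_vl = y[train_idx], y[val_idx]
    #...train a fresh model on (X_tr, y_tr) and evaluate on (X_vl, y_vl)...
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}")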

We also set up TensorBoard to make the learning progress and results easier to inspect. The URL is printed to standard output, so open it in your browser.


#Launch TensorBoard (exposed via an ngrok tunnel)

import tensorflow as tf 
!rm -rf ./logs/ 
!mkdir ./logs/
!wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
!unzip ngrok-stable-linux-amd64.zip

import os
import multiprocessing
import datetime

pool = multiprocessing.Pool(processes = 10)
results_of_processes = [pool.apply_async(os.system, args=(cmd, ), callback = None )
                        for cmd in [
                        f"tensorboard --logdir ./logs/ --host 0.0.0.0 --port 6006 &",
                        "./ngrok http 6006 &","y"
                        ]]
! curl -s http://localhost:4040/api/tunnels | python3 -c \
    "import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"

        
log_dir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir, histogram_freq=0)

- Example of standard output: https://1aa43df57b90.ngrok.io

Now that we're ready, it's time to start training. The progress bar and the metrics specified in compile show how training is progressing.


#Train the model for 3 epochs
#This should take roughly 15 minutes
import tensorflow as tf
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    #Note: steps_per_epoch here is train_generator.n (the number of samples),
    #so each epoch iterates many more batches than one pass over the data;
    #the conventional setting is train_generator.n // batch_size
    history_linear=model_linear.fit_generator(generator=train_generator,
                                              steps_per_epoch=train_generator.n, 
                                              epochs=3, 
                                              validation_data=val_generator, 
                                              validation_steps=val_generator.n, 
                                              callbacks=[tensorboard_callback]
                                             )


The numbers above give some idea of how training went, but let's visualize the results for a more intuitive understanding.

Looking at the loss, we can see that overfitting is occurring: the validation loss increases as training progresses. On the other hand, note that the validation accuracy (ACC) does not decrease monotonically. Discontinuous metrics like accuracy are hard to optimize directly, so we instead optimize an easier-to-handle loss function.
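
To see why, consider a single prediction: nudging the predicted probability changes the cross-entropy loss smoothly, while the accuracy only jumps when the prediction crosses the decision boundary. A toy numeric sketch:

#Toy example: loss changes continuously, accuracy changes in jumps
import numpy as np

for p in [0.40, 0.49, 0.51, 0.60]:  #predicted probability of the true class
    loss = -np.log(p)               #cross-entropy contribution of this sample
    correct = int(p > 0.5)          #accuracy contribution (0 or 1)
    print(f"p={p:.2f}  loss={loss:.3f}  correct={correct}")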

#Visualization of results

#Definition of the function to plot the result
def plt_history(history,keys):
    history_dict = history.history
    n = len(keys)
    f, ax = plt.subplots(n,figsize=(8,4*n))
    for i in range(n):
        train_value = history_dict[keys[i][0]]
        val_value = history_dict[keys[i][1]]
        epochs = range(1, len(train_value) + 1)
        if n==1:
            ax.plot(epochs, train_value, 'bo',label = keys[i][0])
            ax.plot(epochs, val_value, 'b+',label = keys[i][1])
            ax.legend()
            ax.set_xlabel('Epochs')
            ax.set_ylabel(keys[i][0])
        else:
            ax[i].plot(epochs, train_value, 'bo',label = keys[i][0])
            ax[i].plot(epochs, val_value, 'b+',label = keys[i][1])
            ax[i].legend()
            ax[i].set_xlabel('Epochs')
            ax[i].set_ylabel(keys[i][0])

    plt.show()

#Visualization
plt_history(history_linear, [["loss","val_loss"],["acc","val_acc"]])


Model 2: Fully connected model

In this model, one more fully connected layer is added to make the network deeper. We have also changed the optimization algorithm to Adam. (Such choices should really be compared experimentally to determine the best one.)

Also, whereas earlier we defined and processed the model step by step, in general it is common to define model archetypes as classes or functions and then create several learning models from different parameters and training data. So from here on we define each model with a function.

#Model definition
def get_fc_model():
    model = Sequential()
    model.add(Lambda(standardize,input_shape=(28,28,1)))
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(optimizer = Adam(), 
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model


model_fc = get_fc_model()
model_fc.optimizer.lr=0.01
model_fc.summary()


We train the model in the same way as before. If you want to try multiple models in a short time, you have to make trade-offs, such as reducing the number of epochs. Looking at the results, the validation accuracy (ACC) is higher than before.


#Model learning
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    history_fc=model_fc.fit_generator(generator=train_generator, 
                                      steps_per_epoch=train_generator.n, 
                                      epochs=1, 
                                      validation_data=val_generator, 
                                      validation_steps=val_generator.n,
                                      callbacks=[tensorboard_callback]
                                     )

#Learning results
history_dict_fc = history_fc.history
history_dict_fc


Model 3: CNN model

Next, let's try CNN models that include convolutional and pooling layers. Since a CNN can efficiently learn spatial structure, it promises high accuracy on image analysis problems. This time we prepare two network depths and look at the difference.

#Model definition


from keras.layers import Convolution2D, MaxPooling2D

#A model with two convolutions and two poolings
def get_cnn_model1():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax')
        ])
    model.compile(optimizer = Adam(), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

#A model with 3 convolutions and 3 poolings
def get_cnn_model2():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        MaxPooling2D(),
        Convolution2D(64,(3,3), activation='relu'),
        MaxPooling2D(),
        Convolution2D(128,(3,3), activation='relu'),
        MaxPooling2D(),
        Flatten(),
        Dense(512, activation='relu'),
        Dense(10, activation='softmax')
        ])
    model.compile(optimizer = Adam(), loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model

model_cnn1 = get_cnn_model1()
model_cnn2 = get_cnn_model2()
model_cnn1.optimizer.lr=0.01
model_cnn2.optimizer.lr=0.01

The shallow model learns 843,658 parameters.

model_cnn1.summary()


The deeper model learns 163,850 parameters, fewer than model_cnn1. This is because the repeated convolutions and poolings shrink the spatial dimensions, greatly reducing the size of the input to the fully connected layer. In fact, looking at the dimension after Flatten, it is 1600 for cnn1 but only 128 for cnn2. When images are large, reducing the data size this way is desirable, but what results will we get with compact data like ours?
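
These flatten sizes can be checked by hand: each 3x3 valid convolution shrinks each side by 2, and each 2x2 pooling halves it (rounding down). A small sketch of that arithmetic:

#Trace the spatial size through each model (3x3 valid convs, 2x2 pooling)
def flatten_size(n_conv_pool_blocks, last_channels):
    size = 28
    for _ in range(n_conv_pool_blocks):
        size = (size - 2) // 2  #conv removes a 1-pixel border, pooling halves
    return size * size * last_channels

print(flatten_size(2, 64))   #cnn1: 28 -> 13 -> 5, flatten = 5*5*64 = 1600
print(flatten_size(3, 128))  #cnn2: 28 -> 13 -> 5 -> 1, flatten = 1*1*128 = 128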

model_cnn2.summary()


We train each model.


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    history_cnn1=model_cnn1.fit_generator(generator=train_generator, 
                                     steps_per_epoch=train_generator.n, 
                                     epochs=1, 
                                     validation_data=val_generator, 
                                     validation_steps=val_generator.n, 
                                     callbacks=[tensorboard_callback]
                                    )

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    history_cnn2=model_cnn2.fit_generator(generator=train_generator, 
                                     steps_per_epoch=train_generator.n, 
                                     epochs=1, 
                                     validation_data=val_generator, 
                                     validation_steps=val_generator.n, 
                                     #callbacks=[tensorboard_callback]
                                    )

Let's check the results.

Compared to the shallow CNN model, the deep CNN model has worse loss and accuracy on both the training and validation data. There are several possible reasons for this.

① The model is bad. The number of learnable parameters has decreased. Having many adjustable parameters means the model can express more varied functions; this model has few parameters, so it may not have been expressive enough.

② The training data is bad. Overfitting tends to occur when training data is insufficient. Here the accuracy on the training data is not so bad, but it diverges considerably from the validation results, which is a typical sign of overfitting. Data augmentation, which we try next, may improve this.

③ Insufficient training. It is possible that the model has simply not been trained enough. Its training loss is larger than cnn1's, so training may well progress if we increase the epochs. However, the divergence from validation is a separate issue, so increasing the epochs in this state is not a fundamental solution.

history_cnn1.history


history_cnn2.history


Data augmentation

The more training data there is, the higher the generalization performance, i.e., the less likely overfitting becomes. Data augmentation is a technique that artificially inflates a limited amount of training data by making small changes to the given data.

#Data augmentation

from keras.preprocessing import image


DA_generator =image.ImageDataGenerator(rotation_range=10, 
                                 width_shift_range=0.1, 
                                 shear_range=0.1,
                                 height_shift_range=0.1, 
                                 zoom_range=0.1)
train_DA_generator = DA_generator.flow(X_train, y_train, batch_size=64)
val_DA_generator = DA_generator.flow(X_val, y_val, batch_size=64)

#Examples of augmented data
tmp_gen = DA_generator.flow(X_train[0].reshape((1,28,28,1)), batch_size = 1)
for i, tmp in enumerate(tmp_gen):
    plt.subplot(330 + (i+1))
    plt.imshow(tmp.reshape((28,28)), cmap=plt.get_cmap('gray'))
    if i == 8:
        break


Training should take about 15 minutes. As a result of data augmentation, the validation results have improved. Both the training and validation results improve with each epoch, so if you have time, try increasing the epochs and experimenting.

#Shallow CNN (model_cnn1) with augmented data

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    model_cnn1.optimizer.lr=0.005
    history_cnn1_DA=model_cnn1.fit_generator(generator=train_DA_generator, 
                                          steps_per_epoch=train_DA_generator.n, 
                                          epochs=1, 
                                          validation_data=val_DA_generator, 
                                          validation_steps=val_DA_generator.n, 
                                          callbacks=[tensorboard_callback]
                                         )

history_cnn1_DA.history


Model 4: Batch normalization

In the previous models, the input was standardized. However, that does not guarantee that the output of each layer stays standardized: after some layer, the outputs may become very large and the parameters very small, or vice versa.

In such cases training does not go well, but the problem can be addressed by adding batch normalization layers that standardize the output of each layer. This is said to improve both generalization performance and training speed.
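
Concretely, batch normalization standardizes each layer's output over the current mini-batch and then rescales it with two learned parameters, gamma and beta. A NumPy sketch of the forward computation (training mode only, ignoring the running averages used at inference):

#Sketch of the batch normalization forward pass for one mini-batch
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mean = x.mean(axis=0)                    #per-feature mean over the batch
    var = x.var(axis=0)                      #per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  #standardize
    return gamma * x_hat + beta              #learned scale and shift

batch = np.random.randn(64, 512) * 50 + 10   #badly scaled activations
out = batch_norm(batch)
print(out.mean(), out.std())                 #approximately 0 and 1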

The number of epochs is again set to 1, but if you have time to spare, increase it and observe how the overfitting behaves.


from keras.layers.normalization import BatchNormalization

def get_bn_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu'),
        BatchNormalization(axis=1),  #note: for channels_last inputs, axis=-1 is the usual choice
        MaxPooling2D(),
        BatchNormalization(axis=1),
        Convolution2D(64,(3,3), activation='relu'),
        BatchNormalization(axis=1),
        MaxPooling2D(),
        BatchNormalization(axis=1),
        Flatten(),
        BatchNormalization(),
        Dense(512, activation='relu'),
        BatchNormalization(),
        Dense(10, activation='softmax')
        ])
    model.compile(optimizer = Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
    return model
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    model_bn= get_bn_model()
    model_bn.optimizer.lr=0.01
    history_bn=model_bn.fit_generator(generator=train_DA_generator, 
                                      steps_per_epoch=train_DA_generator.n, 
                                      epochs=1, 
                                      validation_data=val_DA_generator, 
                                      validation_steps=val_DA_generator.n,
                                      callbacks=[tensorboard_callback]
                                     )

history_bn.history


Model 5: Optimal model

Finally, here is the model with the highest score among those introduced in this article. Designing such a model requires trial and error, adding layers and changing optimizers as we have done so far; this optimal model is the result of that trial and error (within my current abilities and development environment).

Dropout layers, He weight initialization, and a callback that lowers the learning rate step by step are added.

As a result, the validation accuracy reaches 99.4%.


from keras.layers import Dense, Dropout, Flatten, Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.callbacks import ReduceLROnPlateau

def get_opt_model():
    model = Sequential([
        Lambda(standardize, input_shape=(28,28,1)),
        Convolution2D(32,(3,3), activation='relu',kernel_initializer='he_normal'),
        Convolution2D(32,(3,3), activation='relu',kernel_initializer='he_normal'),
        MaxPooling2D(),
        Dropout(0.20),
        Convolution2D(32,(3,3), activation='relu',kernel_initializer='he_normal'),
        Convolution2D(32,(3,3), activation='relu',kernel_initializer='he_normal'),
        MaxPooling2D(),
        Dropout(0.25),
        Convolution2D(32,(3,3), activation='relu',kernel_initializer='he_normal'),
        Dropout(0.25),
        Flatten(),
        Dense(128, activation='relu'),
        BatchNormalization(),
        Dropout(0.25),
        Dense(10, activation='softmax')
    ])
    model.compile(optimizer=Adam(),
                  loss='categorical_crossentropy', 
                  metrics=['accuracy'])
    return model


learning_rate_reduction = ReduceLROnPlateau(monitor='val_loss', 
                                            patience=3, 
                                            verbose=1, 
                                            factor=0.5, 
                                            min_lr=0.0001)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    model_opt = get_opt_model()
    history_opt = model_opt.fit_generator(generator=train_DA_generator, 
                                          steps_per_epoch=train_DA_generator.n, 
                                          epochs=3, 
                                          validation_data=val_DA_generator, 
                                          validation_steps=val_DA_generator.n,
                                          callbacks=[tensorboard_callback, learning_rate_reduction]
                                         )
    Y_pred = model_opt.predict_classes(X_val,verbose = 0)
    Y_pred_prob = model_opt.predict(X_val,verbose = 0)

#Visualization of results
plt_history(history_opt, [["loss","val_loss"],["acc","val_acc"]])


history_opt.history


Checking the results

Even the optimal model gave some incorrect answers. What kind of data does it get wrong? Looking at the confusion matrix, there are almost no incorrect answers, but there are a few cases where a 1 is mistaken for a 7 or a 7 for a 2.

#Define a function to display the confusion matrix
import itertools
def plt_confusion_mtx(confusion_mtx):
    cmap=plt.cm.Reds
    title='Confusion matrix'
    f, ax = plt.subplots(1,figsize=(6,6))
    im = ax.imshow(confusion_mtx, interpolation='nearest', cmap=cmap)
    ax.set_title(title)
    ax.set_xticks(np.arange(10))
    ax.set_yticks(np.arange(10))
    ax.set_xlabel('Predicted label')
    ax.set_ylabel('True label')
    f.colorbar(im)
    thresh = confusion_mtx.max() / 2
    for i, j in itertools.product(range(confusion_mtx.shape[0]), range(confusion_mtx.shape[1])):
        ax.text(j, i, confusion_mtx[i, j],
                horizontalalignment="center",
                color="white" if confusion_mtx[i, j] > thresh else "black")

        

#Display of confusion matrix
from sklearn.metrics import confusion_matrix
Y_true = np.argmax(y_val,axis=1) 
confusion_mtx = confusion_matrix(Y_true, Y_pred) 
plt_confusion_mtx(confusion_mtx)


#Check precision, recall, etc. for each class

from sklearn.metrics import classification_report
target_names = ["Class {}".format(i) for i in range(10)]
print(classification_report(Y_true, Y_pred, target_names=target_names))


When you actually look at the misclassified data, some cases are hard even for humans to distinguish, and in some it is rather the labeling itself that looks wrong.

#Examine the misclassified data

errors = (Y_pred - Y_true != 0)
Y_pred_errors = Y_pred[errors]
Y_pred_prob_errors = Y_pred_prob[errors]
Y_true_errors = Y_true[errors]
X_val_errors = X_val[errors]

def display_errors(errors_index,img_errors,pred_errors, obs_errors):
    """ This function shows 6 images with their predicted and real labels"""
    n = 0
    nrows = 2
    ncols = 3
    fig, ax = plt.subplots(nrows,ncols, figsize = (8,8))
    for row in range(nrows):
        for col in range(ncols):
            error = errors_index[n]
            ax[row,col].imshow((img_errors[error]).reshape((28,28)))
            ax[row,col].set_title("Predicted label :{}\nTrue label :{}".format(pred_errors[error],obs_errors[error]))
            n += 1

#Select and display the six errors the model was most confident about
errors = (Y_pred - Y_true != 0)
tmp = Y_pred_prob[errors] - to_categorical(Y_pred[errors])
display_index = np.argsort(tmp.max(axis=1))[:6]
display_errors(display_index, X_val_errors, Y_pred_errors, Y_true_errors)


It can be said that the optimum model has sufficient performance to withstand practical use.
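
Finally, to actually submit to Kaggle you would predict on the test data and write a CSV. A sketch, assuming the Digit Recognizer competition's ImageId / Label submission format (for a real submission, the model should ideally be retrained on the full training data X, y set aside earlier):

#Sketch: create a submission file (assumes the ImageId/Label format)
import numpy as np
import pandas as pd

test_pred = model_opt.predict_classes(X_test, verbose=0)
submission = pd.DataFrame({
    "ImageId": np.arange(1, len(test_pred) + 1),  #IDs start at 1
    "Label": test_pred
})
submission.to_csv("submission.csv", index=False)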

In conclusion

This concludes the introduction to deep learning image analysis with Keras. From here, you can try training on more complex datasets, transfer learning with pretrained models, well-known deep learning architectures, and much more. If this article was useful to anyone, I would like to keep updating tutorial articles like this, so please support me with an LGTM.

Also, I am always looking for work, so please let me know if you have an opportunity.
