[TF] How to save and load Models and Parameters in Keras

Save & Load Model

A built **Model** can be saved as text in **JSON** or **YAML** file format. The saved file can later be loaded to rebuild the Model.

Use **model.to_json()** / **model.to_yaml()** to save, where model is the Model you built. Both methods return a string, so you have to write it to a file yourself.

Use **model_from_json()** / **model_from_yaml()** to load. These take a string as an argument, so you have to read the file yourself.

**Save and load (JSON)**

json_string = model.to_json()

model = model_from_json(json_string)
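
Putting the two together, a full round trip through a file might look like this (a minimal sketch; the file name architecture.json is an assumption):

from keras.models import model_from_json

# Save: to_json() returns a string, so write it to a file yourself
json_string = model.to_json()
with open('architecture.json', 'w') as f:
    f.write(json_string)

# Load: read the string back yourself and rebuild the Model
with open('architecture.json') as f:
    model = model_from_json(f.read())

The YAML variant is identical, using to_yaml() and model_from_yaml() instead.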

Save & Load Parameters

Use **save_weights** / **load_weights** to save and load the learned parameters.

model.save_weights('param.hdf5')

model.load_weights('param.hdf5')

Use a **Callback** to save parameters during training. The Callback to use is **ModelCheckpoint**, which is called at the end of every epoch.

Arguments

|argument|description|
|:--|:--|
|filepath|Name of the file to save to.|
|monitor|The quantity to watch, e.g. monitor='val_loss'.|
|verbose|Whether to print a message to standard output when saving.|
|save_best_only|Whether to save only when the monitored quantity improves. If False, the model is saved every epoch.|
|mode|Direction of improvement for the monitored quantity: specify max for quantities that should grow (e.g. accuracy) and min for those that should shrink (e.g. loss). With auto, the direction is inferred from the name of the monitored quantity.|

If filepath is a fixed name, the file is overwritten every time, so you can embed the values of monitored variables in the name to make it unique. The variables that can be used are epoch, loss, acc, val_loss, and val_acc.

For example, if you specify filepath as below, the values at save time are filled in automatically.

filepath = 'weights.{epoch:02d}-{loss:.2f}-{acc:.2f}-{val_loss:.2f}-{val_acc:.2f}.hdf5'
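
As a hedged sketch of how this wires together (Keras 1.x API, consistent with the full example below; the model and data are assumed to already exist):

from keras.callbacks import ModelCheckpoint

# Epoch and metric values are filled into the file name at save time;
# save_best_only=True saves only when val_loss improves
cp_cb = ModelCheckpoint(
    filepath='weights.{epoch:02d}-{loss:.2f}-{acc:.2f}-{val_loss:.2f}-{val_acc:.2f}.hdf5',
    monitor='val_loss', verbose=1, save_best_only=True, mode='auto')

model.fit(X_train, Y_train, nb_epoch=20, callbacks=[cp_cb],
          validation_data=(X_test, Y_test))

Note that validation_data (or validation_split) is needed for val_loss and val_acc to be available.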

Code example

Build and train a Model

The example below builds a Model, saves the weights during training with a Callback, and finally saves the Model and the weights.

import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.utils import np_utils
from keras.optimizers import Adam
import keras.callbacks
import keras.backend.tensorflow_backend as KTF
import tensorflow as tf
import os.path

batch_size = 128
nb_classes = 10
nb_epoch = 20

img_rows = 28
img_cols = 28

f_log = './log'
f_model = './model'
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test  = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test  = X_test.astype('float32')
X_train /= 255
X_test /= 255

Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

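# Keep a handle on the current session so it can be restored after training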
old_session = KTF.get_session()

with tf.Graph().as_default():
    session = tf.Session('')
    KTF.set_session(session)

    model = Sequential()

    model.add(Convolution2D(32, 3, 3, border_mode = 'valid', input_shape=(1, img_rows, img_cols)))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Convolution2D(64, 3, 3, border_mode = 'valid'))
    model.add(Activation('relu'))
    model.add(MaxPooling2D(pool_size=(2,2)))
    model.add(Flatten())
    model.add(Dense(128))
    model.add(Activation('relu'))
    model.add(Dense(10))
    model.add(Activation('softmax'))

    model.summary()

    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001, beta_1=0.5), metrics=['accuracy'])
    
    tb_cb = keras.callbacks.TensorBoard(log_dir=f_log, histogram_freq=1)
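    # Checkpoint with a templated file name; saves only when val_loss improves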
    cp_cb = keras.callbacks.ModelCheckpoint(filepath = os.path.join(f_model,'cnn_model{epoch:02d}-loss{loss:.2f}-acc{acc:.2f}-vloss{val_loss:.2f}-vacc{val_acc:.2f}.hdf5'), monitor='val_loss', verbose=1, save_best_only=True, mode='auto')
    cbks = [tb_cb, cp_cb]

    history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, callbacks=cbks, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])

    print('save the architecture of a model')
    json_string = model.to_json()
    open(os.path.join(f_model,'cnn_model.json'), 'w').write(json_string)
    yaml_string = model.to_yaml()
    open(os.path.join(f_model,'cnn_model.yaml'), 'w').write(yaml_string)
    print('save weights')
    model.save_weights(os.path.join(f_model,'cnn_model_weights.hdf5'))
KTF.set_session(old_session)

The execution result looks like this.

____________________________________________________________________________________________________
Layer (type)                       Output Shape        Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)    (None, 32, 26, 26)  320         convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)          (None, 32, 26, 26)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)      (None, 32, 13, 13)  0           activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)    (None, 64, 11, 11)  18496       maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)          (None, 64, 11, 11)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)      (None, 64, 5, 5)    0           activation_2[0][0]               
____________________________________________________________________________________________________
flatten_1 (Flatten)                (None, 1600)        0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
dense_1 (Dense)                    (None, 128)         204928      flatten_1[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)          (None, 128)         0           dense_1[0][0]                    
____________________________________________________________________________________________________
dense_2 (Dense)                    (None, 10)          1290        activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)          (None, 10)          0           dense_2[0][0]                    
====================================================================================================
Total params: 225034
____________________________________________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.1610 - acc: 0.9524Epoch 00000: val_loss improved from inf to 0.05515, saving model to ./model/cnn_model00-loss0.16-acc0.95-vloss0.06-vacc0.98.hdf5
60000/60000 [==============================] - 24s - loss: 0.1609 - acc: 0.9525 - val_loss: 0.0552 - val_acc: 0.9830
Epoch 2/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0478 - acc: 0.9853Epoch 00001: val_loss improved from 0.05515 to 0.04094, saving model to ./model/cnn_model01-loss0.05-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 24s - loss: 0.0477 - acc: 0.9853 - val_loss: 0.0409 - val_acc: 0.9871
Epoch 3/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0338 - acc: 0.9891Epoch 00002: val_loss improved from 0.04094 to 0.03424, saving model to ./model/cnn_model02-loss0.03-acc0.99-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0338 - acc: 0.9891 - val_loss: 0.0342 - val_acc: 0.9890
Epoch 4/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0248 - acc: 0.9918Epoch 00003: val_loss improved from 0.03424 to 0.02830, saving model to ./model/cnn_model03-loss0.02-acc0.99-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0248 - acc: 0.9918 - val_loss: 0.0283 - val_acc: 0.9898
Epoch 5/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0192 - acc: 0.9940Epoch 00004: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0192 - acc: 0.9940 - val_loss: 0.0286 - val_acc: 0.9908
Epoch 6/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0145 - acc: 0.9954Epoch 00005: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0145 - acc: 0.9954 - val_loss: 0.0300 - val_acc: 0.9914
Epoch 7/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0119 - acc: 0.9961Epoch 00006: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0119 - acc: 0.9961 - val_loss: 0.0396 - val_acc: 0.9881
Epoch 8/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0097 - acc: 0.9968Epoch 00007: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0097 - acc: 0.9969 - val_loss: 0.0302 - val_acc: 0.9901
Epoch 9/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0075 - acc: 0.9977Epoch 00008: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0075 - acc: 0.9976 - val_loss: 0.0400 - val_acc: 0.9877
Epoch 10/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0081 - acc: 0.9973Epoch 00009: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0081 - acc: 0.9972 - val_loss: 0.0352 - val_acc: 0.9905
Epoch 11/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0058 - acc: 0.9979Epoch 00010: val_loss did not improve
60000/60000 [==============================] - 24s - loss: 0.0058 - acc: 0.9979 - val_loss: 0.0359 - val_acc: 0.9912
Epoch 12/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0056 - acc: 0.9981Epoch 00011: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0056 - acc: 0.9981 - val_loss: 0.0346 - val_acc: 0.9915
Epoch 13/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0055 - acc: 0.9983Epoch 00012: val_loss improved from 0.02830 to 0.02716, saving model to ./model/cnn_model12-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 23s - loss: 0.0055 - acc: 0.9983 - val_loss: 0.0272 - val_acc: 0.9926
Epoch 14/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.9992Epoch 00013: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0029 - acc: 0.9992 - val_loss: 0.0365 - val_acc: 0.9917
Epoch 15/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0052 - acc: 0.9983Epoch 00014: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0052 - acc: 0.9983 - val_loss: 0.0357 - val_acc: 0.9916
Epoch 16/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0047 - acc: 0.9986Epoch 00015: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0047 - acc: 0.9987 - val_loss: 0.0311 - val_acc: 0.9922
Epoch 17/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0038 - acc: 0.9988Epoch 00016: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0038 - acc: 0.9988 - val_loss: 0.0424 - val_acc: 0.9905
Epoch 18/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.9986Epoch 00017: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 0.0040 - acc: 0.9986 - val_loss: 0.0382 - val_acc: 0.9922
Epoch 19/20
59904/60000 [============================>.] - ETA: 0s - loss: 5.6678e-04 - acc: 0.9999Epoch 00018: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 5.6587e-04 - acc: 1.0000 - val_loss: 0.0379 - val_acc: 0.9926
Epoch 20/20
59904/60000 [============================>.] - ETA: 0s - loss: 3.6203e-04 - acc: 1.0000Epoch 00019: val_loss did not improve
60000/60000 [==============================] - 23s - loss: 3.6146e-04 - acc: 1.0000 - val_loss: 0.0379 - val_acc: 0.9930
('Test score:', 0.037918671134550642)
('Test accuracy:', 0.99299999999999999)
save the architecture of a model
save weights

Load the Model and Weights from a file

import numpy as np
from keras.datasets import mnist
from keras.models import model_from_json
from keras.utils import np_utils
from keras.optimizers import Adam
import keras.callbacks
import keras.backend.tensorflow_backend as KTF
import tensorflow as tf
import os.path

batch_size = 128
nb_classes = 10
nb_epoch = 3

img_rows = 28
img_cols = 28

f_log = './log'
f_model = './model'
model_filename = 'cnn_model.json'
weights_filename = 'cnn_model_weights.hdf5'

(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test  = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
X_train = X_train.astype('float32')
X_test  = X_test.astype('float32')
X_train /= 255
X_test /= 255

Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

old_session = KTF.get_session()

with tf.Graph().as_default():
    session = tf.Session('')
    KTF.set_session(session)

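    # Rebuild the architecture from the saved JSON, then restore the learned weights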
    json_string = open(os.path.join(f_model, model_filename)).read()
    model = model_from_json(json_string)

    model.summary()

    model.compile(loss='categorical_crossentropy', optimizer=Adam(lr=0.001, beta_1=0.5), metrics=['accuracy'])

    model.load_weights(os.path.join(f_model,weights_filename))

    cbks = []

    history = model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch, verbose=1, callbacks=cbks, validation_data=(X_test, Y_test))
    score = model.evaluate(X_test, Y_test, verbose=0)
    print('Test score:', score[0])
    print('Test accuracy:', score[1])
    
KTF.set_session(old_session)

The following is the execution result. Since training starts from the already-learned parameters, the accuracy is high from the very first epoch.

____________________________________________________________________________________________________
Layer (type)                       Output Shape        Param #     Connected to                     
====================================================================================================
convolution2d_1 (Convolution2D)    (None, 32, 26, 26)  320         convolution2d_input_1[0][0]      
____________________________________________________________________________________________________
activation_1 (Activation)          (None, 32, 26, 26)  0           convolution2d_1[0][0]            
____________________________________________________________________________________________________
maxpooling2d_1 (MaxPooling2D)      (None, 32, 13, 13)  0           activation_1[0][0]               
____________________________________________________________________________________________________
convolution2d_2 (Convolution2D)    (None, 64, 11, 11)  18496       maxpooling2d_1[0][0]             
____________________________________________________________________________________________________
activation_2 (Activation)          (None, 64, 11, 11)  0           convolution2d_2[0][0]            
____________________________________________________________________________________________________
maxpooling2d_2 (MaxPooling2D)      (None, 64, 5, 5)    0           activation_2[0][0]               
____________________________________________________________________________________________________
flatten_1 (Flatten)                (None, 1600)        0           maxpooling2d_2[0][0]             
____________________________________________________________________________________________________
dense_1 (Dense)                    (None, 128)         204928      flatten_1[0][0]                  
____________________________________________________________________________________________________
activation_3 (Activation)          (None, 128)         0           dense_1[0][0]                    
____________________________________________________________________________________________________
dense_2 (Dense)                    (None, 10)          1290        activation_3[0][0]               
____________________________________________________________________________________________________
activation_4 (Activation)          (None, 10)          0           dense_2[0][0]                    
====================================================================================================
Total params: 225034
____________________________________________________________________________________________________
Train on 60000 samples, validate on 10000 samples
Epoch 1/3
60000/60000 [==============================] - 17s - loss: 0.0024 - acc: 0.9993 - val_loss: 0.0442 - val_acc: 0.9916
Epoch 2/3
60000/60000 [==============================] - 17s - loss: 0.0049 - acc: 0.9986 - val_loss: 0.0401 - val_acc: 0.9914
Epoch 3/3
60000/60000 [==============================] - 17s - loss: 0.0031 - acc: 0.9991 - val_loss: 0.0424 - val_acc: 0.9912
('Test score:', 0.042397765618482915)
('Test accuracy:', 0.99119999999999997)

Setting the save frequency with your own Callback

Let's make a Callback that lets us set how often the weights are saved. To create a Callback, inherit from keras.callbacks.Callback. The timing at which each method is called is fixed, and so are the method names, so you only override the parts you want to change; you can call the parent implementation when necessary. The available methods are listed below, followed by a minimal skeleton.

|method|description|
|:--|:--|
|on_epoch_begin|Called at the start of each epoch.|
|on_epoch_end|Called at the end of each epoch.|
|on_batch_begin|Called at the start of each batch.|
|on_batch_end|Called at the end of each batch.|
|on_train_begin|Called at the start of training.|
|on_train_end|Called at the end of training.|
|_set_params|Called at the start of training; Model information is passed as an argument. Rarely needed. The TensorBoard Callback calls histogram_summary at this point.|
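
As a minimal, hedged skeleton (the class name and print messages are illustrative only):

import keras.callbacks

class MyCallback(keras.callbacks.Callback):
    # Override only the hooks you need; the rest keep their default no-op behavior
    def on_train_begin(self, logs={}):
        print('training started')

    def on_epoch_end(self, epoch, logs={}):
        # logs carries the metrics of the finished epoch, e.g. logs.get('val_loss')
        print('epoch %d done, val_loss=%s' % (epoch, logs.get('val_loss')))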

The Callback looks like this. It simply receives the save frequency in the constructor and then, in on_epoch_end, calls the inherited on_epoch_end only at that frequency.

class ModelCheckpointEx(keras.callbacks.ModelCheckpoint):
    def __init__(self, filepath, verbose=0, save_freq=1):
        super(ModelCheckpointEx, self).__init__(filepath, verbose=verbose)
        self.save_freq = save_freq
        
    def on_epoch_end(self, epoch, logs={}):
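        # epoch is 0-based, so with save_freq=2 the weights are saved at epochs 0, 2, 4, ...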
        if epoch % self.save_freq == 0:
            super(ModelCheckpointEx, self).on_epoch_end(epoch, logs=logs)

The usage is the same as before, so I will omit the full training code.
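
As a hedged sketch, registering it looks like this (save_freq=2 and the file name pattern match the log below):

cp_cb = ModelCheckpointEx(filepath = os.path.join(f_model,'cnn_model{epoch:02d}-loss{loss:.2f}-acc{acc:.2f}-vloss{val_loss:.2f}-vacc{val_acc:.2f}.hdf5'), verbose=1, save_freq=2)
cbks = [tb_cb, cp_cb]

The execution result is as follows. You can see that the weights are saved once every two epochs.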

Train on 60000 samples, validate on 10000 samples
Epoch 1/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.1646 - acc: 0.9498Epoch 00000: saving model to ./model/cnn_model00-loss0.16-acc0.95-vloss0.06-vacc0.98.hdf5
60000/60000 [==============================] - 16s - loss: 0.1645 - acc: 0.9499 - val_loss: 0.0613 - val_acc: 0.9820
Epoch 2/20
60000/60000 [==============================] - 16s - loss: 0.0524 - acc: 0.9833 - val_loss: 0.0396 - val_acc: 0.9867
Epoch 3/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0361 - acc: 0.9889Epoch 00002: saving model to ./model/cnn_model02-loss0.04-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0361 - acc: 0.9889 - val_loss: 0.0353 - val_acc: 0.9877
Epoch 4/20
60000/60000 [==============================] - 16s - loss: 0.0269 - acc: 0.9913 - val_loss: 0.0306 - val_acc: 0.9900
Epoch 5/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0203 - acc: 0.9937Epoch 00004: saving model to ./model/cnn_model04-loss0.02-acc0.99-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0203 - acc: 0.9937 - val_loss: 0.0422 - val_acc: 0.9871
Epoch 6/20
60000/60000 [==============================] - 16s - loss: 0.0174 - acc: 0.9942 - val_loss: 0.0315 - val_acc: 0.9893
Epoch 7/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0119 - acc: 0.9962Epoch 00006: saving model to ./model/cnn_model06-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0119 - acc: 0.9962 - val_loss: 0.0329 - val_acc: 0.9901
Epoch 8/20
60000/60000 [==============================] - 16s - loss: 0.0100 - acc: 0.9967 - val_loss: 0.0337 - val_acc: 0.9881
Epoch 9/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0082 - acc: 0.9975Epoch 00008: saving model to ./model/cnn_model08-loss0.01-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0081 - acc: 0.9975 - val_loss: 0.0360 - val_acc: 0.9890
Epoch 10/20
60000/60000 [==============================] - 16s - loss: 0.0082 - acc: 0.9974 - val_loss: 0.0305 - val_acc: 0.9911
Epoch 11/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0071 - acc: 0.9977Epoch 00010: saving model to ./model/cnn_model10-loss0.01-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0071 - acc: 0.9977 - val_loss: 0.0348 - val_acc: 0.9903
Epoch 12/20
60000/60000 [==============================] - 16s - loss: 0.0073 - acc: 0.9974 - val_loss: 0.0313 - val_acc: 0.9913
Epoch 13/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0040 - acc: 0.9988Epoch 00012: saving model to ./model/cnn_model12-loss0.00-acc1.00-vloss0.03-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0040 - acc: 0.9988 - val_loss: 0.0348 - val_acc: 0.9915
Epoch 14/20
60000/60000 [==============================] - 16s - loss: 0.0027 - acc: 0.9993 - val_loss: 0.0457 - val_acc: 0.9891
Epoch 15/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0067 - acc: 0.9977Epoch 00014: saving model to ./model/cnn_model14-loss0.01-acc1.00-vloss0.05-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0066 - acc: 0.9977 - val_loss: 0.0483 - val_acc: 0.9891
Epoch 16/20
60000/60000 [==============================] - 16s - loss: 0.0046 - acc: 0.9984 - val_loss: 0.0358 - val_acc: 0.9902
Epoch 17/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0022 - acc: 0.9994Epoch 00016: saving model to ./model/cnn_model16-loss0.00-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0022 - acc: 0.9995 - val_loss: 0.0394 - val_acc: 0.9910
Epoch 18/20
60000/60000 [==============================] - 16s - loss: 0.0046 - acc: 0.9986 - val_loss: 0.0398 - val_acc: 0.9895
Epoch 19/20
59904/60000 [============================>.] - ETA: 0s - loss: 0.0029 - acc: 0.9992Epoch 00018: saving model to ./model/cnn_model18-loss0.00-acc1.00-vloss0.04-vacc0.99.hdf5
60000/60000 [==============================] - 16s - loss: 0.0029 - acc: 0.9992 - val_loss: 0.0419 - val_acc: 0.9915
Epoch 20/20
60000/60000 [==============================] - 16s - loss: 0.0027 - acc: 0.9992 - val_loss: 0.0368 - val_acc: 0.9911
('Test score:', 0.036752203683738938)
('Test accuracy:', 0.99109999999999998)
