In this article, I will show you how to use a trained TensorFlow 2.0 model from Kotlin. The sample code is Kotlin only, but the same approach should work in Java.
The approach introduced here uses KerasModelImport from the deeplearning4j library. TensorFlow does have a Java API, so a niche library like deeplearning4j shouldn't normally be necessary, but a TensorFlow 2.0 build for Java has not been distributed yet, so I'm using deeplearning4j as a stopgap. (If you build TensorFlow yourself, you may be able to get a Java API compatible with TensorFlow 2.0.)
In other words:
I want to run deep learning inference in Kotlin/Java! But I don't want to write training code with deeplearning4j! And I can't wait for a TensorFlow 2.0 build for Java!
I hope this article serves as a stopgap measure for people in that situation.
Both TensorFlow and deeplearning4j change drastically from version to version, so the behavior may well differ with other versions. The versions used here are:
tensorflow (Python): 2.1.0
deeplearning4j (Kotlin/Java): 1.0.0-beta7
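To run the Kotlin side, deeplearning4j's model-import module and an ND4J backend need to be on the classpath. Below is a minimal Gradle (Kotlin DSL) sketch; the exact artifact coordinates are my assumption based on the 1.0.0-beta7 release, so check the deeplearning4j documentation for your setup.

// build.gradle.kts -- dependency sketch (coordinates are assumptions, verify against the DL4J docs)
dependencies {
    implementation("org.deeplearning4j:deeplearning4j-core:1.0.0-beta7")        // core modules (should pull in the MNIST dataset iterator)
    implementation("org.deeplearning4j:deeplearning4j-modelimport:1.0.0-beta7") // KerasModelImport
    implementation("org.nd4j:nd4j-native-platform:1.0.0-beta7")                 // CPU backend for ND4J
}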
As a sample, here is the Python code for training an MNIST classifier with TensorFlow 2.0.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Dense, Input
from tensorflow.keras import Model

# If it is not float64, it may not work properly on deeplearning4j.
tf.keras.backend.set_floatx('float64')

# Sequential and SubClassed models fail to import with deeplearning4j,
# so define the model with the Functional API.
def make_functional_model(data_size, target_size):
    inputs = Input(data_size)
    fc1 = Dense(512, activation='relu')
    fc2 = Dense(target_size, activation='softmax')
    outputs = fc2(fc1(inputs))
    return Model(inputs=inputs, outputs=outputs)

def train(dataset, model, criterion, optimizer):
    for data, target in dataset:
        with tf.GradientTape() as tape:
            output = model(data)
            loss = criterion(target, output)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

def main():
    # Preparing MNIST
    mnist = tf.keras.datasets.mnist
    (train_data, train_target), _ = mnist.load_data()
    train_data = train_data / 255.0
    train_data = np.reshape(train_data, (train_data.shape[0], -1))
    train_dataset = tf.data.Dataset.from_tensor_slices((train_data, train_target)).batch(32)
    # Model preparation
    data_size = train_data.shape[1]
    target_size = 10
    model = make_functional_model(data_size, target_size)
    # Training
    criterion = tf.keras.losses.SparseCategoricalCrossentropy()
    optimizer = tf.keras.optimizers.Adam()
    for epoch in range(5):
        train(train_dataset, model, criterion, optimizer)
    # SavedModel fails to import on deeplearning4j, so save in HDF5 format.
    model.save('checkpoint.h5')

if __name__ == '__main__':
    main()
That's all for the training code. Note that deeplearning4j only imported the model successfully when a Functional model was saved in HDF5 format; models defined with the Sequential or SubClassed API, and models saved in the SavedModel format, could not be imported.
Next is the inference code, written in Kotlin using deeplearning4j.
import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport

fun main() {
    // MNIST test set, one sample per batch
    val mnist = MnistDataSetIterator(1, false, 123)
    // Import the Functional model saved in HDF5 format
    val model = KerasModelImport.importKerasModelAndWeights("checkpoint.h5")
    //val model = KerasModelImport.importKerasSequentialModelAndWeights("checkpoint.h5")
    mnist.forEach {
        val data = it.features
        val output = model.output(data)[0].toFloatVector()
        val pred = output.indexOf(output.max()!!)
    }
}
You just import the model and run inference with its output function, which makes it very easy to use. There is also an import function for Sequential models (the commented-out line above), but as mentioned, that import fails here.
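As a quick sanity check, the loop above can be extended into a test-set accuracy measurement. This is only a sketch under the same assumptions as the code above (the checkpoint.h5 saved earlier, and the one-hot labels that MnistDataSetIterator produces):

import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport

fun main() {
    // Same iterator as above: MNIST test set, one sample per batch
    val mnist = MnistDataSetIterator(1, false, 123)
    val model = KerasModelImport.importKerasModelAndWeights("checkpoint.h5")
    var correct = 0
    var total = 0
    mnist.forEach { sample ->
        // Predicted class = argmax of the softmax output
        val output = model.output(sample.features)[0].toFloatVector()
        val pred = output.indexOf(output.max()!!)
        // MnistDataSetIterator returns one-hot labels, so take the argmax there too
        val labelVector = sample.labels.toFloatVector()
        val label = labelVector.indexOf(labelVector.max()!!)
        if (pred == label) correct++
        total++
    }
    println("accuracy = ${correct.toDouble() / total}")
}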
In this article, I introduced how to use a trained TensorFlow 2.0 model from Kotlin/Java. I'm hoping a stable Java API for TensorFlow becomes available; until then, I'll make do with this approach.
And let's all do Deep Learning with Kotlin! (That's what I really wanted to say.)