Based on the TensorFlow tutorial on recognizing handwritten digit images, I exported the trained network data of a Deep Learning model and built a handwriting-recognition demo on Android.
Following the "MNIST For ML Beginners" model from the TensorFlow tutorial, the first step is to train the model in Python on a PC and write out the learned data.
"MNIST For ML Beginners" https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html
Starting from this tutorial's script, I modified it so that it exports the graph data:
https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/master/trainer-script/beginner.py
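For context, the training that precedes the export is the tutorial's plain softmax regression. Here is a minimal sketch, following the beginner tutorial (the `input_data` module is the MNIST helper bundled with the TensorFlow tutorials, and the API shown is the 0.x-era one used throughout this post):

```python
import tensorflow as tf
import input_data

mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Softmax regression: y = softmax(x * W + b)
x = tf.placeholder("float", [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss against the one-hot labels.
y_ = tf.placeholder("float", [None, 10])
cross_entropy = -tf.reduce_sum(y_ * tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
for _ in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```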
To export the network data, the graph structure and the tensor data (the learned weights) held in the Variables need to be written out together, but at the time, TensorFlow did not seem to offer a way to save the graph and the Variables in a single file.
So, after training, I evaluated the contents of the Variables to get them as ndarrays:
```python
# Store variables as ndarrays
_W = W.eval(sess)
_b = b.eval(sess)
```
I then converted each ndarray to a Constant, used it in place of the corresponding Variable to rebuild the graph, and exported the graph and the trained values together:
```python
# Regenerate the graph
g_2 = tf.Graph()
with g_2.as_default():
    x_2 = tf.placeholder("float", [None, 784], name="input")

    # Replace Variables with Constants
    W_2 = tf.constant(_W, name="constant_W")
    b_2 = tf.constant(_b, name="constant_b")

    y_2 = tf.nn.softmax(tf.matmul(x_2, W_2) + b_2, name="output")

    sess_2 = tf.Session()
    init_2 = tf.initialize_all_variables()
    sess_2.run(init_2)

    graph_def = g_2.as_graph_def()
    tf.train.write_graph(graph_def, './tmp/beginner-export',
                         'beginner-graph.pb', as_text=False)
```
So that the graph can be driven from the Android side, I named the input and output nodes "input" and "output", respectively.
It took only a few seconds to train and export this model.
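Before moving to the Android side, the exported file can be sanity-checked by loading it back in Python and running the named nodes. A sketch, assuming a TensorFlow version that provides `tf.GraphDef` and `tf.import_graph_def` (the path matches the export above):

```python
import numpy as np
import tensorflow as tf

# Read the exported GraphDef back in.
graph_def = tf.GraphDef()
with open('./tmp/beginner-export/beginner-graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as g:
    # name="" keeps the node names as exported ("input", "output").
    tf.import_graph_def(graph_def, name="")
    with tf.Session() as sess:
        x = g.get_tensor_by_name("input:0")
        y = g.get_tensor_by_name("output:0")
        # Feed a dummy 28x28 image flattened to 784 floats.
        image = np.zeros((1, 784), dtype=np.float32)
        print(sess.run(y, feed_dict={x: image}))
```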
The Android demo originally included in TensorFlow could only be built in the Bazel environment, so I created an environment where Android apps can be built using only Android Studio and NDK.
I saved the static library files (.a files) that are produced when building TensorFlow's Android sample with Bazel, so that the project can be built with the NDK alone.
https://github.com/miyosuda/TensorFlowAndroidMNIST/tree/master/jni-build/jni
Android.mk looks like this.
```makefile
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

TENSORFLOW_CFLAGS := -frtti \
	-fstack-protector-strong \
	-fpic \
	-ffunction-sections \
	-funwind-tables \
	-no-canonical-prefixes \
	'-march=armv7-a' \
	'-mfpu=vfpv3-d16' \
	'-mfloat-abi=softfp' \
	'-std=c++11' '-mfpu=neon' -O2

TENSORFLOW_SRC_FILES := ./tensorflow_jni.cc \
	./jni_utils.cc

LOCAL_MODULE := tensorflow_mnist
LOCAL_ARM_MODE := arm
LOCAL_SRC_FILES := $(TENSORFLOW_SRC_FILES)
LOCAL_CFLAGS := $(TENSORFLOW_CFLAGS)

LOCAL_LDLIBS := \
	-Wl,-whole-archive \
	$(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libandroid_tensorflow_lib.a \
	$(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libre2.a \
	$(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libprotos_all_cc.a \
	$(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libprotobuf.a \
	$(LOCAL_PATH)/libs/$(TARGET_ARCH_ABI)/libprotobuf_lite.a \
	-Wl,-no-whole-archive \
	$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/$(TARGET_ARCH_ABI)/libgnustl_static.a \
	$(NDK_ROOT)/sources/cxx-stl/gnu-libstdc++/4.9/libs/$(TARGET_ARCH_ABI)/libsupc++.a \
	-llog -landroid -lm -ljnigraphics -pthread -no-canonical-prefixes '-march=armv7-a' -Wl,--fix-cortex-a8 -Wl,-S

LOCAL_C_INCLUDES += $(LOCAL_PATH)/include $(LOCAL_PATH)/genfiles $(LOCAL_PATH)/include/third_party/eigen3

NDK_MODULE_PATH := $(call my-dir)

include $(BUILD_SHARED_LIBRARY)
```
Without setting the compiler and linker options as above, the exported graph data (Protocol Buffers data) could not be read correctly even when the build succeeded.
On the Java side, prepare the 28x28 pixel data of the handwritten input, pass it to the C++ side via JNI, and feed it into the graph reconstructed from the exported graph data.
https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/master/jni-build/jni/tensorflow_jni.cc
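For illustration, the shaping the JNI layer needs can be sketched in Python: the 28x28 grayscale pixels are flattened into 784 floats in the same format as the MNIST training data. This helper is hypothetical, and the 0-to-1 scaling with ink as high values is an assumption following the MNIST convention, not code taken from the repository:

```python
import numpy as np

def to_input_tensor(pixels_28x28):
    # Hypothetical helper: flatten a 28x28 grayscale image (0-255,
    # ink = high values) into the 1x784 float array the "input" node
    # expects, scaled to [0, 1] like the MNIST training data.
    arr = np.asarray(pixels_28x28, dtype=np.float32).reshape(1, 784)
    return arr / 255.0
```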
Recognition worked without problems.
The model above only reaches a recognition rate of about 91%, so I replaced it with the Deep Learning model (recognition rate 99.2%) from TensorFlow's "Deep MNIST for Experts".
"Deep MNIST for Experts" https://www.tensorflow.org/versions/master/tutorials/mnist/pros/index.html
Script for writing training data https://github.com/miyosuda/TensorFlowAndroidMNIST/blob/master/trainer-script/expert.py
When the DropOut node was included, execution on the Android side failed with an error for some reason; since DropOut is only needed during training in the first place, I removed it from the graph when exporting, as sketched below.
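A sketch of that export-time rewrite: the layer structure follows the expert tutorial, the `_W_conv1` … `_b_fc2` ndarrays are assumed to hold the trained values obtained with `eval()` as in the beginner export, and the actual script linked above may differ in detail.

```python
import tensorflow as tf

# Rebuild the expert graph for export: trained Variables become Constants
# and the tf.nn.dropout() step between fc1 and the readout layer is omitted.
g2 = tf.Graph()
with g2.as_default():
    x2 = tf.placeholder("float", [None, 784], name="input")
    x_image = tf.reshape(x2, [-1, 28, 28, 1])

    # First convolution + pooling (weights from the trained session).
    W_conv1 = tf.constant(_W_conv1, name="constant_W_conv1")
    b_conv1 = tf.constant(_b_conv1, name="constant_b_conv1")
    h_conv1 = tf.nn.relu(tf.nn.conv2d(x_image, W_conv1, strides=[1, 1, 1, 1],
                                      padding='SAME') + b_conv1)
    h_pool1 = tf.nn.max_pool(h_conv1, ksize=[1, 2, 2, 1],
                             strides=[1, 2, 2, 1], padding='SAME')

    # Second convolution + pooling.
    W_conv2 = tf.constant(_W_conv2, name="constant_W_conv2")
    b_conv2 = tf.constant(_b_conv2, name="constant_b_conv2")
    h_conv2 = tf.nn.relu(tf.nn.conv2d(h_pool1, W_conv2, strides=[1, 1, 1, 1],
                                      padding='SAME') + b_conv2)
    h_pool2 = tf.nn.max_pool(h_conv2, ksize=[1, 2, 2, 1],
                             strides=[1, 2, 2, 1], padding='SAME')

    # Fully connected layer (28x28 pooled twice -> 7x7x64 = 3136 features).
    W_fc1 = tf.constant(_W_fc1, name="constant_W_fc1")
    b_fc1 = tf.constant(_b_fc1, name="constant_b_fc1")
    h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
    h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

    # No dropout here: h_fc1 feeds the readout layer directly.
    W_fc2 = tf.constant(_W_fc2, name="constant_W_fc2")
    b_fc2 = tf.constant(_b_fc2, name="constant_b_fc2")
    y_conv = tf.nn.softmax(tf.matmul(h_fc1, W_fc2) + b_fc2, name="output")

    tf.train.write_graph(g2.as_graph_def(), './tmp/expert-export',
                         'expert-graph.pb', as_text=False)
```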
Training took about an hour in my environment (a MacBook Pro).
Since the input and output node names are unchanged, everything on the Android side runs as-is after exporting; only the graph data file needs to be replaced.
When I tried handwriting recognition, I was able to confirm that it recognized numbers from 0 to 9 fairly accurately.
The full set of sources for the above is here: https://github.com/miyosuda/TensorFlowAndroidMNIST