This time, I will use TensorFlow Lite on Android (Java) to classify images! If you spot a mistake in the code or have any improvements, please let me know!
According to the TensorFlow Lite Guide:
TensorFlow Lite is a toolset for using TensorFlow models on smartphones, IoT devices, and so on.
**The TensorFlow Lite interpreter runs specially optimized models on many different hardware types, including mobile phones, embedded Linux devices, and microcontrollers.**
The TensorFlow Lite converter transforms your TensorFlow model into an efficient format for use by the interpreter, and can apply optimizations to improve binary size and performance.
**⇒ In other words, it's a lightweight version of TensorFlow that can easily run not only on PCs but also on smartphones and IoT devices!** Someday we may even be able to train models on just a smartphone! Wow!
Prepare a model that has been trained in TensorFlow. This time I will use a hosted model, so I will skip this step!
TensorFlow Lite cannot use a TensorFlow model as-is, so you need to convert it to the dedicated format (.tflite). ~~Please refer to this article for conversion methods.~~ Since that article was deleted, here are an article I wrote and the official one: Model conversion method / Official article
This time, I will explain how to embed it in an Android (Java) app!
Please use any name you like for the project! This time we will use AndroidX; just check "Use androidx.* artifacts" and you're good. AndroidX is optional, so you don't have to use it.
Add the following to build.gradle under the app directory.
build.gradle(app)
dependencies {
    implementation 'org.tensorflow:tensorflow-lite:0.0.0-nightly'
    implementation 'org.tensorflow:tensorflow-lite-gpu:0.0.0-nightly'
}
That is all you need to add. As it stands, the app includes ABIs for every CPU and instruction set, but "armeabi-v7a" and "arm64-v8a" alone cover most Android devices, so configure the build not to include the other ABIs. It works fine either way, but this is recommended because it reduces the size of the app.
build.gradle(app)
android {
    defaultConfig {
        ndk {
            abiFilters 'armeabi-v7a', 'arm64-v8a'
        }
    }
}
For an easy-to-understand explanation of ABIs, please refer to this article.
Android compresses files in the assets folder, so if you put the model there as-is it will be compressed and cannot be read. Therefore, we specify that tflite files should not be compressed.
build.gradle(app)
android {
    aaptOptions {
        noCompress "tflite"
    }
}
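The reason this matters: TensorFlow Lite typically memory-maps the model file, which only works if the bytes are stored uncompressed in the APK. Here is a rough plain-Java sketch of the mapping step (on an actual device you would obtain the file descriptor from an AssetFileDescriptor rather than a plain File; `ModelLoader` and `mapModel` are names I made up for illustration):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ModelLoader {
    // Memory-map a .tflite file so the interpreter can read it without
    // copying it into the Java heap. This fails if the bytes were
    // compressed inside the APK, which is why noCompress is needed.
    public static MappedByteBuffer mapModel(File modelFile) throws IOException {
        try (FileInputStream input = new FileInputStream(modelFile);
             FileChannel channel = input.getChannel()) {
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        }
    }
}
```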
Place the model and the label text file in the assets folder. Please download the model from here.
First, create the assets folder. Then copy the files from the unzipped download into it, and rename them to "model.tflite" and "labels.txt".
This completes the model installation.
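For reference, labels.txt is simply one class name per line, so loading it takes only a few lines. This is a plain-Java sketch of what the sample's classifier does internally when it reads the label list (the `LabelLoader` helper is my own name, not part of the sample; on Android the Reader would wrap an InputStream from the assets folder):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class LabelLoader {
    // Read one class label per line, skipping blank lines.
    public static List<String> loadLabels(Reader source) throws IOException {
        List<String> labels = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(source)) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (!line.trim().isEmpty()) {
                    labels.add(line.trim());
                }
            }
        }
        return labels;
    }
}
```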
Copy the three classifier classes from the TensorFlow Lite Android sample, and copy Logger.java from [here](https://github.com/tensorflow/examples/blob/master/lite/examples/image_classification/android/app/src/main/java/org/tensorflow/lite/examples/classification/env/Logger.java). If you just copy them, you will get an error: in Classifier.java, the import path of the Logger class needs to be rewritten.
Classifier.java
import org.tensorflow.lite.Interpreter;
// Delete this line: import org.tensorflow.lite.examples.classification.env.Logger;
import org.tensorflow.lite.gpu.GpuDelegate;
/** A classifier specialized to label images using TensorFlow Lite. */
public abstract class Classifier {
private static final Logger LOGGER = new Logger();
If you delete it, Android Studio will prompt you like this, so press "Alt + Enter" and the import will be added automatically. When importing, there should be two candidates; select the one that does not say <Android API ~ Platform> (android.jar).
All the errors should now be gone.
In ClassifierFloatMobileNet.java and ClassifierQuantizedMobileNet.java, change the model-loading part that is common to both.
*** Original ***
ClassifierFloatMobileNet.java, ClassifierQuantizedMobileNet.java
@Override
protected String getModelPath() {
    // you can download this file from
    // see build.gradle for where to obtain this file. It should be auto
    // downloaded into assets.
    return "mobilenet_v1_1.0_224.tflite";
}
*** After change ***
ClassifierFloatMobileNet.java, ClassifierQuantizedMobileNet.java
@Override
protected String getModelPath() {
    // model.tflite was copied into the assets folder manually,
    // so there is no auto-download step anymore.
    return "model.tflite";
}
Arrange a TextView, a Button, and an ImageView like this. Set android:onClick on the Button so it reacts when pressed. (↑ Would setting a click listener be better than onClick? If anyone knows the details, please tell me!)
activity_main.xml
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <LinearLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:orientation="vertical">

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <TextView
                android:id="@+id/textView"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:text="TextView" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <Button
                android:id="@+id/button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                android:onClick="select"
                android:text="Select an image" />
        </LinearLayout>

        <LinearLayout
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:orientation="horizontal">

            <ImageView
                android:id="@+id/imageView"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_weight="1"
                tools:srcCompat="@tools:sample/avatars" />
        </LinearLayout>
    </LinearLayout>
</androidx.constraintlayout.widget.ConstraintLayout>
First, let's declare the variables to use!
MainActivity.java
ImageView imageView;
TextView textView;
Classifier classifier;
private static final int RESULT_IMAGEFILE = 1001; //Request code used when acquiring images
Associate the TextView and ImageView with their layout views in onCreate.
MainActivity.java
imageView = findViewById(R.id.imageView);
textView = findViewById(R.id.textView);
Then create the Classifier.
MainActivity.java
try {
    classifier = Classifier.create(this, Classifier.Model.QUANTIZED, Classifier.Device.CPU, 2);
} catch (IOException e) {
    e.printStackTrace();
}
The arguments specify the Activity, the model type, the device used for inference, and the number of threads to use. These settings will basically work, but feel free to change them as needed.
When the button is pressed, fire an Intent to open the gallery so that an image can be selected. (The method name must match the android:onClick attribute in the layout.)
MainActivity.java
public void select(View v) {
    Intent intent = new Intent(Intent.ACTION_OPEN_DOCUMENT);
    intent.addCategory(Intent.CATEGORY_OPENABLE);
    intent.setType("image/*");
    startActivityForResult(intent, RESULT_IMAGEFILE);
}
For more information about this, see [here](https://qiita.com/yukiyamadajp/items/137d15a4e65ed2308787).
When you come back from the gallery, get the image and process it.
MainActivity.java
@Override
public void onActivityResult(int requestCode, int resultCode, Intent resultData) {
    super.onActivityResult(requestCode, resultCode, resultData);
    if (requestCode == RESULT_IMAGEFILE && resultCode == Activity.RESULT_OK) {
        if (resultData.getData() != null) {
            ParcelFileDescriptor pfDescriptor = null;
            try {
                Uri uri = resultData.getData();
                pfDescriptor = getContentResolver().openFileDescriptor(uri, "r");
                if (pfDescriptor != null) {
                    FileDescriptor fileDescriptor = pfDescriptor.getFileDescriptor();
                    Bitmap bmp = BitmapFactory.decodeFileDescriptor(fileDescriptor);
                    pfDescriptor.close();
                    if (!bmp.isMutable()) {
                        bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
                    }
                    int height = bmp.getHeight();
                    int width = bmp.getWidth();
                    // Halve the size until both sides fit within 300,
                    // keeping the aspect ratio.
                    while (width > 300 || height > 300) {
                        width = width / 2;
                        height = height / 2;
                    }
                    Bitmap croppedBitmap = Bitmap.createScaledBitmap(bmp, width, height, false);
                    imageView.setImageBitmap(croppedBitmap);
                    List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);
                    String text = "";
                    for (Classifier.Recognition result : results) {
                        /*
                        RectF location = result.getLocation();
                        Float conf = result.getConfidence();
                        */
                        String title = result.getTitle();
                        text += title + "\n";
                    }
                    textView.setText(text);
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    if (pfDescriptor != null) {
                        pfDescriptor.close();
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }
    }
}
Since it is long, I will explain it piece by piece.
This code runs when you come back to the activity, and checks whether it came back from the gallery.
MainActivity.java
@Override
public void onActivityResult(int requestCode, int resultCode, Intent resultData) {
    super.onActivityResult(requestCode, resultCode, resultData);
    if (requestCode == RESULT_IMAGEFILE && resultCode == Activity.RESULT_OK) {
    }
}
This code gets the URI from the result and opens the file with a ParcelFileDescriptor. You receive a URI like "content://com.android.providers.media.documents/document/image%3A325268", and the image is loaded from it.
MainActivity.java
if (resultData.getData() != null) {
    ParcelFileDescriptor pfDescriptor = null;
    try {
        Uri uri = resultData.getData();
        pfDescriptor = getContentResolver().openFileDescriptor(uri, "r");
        if (pfDescriptor != null) {
            FileDescriptor fileDescriptor = pfDescriptor.getFileDescriptor();
This code converts the image obtained earlier into a bitmap and shrinks it so that each side is no larger than 300.
If the image is larger than 300, classification fails and the app crashes with `Caused by: java.lang.ArrayIndexOutOfBoundsException`.
Therefore, the image is scaled down to within 300 while keeping the aspect ratio.
MainActivity.java
Bitmap bmp = BitmapFactory.decodeFileDescriptor(fileDescriptor);
pfDescriptor.close();
if (!bmp.isMutable()) {
    bmp = bmp.copy(Bitmap.Config.ARGB_8888, true);
}
int height = bmp.getHeight();
int width = bmp.getWidth();
// Halve the size until both sides fit within 300, keeping the aspect ratio.
while (width > 300 || height > 300) {
    width = width / 2;
    height = height / 2;
}
Bitmap croppedBitmap = Bitmap.createScaledBitmap(bmp, width, height, false);
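The shrinking logic in the loop above can also be pulled out into a small standalone helper, which makes the intent ("halve both sides until they fit within the limit, keeping the aspect ratio") easier to see and to test (`ImageSize.shrinkToFit` is a name I made up for illustration):

```java
public class ImageSize {
    // Halve width and height together until both are within maxSide,
    // preserving the aspect ratio (integer division, as in the loop above).
    public static int[] shrinkToFit(int width, int height, int maxSide) {
        while (width > maxSide || height > maxSide) {
            width = width / 2;
            height = height / 2;
        }
        return new int[]{width, height};
    }
}
```

For example, shrinkToFit(1200, 800, 300) returns {300, 200}, and an image already within the limit is returned unchanged.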
It is finally time to classify. This code passes the processed image to the classifier and receives the results in a list. Then it loops over the list with a for loop, takes each result, and displays it in the TextView. This time only the predicted label is output, but you can also get the confidence of each prediction.
MainActivity.java
List<Classifier.Recognition> results = classifier.recognizeImage(croppedBitmap);
String text = "";
for (Classifier.Recognition result : results) {
    /*
    RectF location = result.getLocation();
    Float conf = result.getConfidence();
    */
    String title = result.getTitle();
    text += title + "\n";
}
textView.setText(text);
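If you only want the single most likely label instead of printing every result, you can pick the entry with the highest confidence. Here is a small sketch using a plain (title, confidence) pair as a stand-in for Classifier.Recognition (the class and method names here are my own, for illustration only):

```java
import java.util.Comparator;
import java.util.List;

public class TopResult {
    // Minimal stand-in for Classifier.Recognition: a label and its score.
    public static class Recognition {
        public final String title;
        public final float confidence;

        public Recognition(String title, float confidence) {
            this.title = title;
            this.confidence = confidence;
        }
    }

    // Return the title with the highest confidence, or null if the list is empty.
    public static String best(List<Recognition> results) {
        return results.stream()
                .max(Comparator.comparingDouble(r -> r.confidence))
                .map(r -> r.title)
                .orElse(null);
    }
}
```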
That's it!!
Now let's actually run it! First, an image of a dog: park bench... nail... American chameleon... Hmm, the accuracy is subtle. Next, an image of a beautiful landscape, the cityscape of Delft:
window screen... doormat... blind... Hmm, no good!
The accuracy was subtle, but hey, I was able to classify images! Next time, I would like to classify in real time! See you soon!