In the previous article, I covered deploying a Python face recognition model to Heroku.
Deploy Python face recognition model to Heroku and use it from Flutter ①
This time, I would like to show how to call that model from a mobile application and actually perform face comparison. I used Flutter to create a sample mobile app.
I tweet about app development using face recognition on Twitter: https://twitter.com/studiothere2
My development diary is serialized on note: https://note.com/there2
It is a simple app that judges whether two images show the same person: select two images from the gallery and press the compare button at the bottom right. The app gets the embedding of each image from the web service created in the previous article and calculates the L2 norm between the embeddings. If the L2 norm is less than 0.6, the two are judged to be the same person; otherwise, they are judged to be different people.
For example, if you select two different people as shown below, the L2 norm will be greater than 0.6 and they will be judged as different people.
If the L2 norm is 0.5 or less, there is a high probability that they are the same person; 0.6 is the commonly used threshold for judging identity with roughly 99% accuracy.
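A minimal sketch of this judgment logic, using the `ml_linalg` package listed in `pubspec.yaml` below (the function name `isSamePerson` is just for illustration):

import 'package:ml_linalg/linalg.dart';

/// Returns true when two embeddings are close enough to count as the same person.
bool isSamePerson(List<double> embedding1, List<double> embedding2) {
  final distance = Vector.fromList(embedding1)
      .distanceTo(Vector.fromList(embedding2), distance: Distance.euclidean);
  return distance < 0.6; // the threshold used throughout this article
}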
pubspec.yaml
dependencies:
  flutter:
    sdk: flutter
  image_picker: ^0.6.6+1
  ml_linalg: ^12.7.1
  http: ^0.12.1
  http_parser: ^3.1.4
- `image_picker`: gets an image from the gallery. It can fetch images from both the gallery and the camera.
- `ml_linalg`: a Dart library for vector operations, used here to calculate the L2 norm.
- `http`, `http_parser`: used to call the web service.
First is the import section, which pulls in the required libraries.
main.dart
import 'dart:convert';
import 'dart:typed_data';
import 'package:flutter/material.dart';
import 'package:image_picker/image_picker.dart';
import 'package:ml_linalg/linalg.dart';
import 'package:http/http.dart' as http;
import 'package:http_parser/http_parser.dart';
import './secret.dart';
`secret.dart` holds the information for accessing the web service. It is not in git, so substitute values appropriate for your own environment.
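For reference, here is a minimal sketch of what `secret.dart` could look like. The constant name `yourRemoteUrl` is the one referenced later in `uploadFile`; the URL itself is a hypothetical placeholder.

secret.dart

// A hypothetical sketch; replace with your own endpoint from the previous article.
const String yourRemoteUrl = "https://your-app-name.herokuapp.com/";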
main.dart
void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}
Up to this point, nothing has been changed from the defaults generated for a new project. What follows is the main class.
main.dart
class _MyHomePageState extends State<MyHomePage> {
  /// Image for comparison 1
  Uint8List _cmpImage1;

  /// Image for comparison 2
  Uint8List _cmpImage2;

  /// Euclidean distance between the two faces.
  double _distance = 0;

  @override
  void initState() {
    super.initState();
  }
Member variables hold the byte data (`Uint8List`) of the two images to be compared, and a `_distance` variable holds the L2 norm (Euclidean distance) between the embeddings of the two images.
main.dart
  Future<Uint8List> _readImage() async {
    var imageFile = await ImagePicker.pickImage(source: ImageSource.gallery);
    return imageFile.readAsBytesSync();
  }
The `ImagePicker` library gets the image from the gallery. The return value is of type `File`, so it is converted to `Uint8List` with the `readAsBytesSync()` method. `Uint8List` is an array of `int` values and can be handled as byte data.
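Because `_readImage` is already `async`, the bytes could also be read asynchronously instead of with the blocking call; a minor variation with the same behavior:

  Future<Uint8List> _readImage() async {
    var imageFile = await ImagePicker.pickImage(source: ImageSource.gallery);
    // readAsBytes returns a Future<Uint8List>, so the read does not block.
    return await imageFile.readAsBytes();
  }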
main.dart
  /// Returns the similarity text depending on the Euclidean distance between the two images
  String _getCompareResultString() {
    if (_distance == 0) {
      return "";
    } else if (_distance < 0) {
      return "processing....";
    } else if (_distance < 0.6) {
      return "Same person";
    } else {
      return "Another person";
    }
  }
This function returns the image comparison result as text according to the value of `_distance`, the L2 norm. A value of -1 means processing is in progress; a distance below 0.6 is reported as the same person, and anything above as a different person. It is called from the widget and displayed on screen.
The next method is a little long, so I'll split it up and look at it in order.
main.dart
  void uploadFile() async {
    setState(() {
      _distance = -1;
    });
    var response;
    var postUri = Uri.parse(yourRemoteUrl);
    var request1 = http.MultipartRequest("POST", postUri);
First, `_distance` is set to -1 so that the screen shows processing is in progress. Replace `postUri` with your own endpoint (it comes from `secret.dart`). Here we prepare the http request.
main.dart
    // First file
    debugPrint("start: " + DateTime.now().toIso8601String());
    request1.files.add(http.MultipartFile.fromBytes('file', _cmpImage1.toList(),
        filename: "upload.jpeg", contentType: MediaType('image', 'jpeg')));
    response = await request1.send();
    if (response.statusCode == 200) print("Uploaded1!");
The image obtained from the gallery is attached to the http request, and the request is sent. A status code of `200` means the upload succeeded.
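The code above does not handle failures; as a hedged sketch (not in the original), you could bail out of `uploadFile` whenever the status code is anything else:

    // Hypothetical error handling, not in the original code.
    if (response.statusCode != 200) {
      debugPrint("Upload failed with status ${response.statusCode}");
      setState(() => _distance = 0); // clear the "processing" indicator
      return;
    }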
main.dart
    var featureString1 = await response.stream.bytesToString();
    List<double> embeddings1 =
        (jsonDecode(featureString1) as List<dynamic>).cast<double>();
    debugPrint("end: " + DateTime.now().toIso8601String());
The web service's response is read from the byte stream into a string (`featureString1`), decoded with `jsonDecode`, and cast element-wise to `double`, yielding a `List<double>`. This array is the embedding of the image; comparing the embeddings of the two images tells us whether they show the same person.
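For illustration, here is how that decode-and-cast chain behaves on a tiny, hypothetical payload:

import 'dart:convert';

void main() {
  // jsonDecode parses the JSON array string into a List<dynamic>...
  final decoded = jsonDecode("[0.1, -0.2, 0.3]") as List<dynamic>;
  // ...and cast<double>() views its elements as doubles.
  final List<double> embedding = decoded.cast<double>();
  print(embedding); // [0.1, -0.2, 0.3]
}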
main.dart
    // Second file
    var request2 = http.MultipartRequest("POST", postUri);
    request2.files.add(http.MultipartFile.fromBytes('file', _cmpImage2.toList(),
        filename: "upload.jpeg", contentType: MediaType('image', 'jpeg')));
    response = await request2.send();
    if (response.statusCode == 200) print("Uploaded2!");
    var featureString2 = await response.stream.bytesToString();
    List<double> embeddings2 =
        (jsonDecode(featureString2) as List<dynamic>).cast<double>();
The same processing is done for the second image. Next comes the L2 norm calculation.
main.dart
    var distance = Vector.fromList(embeddings1)
        .distanceTo(Vector.fromList(embeddings2), distance: Distance.euclidean);
    setState(() {
      _distance = distance;
    });
  }
This part is very easy thanks to the `ml_linalg` library. Each embedding, a `double` array, is converted to a `Vector` with `Vector.fromList`, and the distance between the two vectors is found with `distanceTo`, specifying the Euclidean distance (L2 norm) as the metric. Finally, the result is stored in the member variable `_distance`, and we're done.
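Since the upload-and-decode step runs twice with identical logic, it could also be factored into a helper. A minimal sketch under the same assumptions as above (same endpoint, JSON-array response); this helper is hypothetical and not part of the original code:

  /// Hypothetical helper: uploads one image and returns its embedding.
  Future<List<double>> _fetchEmbedding(Uint8List imageBytes) async {
    var request = http.MultipartRequest("POST", Uri.parse(yourRemoteUrl));
    request.files.add(http.MultipartFile.fromBytes('file', imageBytes.toList(),
        filename: "upload.jpeg", contentType: MediaType('image', 'jpeg')));
    var response = await request.send();
    var body = await response.stream.bytesToString();
    return (jsonDecode(body) as List<dynamic>).cast<double>();
  }

With it, `uploadFile` would shrink to two `await _fetchEmbedding(...)` calls plus the distance calculation.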
main.dart
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: Text(widget.title),
      ),
      body: Column(
        mainAxisAlignment: MainAxisAlignment.start,
        children: <Widget>[
          Row(
            crossAxisAlignment: CrossAxisAlignment.start,
            children: <Widget>[
              Expanded(
                flex: 1,
                child: Column(
                  mainAxisAlignment: MainAxisAlignment.start,
                  children: <Widget>[
                    RaisedButton(
                      onPressed: () async {
                        var cmpImage = await _readImage();
                        setState(() {
                          _cmpImage1 = cmpImage;
                        });
                      },
                      child: Text("Loading the first image"),
                    ),
                    Text("First image"),
                    Container(
                      child:
                          _cmpImage1 == null ? null : Image.memory(_cmpImage1),
                    ),
                  ],
                ),
              ),
              Expanded(
                flex: 1,
                child: Column(
                  children: <Widget>[
                    RaisedButton(
                      onPressed: () async {
                        var cmpImage = await _readImage();
                        setState(() {
                          _cmpImage2 = cmpImage;
                        });
                      },
                      child: Text("Loading the second image"),
                    ),
                    Text("Second image"),
                    Container(
                      child:
                          _cmpImage2 == null ? null : Image.memory(_cmpImage2),
                    ),
                  ],
                ),
              )
            ],
          ),
          SizedBox(height: 10),
          Text("Face similarity comparison results"),
          Text(_getCompareResultString()),
          Text("The L2 norm is $_distance"),
        ],
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: uploadFile,
        tooltip: 'Image comparison',
        child: Icon(Icons.compare),
      ),
    );
  }
It's a little long but simple. Each of the two load buttons reads in its image, and the loaded `Uint8List` data is displayed on screen with `Image.memory`. The comparison result is rendered as text by `_getCompareResultString()`. The distance calculation between the image features, which calls the web service, is kicked off from the `onPressed` of the `FloatingActionButton`.
As long as the face is clearly visible in the photo, the app judges whether the two images show the same person, which is quite impressive. Recently, some models can apparently recognize faces even when they are masked. Face recognition raises privacy concerns, so it must be used with care, but it would be fun to build a service that makes good use of it.