This article is the day-16 entry of the Python Advent Calendar 2016.
Hello. Are you writing Python? Recognizing faces? Going serverless?
This article is for anyone who wants to stand up a face recognition API server in Python with minimal effort. I had considered writing something more academic for the Advent Calendar, but the timing was right for this topic, so I went with it instead.
Let me say this up front: this time I am leaning entirely on AWS.
You could do this easily enough with OpenCV, but here I will make full use of AWS instead. Specifically, I will use "Amazon Rekognition", the service announced at this year's re:Invent.
In a word (and roughly speaking), it is a service that can detect and search objects and faces with high accuracy.
For more information, please visit https://aws.amazon.com/jp/rekognition/.
The pricing page is below; note that it is pay-per-use, so the more you call it, the more it costs.
https://aws.amazon.com/jp/rekognition/pricing/
For serverless on AWS, the architecture is "API Gateway + Lambda".
This time I will manage both of them with "Chalice", a Python framework from AWS Labs.
https://github.com/awslabs/chalice
Try "hello world" for the time being using the chalice command.
$ pip install chalice
# Use any project name you like in place of "freko"
$ chalice new-project freko && cd freko
$ cat app.py
from chalice import Chalice

app = Chalice(app_name="helloworld")

@app.route("/")
def index():
    return {"hello": "world"}
$ chalice deploy
...
Your application is available at: https://endpoint/dev
$ curl https://endpoint/dev
{"hello": "world"}
Once you've done that, all you have to do is hit the S3 and Rekognition APIs.
Many of you have probably set this up already, but first configure your credentials with `aws-cli`.
In this sample I chose `eu-west-1` as the region; any region that Rekognition supports should work.
$ pip install awscli
$ aws configure
https://github.com/aws/aws-cli
Also install boto3, the AWS SDK for Python.
$ pip install boto3
https://github.com/boto/boto3
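boto3 picks up the credentials and default region that `aws configure` wrote to ~/.aws/credentials and ~/.aws/config, so no extra setup is needed. If you prefer to pin the region in code, you can also pass it explicitly when creating a client; this is just a small sketch, not something the code below relies on:
import boto3

# boto3 normally reads the credentials and default region written by `aws configure`.
# Passing region_name explicitly overrides the configured default.
rekognition = boto3.client('rekognition', region_name='eu-west-1')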
Doing this through the console GUI is tedious, so I will do everything through the API.
import boto3
import botocore.exceptions
from chalice import Chalice, ChaliceViewError

app = Chalice(app_name='freko')

REGION = 'eu-west-1'
BUCKET = 'freko-default'

S3 = boto3.resource('s3')

# Create the bucket in S3 if the specified bucket name does not exist
def create_s3_bucket_if_not_exists():
    exists = True
    try:
        S3.meta.client.head_bucket(Bucket=BUCKET)
    except botocore.exceptions.ClientError as ex:
        error_code = int(ex.response['Error']['Code'])
        if error_code == 404:
            exists = False
    if exists:
        return
    try:
        S3.create_bucket(Bucket=BUCKET, CreateBucketConfiguration={
            'LocationConstraint': REGION})
    except Exception as ex:
        raise ChaliceViewError("fail to create bucket s3. error = " + str(ex))
# Upload a file to the S3 bucket
def upload_file_s3_bucket(obj_name, image_file_name):
    try:
        s3_object = S3.Object(BUCKET, obj_name)
        s3_object.upload_file(image_file_name)
    except Exception as ex:
        raise ChaliceViewError("fail to upload file s3. error = " + str(ex))
REKOGNITION = boto3.client('rekognition')

# Run face detection on a file in the S3 bucket
def detect_faces(name):
    try:
        response = REKOGNITION.detect_faces(
            Image={
                'S3Object': {
                    'Bucket': BUCKET,
                    'Name': name,
                }
            },
            Attributes=[
                'DEFAULT',
            ]
        )
        return response
    except Exception as ex:
        raise ChaliceViewError("fail to detect faces. error = " + str(ex))
You can get more information by specifying "ALL" for Attributes.
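For instance, a variant that asks for everything might look like the sketch below. The name detect_faces_all is my own, for illustration, and the fields mentioned in the comment are examples of what "ALL" adds, not an exhaustive list:
# Hypothetical variant of detect_faces() that requests all face attributes.
def detect_faces_all(name):
    response = REKOGNITION.detect_faces(
        Image={'S3Object': {'Bucket': BUCKET, 'Name': name}},
        # 'ALL' adds fields such as Emotions, Gender, AgeRange, Eyeglasses, ...
        Attributes=['ALL'],
    )
    return response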
At deploy time, Chalice scans the API calls in your code and generates the IAM policy automatically, but since it is still a preview release it apparently does not cover every API yet. It failed to pick up `upload_file` and the other calls used to upload the file to S3.
So add S3 and Rekognition to the Statement in policy.json. (Since this is just for testing, I grant full access.)
$ vim .chalice/policy.json
"Statement": [
{
"Action": [
"s3:*"
],
"Resource": "*",
"Effect": "Allow"
},
{
"Action": [
"rekognition:*"
],
"Resource": "*",
"Effect": "Allow"
}
...
]
The command to deploy without auto-generating the policy is the one below; I register it in a simple Makefile so I don't have to remember it (a minimal example follows the command).
$ chalice deploy --no-autogen-policy
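That Makefile can be as small as this; the target name deploy is just my own choice:
deploy:
	chalice deploy --no-autogen-policy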
It worries me that anyone could hit the endpoint as much as they like, so for now I add API key authentication and request throttling.
For more information, please visit http://docs.aws.amazon.com/ja_jp/apigateway/latest/developerguide/how-to-api-keys.html.
In Chalice, setting `api_key_required=True` on a route gives you an API that requires API key authentication.
@app.route('/face', methods=['POST'], content_types=['application/json'], api_key_required=True)
I will use the Lenna image for the test.
The API parameters are name (the file name) and base64 (the base64-encoded image string); a minimal sketch of the handler follows.
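To tie the pieces together, this is roughly how I would write the /face handler. The route, the api_key_required flag, and the name/base64 request fields come from the article, but the body below is my own sketch, not a copy of the code in the repo:
import base64
import os

@app.route('/face', methods=['POST'], content_types=['application/json'], api_key_required=True)
def face():
    body = app.current_request.json_body
    name = body['name']
    image = base64.b64decode(body['base64'])

    # Write the decoded image to Lambda's writable /tmp area, then push it to S3
    tmp_path = os.path.join('/tmp', name)
    with open(tmp_path, 'wb') as f:
        f.write(image)

    create_s3_bucket_if_not_exists()
    upload_file_s3_bucket(name, tmp_path)

    response = detect_faces(name)
    return {
        'exists': len(response['FaceDetails']) > 0,
        'response': response,
    }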
Set the API key and URL to your own values.
$ (echo -n '{"name":"lenna.jpg ", "base64": "'; base64 lenna.jpg; echo '"}') | curl -H "x-api-key:your-api-key" -H "Content-Type:application/json" -d @- https://your-place.execute-api.eu-west-1.amazonaws.com/dev/face | jq
If you just want to determine whether the image contains a face, it is enough to check that FaceDetails is non-empty and that the Confidence value is high (a minimal check is sketched after the example responses).
{
  "exists": true,
  "response": {
    "FaceDetails": [
      {
        "BoundingBox": {
          "Width": 0.4585798680782318,
          "Top": 0.3210059106349945,
          "Left": 0.34467455744743347,
          "Height": 0.4585798680782318
        },
        "Landmarks": [
          {
            "Y": 0.501218318939209,
            "X": 0.5236561894416809,
            "Type": "eyeLeft"
          },
          {
            "Y": 0.50351482629776,
            "X": 0.6624458432197571,
            "Type": "eyeRight"
          },
          {
            "Y": 0.5982820391654968,
            "X": 0.6305037140846252,
            "Type": "nose"
          },
          {
            "Y": 0.6746630072593689,
            "X": 0.521257758140564,
            "Type": "mouthLeft"
          },
          {
            "Y": 0.6727028489112854,
            "X": 0.6275562644004822,
            "Type": "mouthRight"
          }
        ],
        "Pose": {
          "Yaw": 30.472450256347656,
          "Roll": -1.429526448249817,
          "Pitch": -5.346992015838623
        },
        "Quality": {
          "Sharpness": 160,
          "Brightness": 36.45581817626953
        },
        "Confidence": 99.94509887695312
      }
    ],
    "ResponseMetadata": {
      ...
    },
    "OrientationCorrection": "ROTATE_0"
  }
}
By the way, if the face is not recognized, the response will be as follows.
{
  "exists": false,
  "response": {
    "FaceDetails": [],
    "ResponseMetadata": {
      ...
    }
  }
}
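As mentioned above, deciding "does this image contain a face" only needs FaceDetails and Confidence. A rough sketch; the 90.0 threshold is an arbitrary value I picked, not something from the article:
# Decide whether the Rekognition response contains at least one confident face.
def face_exists(response, min_confidence=90.0):
    faces = response.get('FaceDetails', [])
    return any(face['Confidence'] >= min_confidence for face in faces)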
This ended up reading more like a how-to for the AWS SDK... Undeterred, next time I would like to try something cooler, such as triggering Rekognition automatically when an object is put into S3.
The full code is below; I hope it is useful as a reference.
https://github.com/gotokatsuya/freko