Microsoft Cognitive Services offers a free trial, so I decided to try it right away.
- Use Microsoft Cognitive Services
- Register to get a Face API subscription key
- Use Python 3.x so it also runs on a Raspberry Pi
- Use a photo taken in advance
I wrapped the API call in a function and called it. Set `keyFaceapi` to the subscription key you registered.
```python
# -*- coding: utf-8 -*-
# ----------------------------------------------------------------------
# Initial settings
# ----------------------------------------------------------------------
# Library import
import requests

# Face API settings
imgFaceapi = 'faceimage.jpg'
urlFaceapi = 'https://api.projectoxford.ai/face/v1.0/detect'
keyFaceapi = 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
retFaceapi = ('age,gender,headPose,smile,facialHair,glasses,emotion,'
              'hair,makeup,occlusion,accessories,blur,exposure,noise')

# ----------------------------------------------------------------------
# Image analysis (Face API) for Python 3.x
# ----------------------------------------------------------------------
def useFaceapi(url, key, ret, image):
    # Server query
    headers = {
        'Content-Type': 'application/octet-stream',
        'Ocp-Apim-Subscription-Key': key,
        'cache-control': 'no-cache',
    }
    params = {
        'returnFaceId': 'true',
        'returnFaceLandmarks': 'false',
        'returnFaceAttributes': ret,
    }
    with open(image, 'rb') as f:
        data = f.read()
    try:
        jsnResponse = requests.post(url, headers=headers, params=params, data=data)
        if jsnResponse.status_code != 200:
            jsnResponse = []
        else:
            jsnResponse = jsnResponse.json()
    except requests.exceptions.RequestException:
        jsnResponse = []
    # Return value
    return jsnResponse

# ----------------------------------------------------------------------
# Analysis execution
# ----------------------------------------------------------------------
# Use the Face API
resFaceapi = useFaceapi(urlFaceapi, keyFaceapi, retFaceapi, imgFaceapi)
print(resFaceapi)
```
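Note that `useFaceapi` returns an empty list both when the request fails and when the status is not 200, so it is worth checking the result before indexing into it. A minimal sketch (the empty list here stands in for a failed call):

```python
# useFaceapi returns [] on a request error or non-200 status,
# so guard against an empty result before using it.
resFaceapi = []  # stand-in for a failed or face-less response
if not resFaceapi:
    print('No face detected (or the request failed)')
else:
    print('Detected', len(resFaceapi), 'face(s)')
```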
Put a face photo named `faceimage.jpg` in the same folder as the script (here `pyFaceapi.py`) and run it:

```shell
python3 pyFaceapi.py
```
The analysis result is shown below:

```python
[
    {
        'faceAttributes': {
            'blur': {'value': 0.29, 'blurLevel': 'medium'},
            'smile': 0.0,
            'headPose': {'roll': -2.5, 'pitch': 0.0, 'yaw': -15.3},
            'hair': {
                'invisible': False,
                'hairColor': [
                    {'color': 'black', 'confidence': 1.0},
                    {'color': 'brown', 'confidence': 0.98},
                    {'color': 'other', 'confidence': 0.17},
                    {'color': 'red', 'confidence': 0.12},
                    {'color': 'gray', 'confidence': 0.05},
                    {'color': 'blond', 'confidence': 0.03}
                ],
                'bald': 0.01
            },
            'age': 31.7,
            'emotion': {
                'anger': 0.002, 'surprise': 0.0, 'contempt': 0.049,
                'neutral': 0.853, 'disgust': 0.002, 'happiness': 0.0,
                'sadness': 0.094, 'fear': 0.0
            },
            'gender': 'male',
            'occlusion': {
                'eyeOccluded': False,
                'foreheadOccluded': False,
                'mouthOccluded': False
            },
            'noise': {'value': 0.11, 'noiseLevel': 'low'},
            'facialHair': {'beard': 0.0, 'moustache': 0.0, 'sideburns': 0.1},
            'exposure': {'value': 0.43, 'exposureLevel': 'goodExposure'},
            'makeup': {'lipMakeup': True, 'eyeMakeup': False},
            'glasses': 'NoGlasses',
            'accessories': []
        },
        'faceId': 'dafdf8f1-c910-45ee-aef3-2247b446ea1d',
        'faceRectangle': {'height': 98, 'width': 98, 'top': 248, 'left': 380}
    }
]
```
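Since the API returns a list with one dict per detected face, individual attributes are easy to pull out. A minimal sketch, where `result` stands in for the value returned by `useFaceapi` (the sample values are taken from the output above):

```python
# Sketch: extracting attributes from a Face API response.
# 'result' mimics the list-of-faces structure shown above.
result = [
    {
        'faceId': 'dafdf8f1-c910-45ee-aef3-2247b446ea1d',
        'faceAttributes': {'age': 31.7, 'gender': 'male', 'glasses': 'NoGlasses'},
        'faceRectangle': {'height': 98, 'width': 98, 'top': 248, 'left': 380},
    }
]

for face in result:
    attrs = face['faceAttributes']
    print('age:', attrs['age'])          # → age: 31.7
    print('gender:', attrs['gender'])    # → gender: male
    print('glasses:', attrs['glasses'])  # → glasses: NoGlasses
```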
Checking the results:

- Gender is male: correct!
- No glasses (NoGlasses): correctly judged!
- Age was estimated as 31.7, but I'm actually in my late 30s...
- lipMakeup came back True... I don't wear lip makeup!!
I also tried it with photos of other people, and ages tended to be estimated on the young side overall. If a photo shows two or more people, each face is analyzed separately, so you can also count the number of people.
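Counting people follows directly from the response shape: one dict per detected face, so the head count is just the list length. A minimal sketch (the two-element list stands in for the response to a two-person photo):

```python
# Sketch: the Face API returns one dict per detected face,
# so the number of people is the length of the list.
resFaceapi = [{'faceId': 'face-1'}, {'faceId': 'face-2'}]  # stand-in for a two-person photo
numFaces = len(resFaceapi)
print('Faces detected:', numFaces)  # → Faces detected: 2
```

This also behaves sensibly on failure, since `useFaceapi` returns `[]` and `len([])` is 0.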
The Raspberry Pi is fun! It feels like electronics tinkering for grown-ups (laughs). Besides the Face API introduced here, many other APIs are publicly available (from Google, IBM, and others besides Microsoft), so I'd like to try those too!