A memo from trying out Google Cloud Vision at Kansai Home Hack Try 2016 vol.1.
The environment is a Mac, working from the terminal.
This basically follows Google's Getting Started guide: https://cloud.google.com/vision/docs/getting-started
First, enable the Cloud Vision API in the GCP console.
Then follow Credentials > Create credentials > Service account key > JSON > Create, and save the JSON file that gets downloaded.
Next, define an environment variable from the terminal: `export GOOGLE_APPLICATION_CREDENTIALS='/xxx/xxx/{the JSON file from earlier}.json'`
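The Google client libraries read that variable themselves, but it is easy to forget to set it in a new terminal. As a quick sanity check (my own helper, not part of the official tooling):

```python
import os

def credentials_path():
    """Return the service-account key path that the Google client
    libraries will pick up, or None if the variable is unset."""
    return os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
```

If this returns None, the API call will fail with an authentication error.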
Then clone the source from GitHub: https://github.com/GoogleCloudPlatform/cloud-vision
Move into python/face_detection inside the cloned directory and run faces.py there, specifying the image file you want to analyze as the argument.
`python faces.py face-input.jpg`
If out.jpg is output in the same directory, it worked.
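Under the hood, faces.py base64-encodes the image and posts it to the images:annotate endpoint. A minimal sketch of assembling that request body, following the request shape in the public API docs (the function name is my own):

```python
import base64

def build_face_request(image_path, max_results=4):
    """Build the JSON body for a v1 images:annotate call requesting
    FACE_DETECTION on a local image file."""
    with open(image_path, "rb") as f:
        # The API expects the raw image bytes base64-encoded as a string.
        content = base64.b64encode(f.read()).decode("ascii")
    return {
        "requests": [{
            "image": {"content": content},
            "features": [{"type": "FACE_DETECTION",
                          "maxResults": max_results}],
        }]
    }
```

faces.py then sends this with authenticated HTTP and draws boxes on the image using the coordinates in the response.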
See here for information that can be obtained → https://cloud.google.com/vision/reference/rest/v1/images/annotate#FaceAnnotation
Not only facial expressions but also the positions of facial landmarks such as the mouth can be obtained as x/y/z coordinates, and it handles multiple faces in one image. Since it's just an API call, I want to experiment with it more. It's free for up to 1,000 units per month, so if you only use face detection you can run it 1,000 times.
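As a sketch of digging into one face from the response, assuming a FaceAnnotation dict already parsed from JSON (the field names joyLikelihood, landmarks, and MOUTH_CENTER come from the reference linked above; the sample values below are made up):

```python
def summarize_face(annotation):
    """Pull the joy likelihood and mouth-center position out of one
    FaceAnnotation dict; returns None for fields that are missing."""
    landmarks = {lm["type"]: lm["position"]
                 for lm in annotation.get("landmarks", [])}
    return {
        "joy": annotation.get("joyLikelihood"),
        "mouth_center": landmarks.get("MOUTH_CENTER"),
    }

# Made-up sample in the documented response shape.
sample = {
    "joyLikelihood": "VERY_LIKELY",
    "landmarks": [
        {"type": "MOUTH_CENTER",
         "position": {"x": 120.5, "y": 210.0, "z": -0.5}},
    ],
}
# summarize_face(sample)["joy"] → "VERY_LIKELY"
```

Each landmark position carries a z value as well, which is how the API expresses the 3D orientation of the face.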