[Python] Evaluate the facial expressions that appear on the face

You want to be kind to the people you work with. You swallow the "why do I have to do this?" and handle annoying projects without complaining. You finally stretch, lean back at your desk, and think "I'll deal with the rest after lunch", and at exactly that moment a junior colleague walks up with yet another annoying project. Doesn't that feeling show on your face? Does the junior now think you're scary? When people are too afraid to report problems, even more serious incidents follow... I'd like to believe that isn't me, but do I have a scary face at work without realizing it? Or am I actually smiling today? Let's use image recognition to evaluate it objectively.

The face is photographed with the PC's built-in camera or a USB webcam, and each shot is scored for anger, disgust, fear, happiness, sadness, surprise, and neutrality. You can set the shooting interval and the total shooting time yourself, and the results are graphed and saved to an Excel file. Under the hood it uses TensorFlow and a pre-trained facial expression model. I wanted to hand it to people who say "I can't use Python!", but I couldn't turn it into an exe even with pyinstaller. Is it because of heavy packages like TensorFlow and Keras? If anyone knows how to do it, please tell me.
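(A note on the exe problem above: with pyinstaller, the usual pattern is to bundle the model file via --add-data, for example pyinstaller --onefile --add-data "fer2013_mini_XCEPTION.110-0.65.hdf5;." face.py on Windows (use ":" instead of ";" inside the --add-data argument on Mac); the resource_path() function in the script below is written with that in mind. I have not verified that this actually produces a working exe with TensorFlow and Keras included, so please treat it only as a starting point, not a solution.)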

(Image: hyoujyou.jpg)

(This tool was originally designed for veterinarians to evaluate their own expressions while explaining things to patients, which is why the code uses "veterinarian" and "patient" labels. Change the names in the code to suit your own use.)

  1. Create the environment. On Windows, launch the Anaconda Prompt; on Mac, launch Terminal. Type in the commands below and the execution environment will be ready with the required packages installed. To use TensorFlow here you have to drop the Python version: it doesn't work on 3.8, so let's use 3.6. (A quick import check is sketched after the Mac commands.)

Win: At the Anaconda Prompt, run the following in order (when asked proceed ([y]/n)?, type y):

conda create -n face python=3.6.10
activate face
pip install opencv-python
pip install pandas
pip install matplotlib
pip install openpyxl
pip install tensorflow
pip install keras

Mac: In Terminal, copy and paste the following in order, pressing Enter after each line (when asked proceed ([y]/n)?, type y):

conda create -n face python=3.6.10 anaconda
conda activate face
pip install opencv-python
pip install pandas
pip install matplotlib
pip install openpyxl
pip install tensorflow
pip install keras
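Once the environment is created on either OS, you can optionally confirm that the key packages import before moving on. This small check script is my own addition (the file name check_env.py is arbitrary), not part of the original steps; run it with python check_env.py inside the activated face environment.

check_env.py

# optional sanity check that the "face" environment imports everything this tool needs
import cv2
import tensorflow as tf
import keras
import pandas as pd
import matplotlib

print("opencv    :", cv2.__version__)
print("tensorflow:", tf.__version__)
print("keras     :", keras.__version__)
print("pandas    :", pd.__version__)
print("matplotlib:", matplotlib.__version__)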

  2. Use the facial expression recognition model created by Octavio Arriaga, which is published on GitHub. I'm grateful that something this good is available on GitHub for free. Download the zip, take the file called fer2013_mini_XCEPTION.110-0.65.hdf5 out of it, and put it in the folder you will use from now on. https://github.com/oarriaga/face_classification/tree/master/trained_models/emotion_models
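If you want to confirm the downloaded file is usable before wiring it into the GUI, a quick check like the one below should do it. This script (and the file name verify_model.py) is my own addition; the expected shapes are inferred from how the main script below uses the model, namely 64x64 grayscale input and seven emotion outputs. Put it in the same folder as the hdf5 file.

verify_model.py

# quick sanity check of the downloaded emotion model
from keras.models import load_model
import numpy as np

model = load_model("fer2013_mini_XCEPTION.110-0.65.hdf5", compile=False)
print(model.input_shape)   # expected: (None, 64, 64, 1)
print(model.output_shape)  # expected: (None, 7), one score per emotion

dummy = np.random.rand(1, 64, 64, 1)  # a random 64x64 grayscale "image", values in 0-1
print(model.predict(dummy)[0])        # seven scores, roughly summing to 1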

  3. Create the following Python file in the same folder as the hdf5 file.

face.py



from tkinter import *
from tkinter import ttk
import tkinter
import cv2
import os
import sys  # used by resource_path() below
import time
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
import numpy as np
from keras.models import load_model
from keras.preprocessing import image
import glob
import openpyxl
import datetime
from tkinter import filedialog


os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2' # suppress TensorFlow INFO/WARNING messages (e.g. CPU instruction notices)

def _destroyWindow():  # close the tkinter result window and quit the application
    root.quit()
    root.destroy()
    
def askfileplace():  # ask which directory to use for saving the images and the Excel file
    cd = filedialog.askdirectory()
    path.set(cd)

def resource_path(relative_path):   # resolve a path both when run as a script and when frozen by pyinstaller
    try:
        base_path = sys._MEIPASS
    except Exception:
        base_path = os.path.dirname(__file__)
    return os.path.join(base_path, relative_path)
    
# main function, called when the "Start analysis" button is pressed
def save_frame_camera_key(device_num,interval1,duration1,vetname1,name1,path1):

    os.makedirs("{}//{}_{}".format(path1,vetname1,name1)) # create a folder for the photos and results
    cap = cv2.VideoCapture(device_num)                    # create a VideoCapture object

    if not cap.isOpened():  # check if it has been opened
        return

    n = 0 # frame counter
    while n <= int(duration1)/int(interval1) :  # shoot until the requested duration is covered

        ret, frame = cap.read() # read one frame; returns (success flag, image)
        cv2.imwrite("{}//{}_{}//{}.jpg".format(path1,vetname1,name1,n), frame) # save the picture
        time.sleep(int(interval1)) # wait for the interval
        n += 1 # increment

    cap.release() # release the camera once shooting is done

    image_paths = glob.glob("{}//{}_{}//*.jpg".format(path1,vetname1,name1)) # collect the jpg files in the folder
    
    # load the facial expression recognition model (hdf5) once, before the loop
    model_path = resource_path(".//fer2013_mini_XCEPTION.110-0.65.hdf5")
    emotions_XCEPTION = load_model(model_path, compile=False)

    result=[] # list to collect the prediction for each image

    for image_path in image_paths: # process each jpg file
        img = image.load_img(image_path, color_mode="grayscale" , target_size=(64, 64)) # load as 64x64 grayscale
        img_array = image.img_to_array(img) # convert it to an array
        pImg = np.expand_dims(img_array, axis=0) / 255 # scale pixel values to between 0 and 1
        prediction = emotions_XCEPTION.predict(pImg)[0] # predict the emotion scores for the image
        result.append(prediction) # append the prediction to the result list
    
    df = pd.DataFrame(result) # convert the results to a DataFrame
    df.columns=['angry','disgust','fear','happy','sad','surprise','neutral'] # name the columns
    df = pd.concat([df,pd.DataFrame(df.mean(axis=0),columns=['Mean']).T]) # add a mean row at the bottom
    df.to_excel("{}//{}_{}//result.xlsx".format(path1,vetname1,name1), sheet_name='result') # save an Excel file
    
    # create a donut chart of the mean scores using matplotlib
    fig = plt.figure() # create the figure object
    ax=fig.add_subplot(1,1,1) # add a single subplot
    mycolor = ["#ff0033", "#cc99ff", "#ffff66", "#ff9999", "#6699ff", "#ff9966", "#99ffcc"] # one color per emotion
    ax.pie(df.tail(1).values[0],labels=df.columns,counterclock=True, startangle=90, autopct="%1.1f%%",colors=mycolor) # pie chart of the mean row
    centre_circle = plt.Circle((0,0),0.7,color='white', fc='white',linewidth=1.25) # white circle that turns the pie into a donut
    fig = plt.gcf() # get the current figure
    fig.gca().add_artist(centre_circle) # overlay the white circle on the pie chart

    # Create new window by Tkinter Class
    root = tkinter.Tk()
    root.title("Facial expression analysis result")
    root.withdraw()
    root.protocol('WM_DELETE_WINDOW', _destroyWindow)  # when the result window is closed, quit the application

    # create a canvas and embed the figure in it
    canvas = FigureCanvasTkAgg(fig, master=root)  # generate the canvas instance, embedding fig in the result window
    canvas.draw()
    canvas.get_tk_widget().pack() #canvas._tkcanvas.pack()
    root.update()
    root.deiconify()  

# create interface
root = Tk()
root.title('Facial expression evaluation tool')
root.resizable(True, True)

# create the frame and labels
frame1 = ttk.Frame(root, padding=(32))
frame1.grid()

label1 = ttk.Label(frame1, text='Shooting interval(Seconds)', padding=(5, 2))
label1.grid(row=0, column=0, sticky=E)

label2 = ttk.Label(frame1, text='Shooting time(Seconds)', padding=(5, 2))
label2.grid(row=1, column=0, sticky=E)

label3 = ttk.Label(frame1, text='Veterinarian name (alphanumeric characters only)', padding=(5, 2))
label3.grid(row=2, column=0, sticky=E)

label4 = ttk.Label(frame1, text='Patient name (alphanumeric characters only)', padding=(5, 2))
label4.grid(row=3, column=0, sticky=E)

label5 = ttk.Label(frame1, text='Where to save photos and results', padding=(5, 2))
label5.grid(row=4, column=0, sticky=E)

# create textboxes
interval = StringVar()
interval_entry = ttk.Entry(
    frame1,
    textvariable=interval,
    width=20)
interval_entry.insert(0,"2")
interval_entry.grid(row=0, column=1)

duration = StringVar()
duration_entry = ttk.Entry(
    frame1,
    textvariable=duration,
    width=20)
duration_entry.insert(0,"10")
duration_entry.grid(row=1, column=1)

vetname = StringVar()
vetname_entry = ttk.Entry(
    frame1,
    textvariable=vetname,
    width=20)
vetname_entry.insert(0,"vet")
vetname_entry.grid(row=2, column=1)

name = StringVar()
name_entry = ttk.Entry(
    frame1,
    textvariable=name,
    width=20)
name_entry.insert(0,datetime.datetime.now().strftime("%Y%m%d_%H%M")) # default: current date and time
name_entry.grid(row=3, column=1)

path =StringVar()
path_entry = ttk.Entry(
    frame1,
    textvariable=path,
    width=20)
path_entry.grid(row=4, column=1)

path_button = ttk.Button(frame1,text="Folder selection",command= lambda : [askfileplace()] )
path_button.grid(row=4, column=2)

# create buttons
frame2 = ttk.Frame(frame1, padding=(0, 5))
frame2.grid(row=5, column=1, sticky=W)

button1 = ttk.Button(
    frame2, text='Start analysis',
    command= lambda : [save_frame_camera_key(0,interval.get(),duration.get(),vetname.get(),name.get(),path.get())]) # call the main function defined above
button1.pack(side=LEFT)

button2 = ttk.Button(frame2, text='End', command=quit)
button2.pack(side=LEFT)


root.mainloop()
  4. At the Anaconda Prompt (Win) or Terminal (Mac), run the script: type python followed by the path to face.py. To get the path, on Windows Shift + right-click the file and copy it; on Mac right-click while holding Option. (Win input example: python "C:\Users\Yamada\Desktop\FaceEmotionAnalysis\face.py") (Mac input example: python /Users/Sato/Desktop/FaceEmotionAnalysis/face.py)
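After a run, the folder <save location>/<veterinarian name>_<patient name> contains the captured jpg files and result.xlsx, and the donut chart opens in its own window. If you want to re-load the saved scores later, for example to compare runs, something like the following works; this snippet is my own addition and only relies on the sheet name 'result' and the 'Mean' row that the script writes (depending on your pandas version the engine argument may be unnecessary):

import pandas as pd

df = pd.read_excel("result.xlsx", sheet_name="result", index_col=0, engine="openpyxl")
print(df.loc["Mean"])  # mean score for each of the seven emotions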

If you like, please also take a look at my website. Other examples (such as a cat activity meter AI) are posted there too. https://meknowledge.jpn.org/
