Try to detect the Fusion movement using AnyMotion

Overview

I tried to detect the movement of the Fusion that everyone knows. AnyMotion is used to calculate angles from the joint coordinate information and to determine whether a posture looks like the Fusion.

What is AnyMotion

AnyMotion is a motion analysis API platform service based on AI-powered posture estimation. It is currently in trial, so apparently it can be used for free.

By estimating the skeletal coordinates of a person in an image or video and calculating the angles of specified body parts from them, it can visualize and quantify body movement.

A CLI, a Python SDK, and [Jupyter Notebook examples](https://github.com/nttpc/anymotion-examples) are available on GitHub.

Goal

Fusion is originally performed by two warriors making symmetrical movements at the same timing, but AnyMotion currently has the restriction that it cannot estimate the posture of two or more people at the same time. Therefore, I analyze the movements one person at a time.

Fusion has three stages of movement, matching the shout.

fusion-right-all2.gif

  1. "Fu-": swinging the arms around while leaning toward each other
  2. "-sion": twisting the arms outward and the legs inward
  3. "Ha!": bending the upper body inward and bringing the fingertips together

This time, for simplicity, I define only postures 2 and 3, where the movement stops for a moment, and check whether the video contains movements that match both. I don't mind if small details, such as finger height, are slightly off.

The motion analysis above is performed separately for the person on the left side of the Fusion and the person on the right side, and if both of them perform these movements, regardless of timing, I declare the Fusion successful!

Definition of the Fusion posture

The table below summarizes the body angles (in degrees) that are treated as the Fusion posture.

- Phase 1: "-sion", twisting the arms outward and the legs inward
- Phase 2: "Ha!", bending the upper body inward and aligning the fingertips

| Left person's part (right person's part) | Phase 1 "-sion" | Phase 2 "Ha!" |
|---|---|---|
| Left shoulder (right shoulder) | 10〜90 | 130〜180 |
| Left elbow (right elbow) | 90〜180 | 40〜130 |
| Right shoulder (left shoulder) | 120〜200 | 50〜150 |
| Right elbow (left elbow) | 150〜200 | 100〜170 |
| Left knee (right knee) | 10〜80 | - |
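
For reference, these ranges can also be written down as a small data structure. The detection code later in this article hardcodes them instead, so the following is only a sketch of an alternative representation (the keys and layout are my own, not part of the article's code):

# Angle thresholds (in degrees) per phase, for the person standing on the left.
# For the person standing on the right, swap the left and right parts.
FUSION_RANGES_LEFT = {
    "phase1": {  # "-sion"
        "leftShoulder": (10, 90),
        "leftElbow": (90, 180),
        "rightShoulder": (120, 200),
        "rightElbow": (150, 200),
        "leftKnee": (10, 80),
    },
    "phase2": {  # "Ha!"
        "leftShoulder": (130, 180),
        "leftElbow": (40, 130),
        "rightShoulder": (50, 150),
        "rightElbow": (100, 170),
    },
}

def in_range(value, bounds):
    # None means the keypoint (and therefore the angle) could not be estimated
    return value is not None and bounds[0] <= value <= bounds[1]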

Notes

- By its nature, the skeleton coordinates may fail to be estimated or may be inaccurate depending on the shooting environment.
- Estimating the skeleton coordinates takes a few minutes (the time grows with the length and size of the video).
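
Because of the first point, the angle series returned by the analysis can contain None for frames where a keypoint was not found. The snippet below is a quick way to check how many frames are affected; the angles list is made up, shaped like the analysis results that appear later in this article:

# 'angles' mimics the later result format: one list of per-frame values per analyzed part.
angles = [
    [45, None, 60],    # hypothetical left-shoulder angles per frame
    [120, 130, None],  # hypothetical left-elbow angles per frame
]
for part_index, series in enumerate(angles):
    missing = sum(1 for v in series if v is None)
    print(f"part {part_index}: {missing}/{len(series)} frames without an angle")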

I wrote the code using the Python SDK

Version used

Advance preparation

- Issue an AnyMotion API token: click "Sign Up / Sign In" at the top right of the AnyMotion page to go to the portal screen, register as a user, and obtain a Client ID and Client Secret.
- Install anymotion-sdk:

  $ pip install anymotion-sdk

Video file upload to skeleton extraction

from anymotion_sdk import Client
from PIL import Image, ImageDraw
import cv2
import matplotlib.pyplot as plt
import ffmpeg
import numpy as np

# Prepare the AnyMotion API client
client = Client(client_id="CLIENT_ID",
                client_secret="CLIENT_SECRET")

# Upload the video (left side)
left_filename = "fusion_left.mp4"
left_movie_id = client.upload(left_filename).movie_id
print(f"movie_id: {left_movie_id}")

# Skeleton extraction (keypoint extraction, left side)
left_keypoint_id = client.extract_keypoint(movie_id=left_movie_id)
left_extraction_result = client.wait_for_extraction(left_keypoint_id)
print(f"keypoint_id: {left_keypoint_id}")

# Upload the video (right side)
right_filename = "fusion_right.mp4"
right_movie_id = client.upload(right_filename).movie_id
print(f"movie_id: {right_movie_id}")

# Skeleton extraction (keypoint extraction, right side)
right_keypoint_id = client.extract_keypoint(movie_id=right_movie_id)
right_extraction_result = client.wait_for_extraction(right_keypoint_id)
print(f"keypoint_id: {right_keypoint_id}")

Get the angles

Specify the body parts whose angles you want to calculate. How to specify them is described in the official documentation.

# Definition of the angle analysis rules
analyze_angles_rule = [
    # left arm
    {
        "analysisType": "vectorAngle",
        "points": ["rightShoulder", "leftShoulder", "leftElbow"]
    },
    {
        "analysisType": "vectorAngle",
        "points": ["leftShoulder", "leftElbow", "leftWrist"]
    },
    # right arm
    {
        "analysisType": "vectorAngle",
        "points": ["leftShoulder", "rightShoulder", "rightElbow"]
    },
    {
        "analysisType": "vectorAngle",
        "points": ["rightShoulder", "rightElbow", "rightWrist"]
    },
    # left leg
    {
        "analysisType": "vectorAngle",
        "points": ["rightHip", "leftHip", "leftKnee"]
    },
    # right leg
    {
        "analysisType": "vectorAngle",
        "points": ["leftHip", "rightHip", "rightKnee"]
    },
]
# Start the angle analysis (left side)
left_analysis_id = client.analyze_keypoint(left_keypoint_id, rule=analyze_angles_rule)
# Get the angle information
left_analysis_result = client.wait_for_analysis(left_analysis_id).json
# Convert the dict results to lists (and the values from float to int)
left_angles = [list(map(lambda v: int(v) if v is not None else None, x["values"])) for x in left_analysis_result["result"]]
print("angles analyzed.")

# Start the angle analysis (right side)
right_analysis_id = client.analyze_keypoint(right_keypoint_id, rule=analyze_angles_rule)
right_analysis_result = client.wait_for_analysis(right_analysis_id).json
right_angles = [list(map(lambda v: int(v) if v is not None else None, x["values"])) for x in right_analysis_result["result"]]
print("angles analyzed.")

Fusion detection

def is_fusion_phase1(pos, a, b, c, d, e, f):
    # pos: "left" or "right" (which side the person stands on)
    # a-f: angles of the left shoulder, left elbow, right shoulder, right elbow,
    #      left leg and right leg, in the order of analyze_angles_rule
    if pos == "left":  # the person standing on the left
        if e is None:
            e = 70  # fall back when the leg angle could not be estimated
        return (a in range(10, 90) and
                b in range(90, 180) and
                c in range(120, 200) and
                d in range(150, 200) and
                e in range(10, 80))
    else:  # the person standing on the right
        if f is None:
            f = 70  # fall back when the leg angle could not be estimated
        return (c in range(10, 90) and
                d in range(90, 180) and
                a in range(120, 200) and
                b in range(150, 200) and
                f in range(10, 80))

def is_fusion_phase2(pos, a, b, c, d, e, f):
    # pos: "left" or "right" (which side the person stands on)
    # the leg angles e and f are not used in this phase
    if pos == "left":  # the person standing on the left
        return (a in range(130, 180) and
                b in range(40, 130) and
                c in range(50, 150) and
                d in range(100, 170))
    else:  # the person standing on the right
        return (c in range(130, 180) and
                d in range(40, 130) and
                a in range(50, 150) and
                b in range(100, 170))
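
# Quick sanity check with made-up angle values (in degrees), only to illustrate
# how the two checks are called; these numbers are not measured from a video:
print(is_fusion_phase1("left", 45, 120, 160, 170, 50, None))    # expected: True
print(is_fusion_phase2("left", 150, 90, 100, 140, None, None))  # expected: True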

def check_fusion(angles, position):
    """
        angles:Angle information
        position: left or right
    """
    #A flag that stores whether each step was detected
    phase1 = False
    phase2 = False
    #List to store the corresponding frame
    p1 = []
    p2 = []
    for i in range(len(angles[0])):
        if is_fusion_phase1(position, angles[0][i], angles[1][i], angles[2][i], angles[3][i],
                            angles[4][i], angles[5][i]):
            print(i, "Phase1!!!")
            phase1 = True
            p1.append(i)
        elif phase1 and is_fusion_phase2(position, angles[0][i], angles[1][i], angles[2][i], angles[3][i],
                              angles[4][i], angles[5][i]):
            print(i, "Phase2!!!")
            phase2 = True
            p2.append(i)

    if phase1 and phase2:
        print("Fusion!!!!!!")
        
    return ((phase1 and phase2), p1, p2)

left_result, left_p1, left_p2 = check_fusion(left_angles, "left")
right_result, right_p1, right_p2 = check_fusion(right_angles, "right")
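
Putting the two results together (a small addition of mine to make the final verdict explicit; the variable names come from the code above):

# The Fusion counts as successful only when both dancers matched phase 1 and
# phase 2 at some point; the timing of the two sides is deliberately not compared.
if left_result and right_result:
    print("Fusion!!!!!! (both sides matched)")
    print(f"left  - phase1 frames: {left_p1}, phase2 frames: {left_p2}")
    print(f"right - phase1 frames: {right_p1}, phase2 frames: {right_p2}")
else:
    print("No Fusion this time...")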

Generate a GIF animation from the detected Fusion frames

# Check the rotation (orientation) metadata of the video
def check_rotation(path_video_file):
    meta_dict = ffmpeg.probe(path_video_file)

    rotateCode = None
    try:
        if int(meta_dict['streams'][0]['tags']['rotate']) == 90:
            rotateCode = cv2.ROTATE_90_CLOCKWISE
        elif int(meta_dict['streams'][0]['tags']['rotate']) == 180:
            rotateCode = cv2.ROTATE_180
        elif int(meta_dict['streams'][0]['tags']['rotate']) == 270:
            rotateCode = cv2.ROTATE_90_COUNTERCLOCKWISE
    except (KeyError, IndexError):
        pass

    return rotateCode

# Get the specified frame from the video
def get_frame_img(filename, frame_num):
    reader = cv2.VideoCapture(filename)
    rotateCode = check_rotation(filename)
    reader.set(cv2.CAP_PROP_POS_FRAMES, frame_num)  # seek to the target frame
    ret, frame_img = reader.read()
    reader.release()
    
    if not ret:
        return None
    if rotateCode:
        frame_img = cv2.rotate(frame_img, rotateCode)

    return frame_img

# Concatenate two frames horizontally
def get_frame_img_hconcat(l_filename, r_filename, l_framenum, r_framenum):
    l_img = get_frame_img(l_filename, l_framenum)
    r_img = get_frame_img(r_filename, r_framenum)
    
    img = cv2.hconcat([l_img, r_img])
    return img
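
# Note (my addition, not used below): cv2.hconcat requires both frames to have
# the same height and type. If the two videos differ in resolution, a helper
# like this could resize the right frame to match before concatenating:
def hconcat_resize(l_img, r_img):
    h = l_img.shape[0]
    scale = h / r_img.shape[0]
    r_resized = cv2.resize(r_img, (int(r_img.shape[1] * scale), h))
    return cv2.hconcat([l_img, r_resized])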

# Take the middle frame of the detected frames for each phase
left_p1_center = left_p1[int(len(left_p1)/2)]
left_p2_center = left_p2[int(len(left_p2)/2)]
right_p1_center = right_p1[int(len(right_p1)/2)]
right_p2_center = right_p2[int(len(right_p2)/2)]

# Combine the Phase 1 images horizontally
p1_img = get_frame_img_hconcat(left_filename, right_filename, left_p1_center, right_p1_center)

# Combine the Phase 2 images horizontally
p2_img = get_frame_img_hconcat(left_filename, right_filename, left_p2_center, right_p2_center)

# Convert from NumPy arrays (BGR) to PIL Images (RGB)
im1 = Image.fromarray(cv2.cvtColor(p1_img, cv2.COLOR_BGR2RGB))
im2 = Image.fromarray(cv2.cvtColor(p2_img, cv2.COLOR_BGR2RGB))

# Generate the GIF animation
im1.save('fusion.gif', save_all=True, append_images=[im2], optimize=False, duration=700, loop=0)

fusion.gif

(I'm sure there are things you might want to point out, such as my hands being hidden by objects and my fingertips not quite lining up, but please be kind...)

Whole source code

I uploaded it to gist in Jupyter Notebook format.

In conclusion

I tried to detect the Fusion pose using the posture information estimated by AnyMotion. With this kind of approach, it also seems possible to do something like a personal trainer, for example checking your form during home strength training. I would like to try out various other things it can do.

