I tried to detect the features of anime faces using PCA and NMF, and it turned into horror.

Speaking of machine learning, image processing comes to mind. Classical analysis methods such as PCA (principal component analysis) and NMF (non-negative matrix factorization) can extract major features from a large number of face images, so I immediately tried them on anime faces. The target is the 21,551 face images shown below, borrowed from https://www.kaggle.com/soumikrakshit/anime-faces. Thank you very much.

[Image: sample of the anime face dataset]

It would be interesting if the components that determine the contour and the components that determine the hair could be visually decomposed by analysis.

PCA

import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

sample = 20000

X = []

# The images are named 1.png, 2.png, ..., so they can be loaded by index.
for i in range(sample):
    file_name = str(i+1) + '.png'
    img = cv2.imread(file_name)
    # Flatten 64 x 64 x 3 into a 12288-long vector and scale to [0, 1].
    img = img.reshape(12288)/255
    X.append(img)

pca = PCA(n_components=9, whiten=True)
pca.fit(X)
X_pca = pca.transform(X)

# Draw each principal component as a 64 x 64 RGB image,
# shifted and scaled so that its values are visible.
fig, axes = plt.subplots(3, 3, figsize=(10, 10))
for i, (component, ax) in enumerate(zip(pca.components_, axes.ravel())):
    ax.imshow(0.5 - component.reshape((64, 64, 3))*10)
    ax.set_title('PC' + str(i+1))

print(pca.explained_variance_ratio_)

plt.show()

Here are the analysis results.

[Image: the nine principal components, PC1 through PC9]

It's horrifying! They look like vengeful faces seeping out of a wall!

Each component should be a bundle of variables (pixel coordinates, in this case) that explains the main variation in the images, in order starting from PC1. Reading them off:

- PC1: overall brightness
- PC2: overall volume of hair?
- PC3: facing left?

If you force it, you can read them that way, but none of them is a clear feature. Looking at the explained variance ratio per component, the variance doesn't seem very concentrated either. Sorry.
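To sanity-check these readings, here is a minimal sketch that reconstructs a face from only the first few components, reusing `pca`, `X`, and `X_pca` from the code above. If PC1 really carries overall brightness, a reconstruction with `k = 1` should look like little more than a brightness blur. (Colors appear channel-swapped because cv2 loads channels in BGR order.)

k = 3  # keep only PC1..PC3, zero out the remaining components
X_pca_k = np.zeros_like(X_pca)
X_pca_k[:, :k] = X_pca[:, :k]
X_rec = pca.inverse_transform(X_pca_k)

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
axes[0].imshow(X[0].reshape(64, 64, 3))
axes[0].set_title('original')
# Clip because a truncated reconstruction can leave the [0, 1] range.
axes[1].imshow(np.clip(X_rec[0].reshape(64, 64, 3), 0, 1))
axes[1].set_title('first ' + str(k) + ' PCs')
plt.show()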

print(pca.explained_variance_ratio_)

Output


[0.21259875 0.06924239 0.03746094 0.03456278 0.02741101 0.01864574
 0.01643447 0.01489064 0.0133781 ]
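For reference, the cumulative sum makes the point directly; this is a one-liner reusing the fitted `pca` from above.

# Cumulative explained variance of the first k components.
print(np.cumsum(pca.explained_variance_ratio_))
# The nine components together cover only about 44% of the variance,
# so more than half of what varies between faces is left unexplained.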

NMF

import cv2
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import NMF

sample = 20000

X = []

# Same loading as for PCA: flatten each image into a vector in [0, 1].
for i in range(sample):
    file_name = str(i+1) + '.png'
    img = cv2.imread(file_name)
    img = img.reshape(12288)/255
    X.append(img)

# NMF requires non-negative input, which pixel values in [0, 1] satisfy.
nmf = NMF(n_components=9)
nmf.fit(X)
#X_nmf = nmf.transform(X)

fig, axes = plt.subplots(3, 3, figsize=(10, 10))
for i, (component, ax) in enumerate(zip(nmf.components_, axes.ravel())):
    ax.imshow(component.reshape((64, 64, 3)))
    ax.set_title('component' + str(i+1))

plt.show()

Here are the analysis results.

[Image: the nine NMF components]

It's horrifying! They look like negatives of ghosts caught on film! I don't feel like interpreting these any further.
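For the record, what NMF computes is a factorization of the data matrix into two non-negative factors, X ≈ W H, where the rows of H are the nine components pictured above. A minimal sketch, reusing `nmf` and `X` from the code above:

W = nmf.transform(X)    # per-image weights, shape (20000, 9)
H = nmf.components_     # components, shape (9, 12288)
X_approx = W @ H        # non-negative approximation of the data

print(W.shape, H.shape, X_approx.shape)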

Kill Me Baby

Following a predecessor's example, I also tried the analysis on Kill Me Baby.

PCA

[Image: PCA components for Kill Me Baby]

NMF

[Image: NMF components for Kill Me Baby]

Amen.

Conclusion

In the end, it seems difficult to extract the features of an image with linear (and unsupervised) methods alone. Pushing the analysis further apparently calls for more advanced methods such as GANs, so I will keep studying.

Code description

Since the post has somehow ended up short, let me explain the code, starting with loading and unrolling the images. If you put the image files in the same directory as the Python script being run, you can access them just by specifying the file name. Because the files are named 1.png, 2.png, and so on, the file names can be generated with a simple iteration. More general code would collect the file names with `os.listdir`, but this way was easier this time.
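For reference, here is a minimal sketch of that more general approach; the directory name 'data' is a hypothetical placeholder for wherever the images live.

import os

import cv2
import numpy as np

img_dir = 'data'  # hypothetical directory holding the .png files

X = []
for file_name in sorted(os.listdir(img_dir)):
    if not file_name.endswith('.png'):
        continue  # skip anything that is not an image
    img = cv2.imread(os.path.join(img_dir, file_name))
    X.append(img.reshape(-1) / 255)
X = np.array(X)
# sorted() gives lexicographic order (1.png, 10.png, 100.png, ...),
# which is fine here because order does not matter for PCA or NMF.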

The loaded files are turned into arrays by the `imread` function in the `cv2` module. However, since each of these arrays is a three-dimensional height x width x channel array (note that cv2 stores channels in BGR order), `reshape` is used to flatten it into one long vector for the later analysis. The images here are 64 pixels high and 64 pixels wide with 3 channels, so the length is `64 * 64 * 3 = 12288`. Also, raw pixel values range from 0 to 255, so dividing by 255 puts them between 0 and 1.

    file_name = str(i+1) + '.png'
    img = cv2.imread(file_name)   # height x width x channel array
    img = img.reshape(12288)/255  # flatten and scale to [0, 1]

The lines above implement exactly these steps.

The components obtained by the analysis are stored in `components_` for both PCA and NMF; see the Attributes sections of https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html and https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html. Each component is displayed as a graph with the `imshow` function in the pyplot module. In the opposite direction from loading, the analysis result (one long vector) has to be turned back into height x width x channel, so `reshape` is used once more.
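A quick shape check (reusing the fitted models from above) confirms the layout: one flattened component per row.

print(pca.components_.shape)  # (9, 12288): 9 components, 12288 pixels each
print(nmf.components_.shape)  # (9, 12288)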

fig, axes = plt.subplots(3,3,figsize=(10,10))
for i,(component, ax) in enumerate(zip(pca.components_,axes.ravel())):
    ax.imshow(0.5-component.reshape((64,64,3))*10)
    ax.set_title('PC'+str(i+1))

The snippet above implements this. `axes` holds the position of each subplot within the grid of graphs, and `axes.ravel()` lets you get them as a single flat array; this, too, is a kind of reshape.
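A tiny standalone illustration of that flattening:

import matplotlib.pyplot as plt

fig, axes = plt.subplots(3, 3)
print(axes.shape)          # (3, 3): a 2-D grid of Axes objects
print(axes.ravel().shape)  # (9,): the same Axes as one flat array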
