[Introduction to StyleGAN] Making Mayuyu and an anime face smile ♬

As the title says: this time I checked whether Mayuyu and an anime face, both learned from my own images, would smile as shown below. 【Reference】 Try mixing Yukichi Fukuzawa with StyleGan

To give the conclusion first, I succeeded in obtaining the smile image below ♬ mayuyu_smile11.gif It is healing to watch ♬

What I did

・Learning my own images
・Generating smile images and creating a video
・Trying anime face images

・Learning my own images

For Mayuyu, I used the same npy data as last time. On the other hand, when I tried to align the anime face image in the same way, alignment failed because the face is drawn too differently from a human face. Even so, I ran the learning on the anime face image as well. As a result, the learning seems to have worked, and I obtained an npy that can output the following images.

1: simple3_.png
2: simple3_.png

Naturally, I also tried mixing with these images. However... the result is as follows.

Mixing in the projected latent space:
simple3_.png simple3_.png simple_method10_1_dr1024.gif

It seems that the mapped latent space is dominated ("polluted") by human faces. The tendency is similar when mixing with a human face, so I will not paste those images this time.
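For reference, mixing two projected latents in the W+ space amounts to swapping layer ranges between them. Here is a minimal NumPy sketch; the random arrays are hypothetical stand-ins for the real 18x512 npy files produced by the encoder:

```python
import numpy as np

# Hypothetical stand-ins for two projected latents (18 style layers x 512 dims)
rng = np.random.default_rng(0)
latent_a = rng.normal(size=(18, 512))  # e.g. the Mayuyu latent
latent_b = rng.normal(size=(18, 512))  # e.g. the anime-face latent

# Style mixing: coarse/middle layers (0-7) from B, fine layers (8-17) from A
mixed = latent_a.copy()
mixed[:8] = latent_b[:8]
```

If the mapped space is dominated by human faces, the decoded result of `mixed` drifts toward a human face regardless of which latent supplies the coarse layers.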

・Generating smile images and creating a video

The principle, in simple terms, is as follows.

latent_vector = np.load('./latent/mayuyu.npy')
direction = np.load('ffhq_dataset/latent_directions/smile.npy')
coeffs = [-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1]
for coeff in coeffs:
    new_latent_vector = latent_vector.copy()
    new_latent_vector[:8] = (latent_vector + coeff*direction)[:8]
    smile_image = generate_image(new_latent_vector)
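To see concretely what the [:8] slice does, here is a small self-contained NumPy sketch; the random arrays are hypothetical stand-ins for mayuyu.npy and smile.npy:

```python
import numpy as np

# Hypothetical stand-ins for mayuyu.npy and smile.npy (18 style layers x 512 dims)
rng = np.random.default_rng(1)
latent_vector = rng.normal(size=(18, 512))
direction = rng.normal(size=(18, 512))

coeff = 0.8  # one value from the [-1, ..., 1] sweep
new_latent_vector = latent_vector.copy()
new_latent_vector[:8] = (latent_vector + coeff * direction)[:8]
```

Only the first 8 of the 18 style layers move along the smile direction; layers 8-17, which control fine details such as texture and color, stay untouched.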

A smile image is generated from this new_latent_vector. The full code is below. StyleGAN/mayuyu_smile.py

import os
import pickle
import numpy as np
import PIL.Image
import dnnlib
import dnnlib.tflib as tflib
import config
import sys
from PIL import Image, ImageDraw
from encoder.generator_model import Generator
import matplotlib.pyplot as plt

tflib.init_tf()
fpath = './weight_files/tensorflow/karras2019stylegan-ffhq-1024x1024.pkl'
with open(fpath, mode='rb') as f:
    generator_network, discriminator_network, Gs_network  = pickle.load(f)

generator = Generator(Gs_network, batch_size=1, randomize_noise=False)

def generate_image(latent_vector):
    latent_vector = latent_vector.reshape((1, 18, 512))
    generator.set_dlatents(latent_vector)
    img_array = generator.generate_images()[0]
    img = PIL.Image.fromarray(img_array, 'RGB')
    return img.resize((256, 256))

def move_and_show(latent_vector, direction, coeffs):
    for i, coeff in enumerate(coeffs):
        new_latent_vector = latent_vector.copy()
        new_latent_vector[:8] = (latent_vector + coeff*direction)[:8]
        plt.imshow(generate_image(new_latent_vector))
        plt.pause(1)
        plt.savefig("./results/example{}.png".format(i))
        plt.close()

mayuyu = np.load('./latent/mayuyu.npy')
smile_direction = np.load('ffhq_dataset/latent_directions/smile.npy')
move_and_show(mayuyu, smile_direction, [-1,-0.8,-0.6,-0.4,-0.2, 0,0.2,0.4,0.6,0.8, 1])

s = 22  # slice bound; images holds 21 frames, so images[1:s] is everything after the first
images = []
# forward pass: frames 0..10 (coefficient -1 -> 1)
for i in range(0, 11, 1):
    im = Image.open(config.result_dir+'/example'+str(i)+'.png')
    im = im.resize(size=(640, 480), resample=Image.NEAREST)
    images.append(im)
# backward pass: frames 10..1, so the loop restarts cleanly at frame 0
for i in range(10, 0, -1):
    im = Image.open(config.result_dir+'/example'+str(i)+'.png')
    im = im.resize(size=(640, 480), resample=Image.NEAREST)
    images.append(im)
images[0].save(config.result_dir+'/mayuyu_smile{}.gif'.format(11), save_all=True, append_images=images[1:s], duration=100*2, loop=0)
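The two loops above build a "ping-pong" frame order, so the GIF smiles and then returns to neutral before looping. A minimal sketch of the resulting index sequence, with integers standing in for the 11 saved example images (index i corresponds to coefficient -1 + 0.2*i):

```python
# forward: 0,1,...,10  then backward: 10,9,...,1 (0 is the loop's restart frame)
frames = list(range(0, 11)) + list(range(10, 0, -1))
```

This gives 21 frames in total, which is why `images[1:s]` with s=22 appends every frame after the first.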

With the same method, Mr. Trump also smiles. trump_smile11.gif

・Trying anime face images

As mentioned above, I obtained an npy that can generate anime face images, so I used it to attempt a smile. The results are as follows. It is smiling... I wonder. anime_smile11.gif face_smile11.gif However, the problem is the mouth: there is only something that vaguely looks like one.

Summary

・I tried learning anime images
・I tried making "Mayuyu and an anime face" smile

・Although it mostly worked, I will try again with an anime image whose mouth is drawn more clearly.
・I want to do proper training on anime images.
・Above all, I want to create npy direction files for other manipulations, in the same way as smile.npy.
