This article is for day 24 of the Yuyushiki Advent Calendar 2019.
In last year's Advent Calendar entry, I tried automatic colorization of black-and-white manga with CycleGAN ("Try to color manga with CycleGAN (Yuyushiki as an example)"). CycleGAN is good at style conversion, such as changing colors and textures, but poor at shape conversion.
So this year, using U-GAT-IT, a GAN that handles both style conversion and shape conversion, let's generate Yuyushiki-style illustrations from face photos.
The goal is an image like this ↓ (quoted from mantan-web)
U-GAT-IT can also perform the shape conversion that CycleGAN struggled with.
The following are results the authors published in the paper. The leftmost column shows the original photos; you can see that changes requiring shape conversion, such as turning cats into dogs, are handled well.
All figures are taken from U-GAT-IT: Unsupervised Generative Attentional Networks with Adaptive Layer-Instance Normalization for Image-to-Image Translation
Training was done on Google Colaboratory. To train, copy the files from this repository into notebook cells in the order util -> ops -> UGATIT -> main.
However, when running in a notebook, the argparse parser raises an error, so change it as follows.
# Comment out the argparse code
# parser = argparse.ArgumentParser(description=desc)
# (omitted)
# parser.add_argument(...)

# Add the following instead
!pip install easydict
import easydict

args = easydict.EasyDict({
    'phase': 'train',
    'light': False,
    'dataset': 'yuyu',
    # (remaining options omitted)
})
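easydict just wraps a dict so its keys can be read as attributes, which is the same interface the code expects from argparse's parsed arguments. If installing easydict is not an option, the standard library's `types.SimpleNamespace` gives the same attribute-style access; a minimal sketch (the option names mirror the ones above, values are examples):

```python
from types import SimpleNamespace

# Standard-library stand-in for easydict.EasyDict.
# Option names mirror the U-GAT-IT flags shown above.
args = SimpleNamespace(
    phase='train',
    light=False,
    dataset='yuyu',
)

# The rest of main.py can read these exactly like parsed argparse results.
print(args.phase)    # 'train'
print(args.dataset)  # 'yuyu'
```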
Place the dataset directly under the current directory as shown below.
└─ dataset
   └─ yuyu
      ├─ trainA  # training photos of women's faces (reused from selfie2anime)
      ├─ trainB  # training images of Yuyushiki characters
      ├─ testA   # test photos
      └─ testB   # test images
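The layout above can also be created programmatically; a minimal sketch using only the standard library (the `yuyu` name matches the 'dataset' option set earlier):

```python
from pathlib import Path

def make_dataset_dirs(root='dataset', name='yuyu'):
    """Create the dataset layout expected by U-GAT-IT:
    dataset/<name>/{trainA, trainB, testA, testB}."""
    base = Path(root) / name
    for split in ('trainA', 'trainB', 'testA', 'testB'):
        (base / split).mkdir(parents=True, exist_ok=True)
    return base

base = make_dataset_dirs()
print(sorted(p.name for p in base.iterdir()))
# ['testA', 'testB', 'trainA', 'trainB']
```

Then drop your images into the four subdirectories before starting training.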
Once everything is ready, start training!
This model is very large, and even with a dataset of only about 100 images, out-of-memory errors occurred frequently. In that case, setting 'light' in args to True trains the light version of the model (at some cost in accuracy). This time I trained for 1500 epochs of 100 iterations each, which overfits a little.
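To get a feel for why the light version saves memory: as I understand the U-GAT-IT code, the full generator feeds the flattened bottleneck feature map into a fully connected layer, while the light version applies global average pooling first, so the FC input shrinks from h*w*ch values to just ch. A back-of-the-envelope parameter count (the 64x64x256 bottleneck shape and 256 output units are assumptions for illustration, not values from the repo):

```python
# Rough FC weight count for the bottleneck MLP (illustrative numbers).
h, w, ch = 64, 64, 256   # assumed bottleneck feature-map shape
units = 256              # assumed FC output size

full_params = h * w * ch * units   # flattened feature map -> FC
light_params = ch * units          # global-average-pooled features -> FC

print(f"full:  {full_params:,}")   # full:  268,435,456
print(f"light: {light_params:,}")  # light: 65,536
```

A difference of three to four orders of magnitude in a single layer, which matches the experience that only the light version fit in Colab memory here.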
**○ Results**
Original photos / conversion results.
The results do have a Yuyushiki-style look, but they are underwhelming...
This time, I didn't get the results I expected. Unfortunately, I couldn't bring out the model's full performance.
I think the main factor is that the training dataset is too small. I'm feeling the limits of my resources.
(I want someone to make a Yuyushiki dataset)
Tomorrow is the last day.
Have a nice Christmas, everyone!