Training PaintsChainer on a MacBook Pro, from setup to operation

On a MacBook Pro (a non-CUDA environment), I trained PaintsChainer on new data and got it running myself.

Execution environment

MacBook Pro (Early 2015)

The environment is isolated with virtualenv on pyenv. Below is the list of installed pip packages.

Preparation

Directory structure

Each file was arranged with the following structure.

~/PaintsChainer/
├── 2010_06.png
├── 2010_06_correcte_by_changing_save_as_img.jpg
├── 2010_06_new_BGR2YUV.JPG
├── 2010_06_new_changed_color_RGB-2-BGR-by-IrfanView.JPG
├── IMG_6731.jpg
├── README.md
├── cgi-bin
│   └── paint_x2_unet
│       ├── __init__.py
│       ├── __pycache__
│       │   ├── cgi_exe.cpython-36.pyc
│       │   ├── img2imgDataset.cpython-36.pyc
│       │   ├── lnet.cpython-36.pyc
│       │   └── unet.cpython-36.pyc
│       ├── cgi_exe.py
│       ├── dat
│       │   └── images_color_train.dat
│       ├── images
│       │   ├── color
│       │   │   ├── 1.jpg
│       │   │   ├── 2.jpg
│       │   │   ...
│       │   │   └── N.jpg
│       │   ├── colorx2
│       │   │   ├── 1.jpg
│       │   │   ├── 2.jpg
│       │   │   ...
│       │   │   └── N.jpg
│       │   ├── line
│       │   │   ├── 1.jpg
│       │   │   ├── 2.jpg
│       │   │   ...
│       │   │   └── N.jpg
│       │   ├── linex2
│       │   │   ├── 1.jpg
│       │   │   ├── 2.jpg
│       │   │   ...
│       │   │   └── N.jpg
│       │   └── original
│       │       ├── 1.jpg
│       │       ├── 2.jpg
│       │       ...
│       │       └── N.jpg
│       ├── img2imgDataset.py
│       ├── lnet.py
│       ├── models
│       │   ├── model_cnn_128
│       │   ├── old
│       │   │   ├── unet_128_standard
│       │   │   └── unet_512_standard
│       │   ├── unet_128_standard
│       │   └── unet_512_standard
│       ├── result
│       │   └── log
│       ├── result1
│       │   ├── cg.dot
│       │   ├── model_final
│       │   └── optimizer_final
│       ├── result2
│       │   ├── cg.dot
│       │   ├── model_final
│       │   └── optimizer_final
│       ├── tools
│       │   ├── image2line.py
│       │   ├── resize.py
│       │   └── run.sh
│       ├── train_128.py
│       ├── train_128_mod.py
│       ├── train_x2.py
│       ├── train_x2_mod.py
│       └── unet.py
├── clouds.jpg
├── cv2_demo_cvt.py
├── edge-detection.ipynb
├── log
├── server.py
└── static
    ├── bootstrap
    │   ├── css
    │   │   ├── bootstrap-responsive.css
    │   │   ├── bootstrap-responsive.min.css
    │   │   ├── bootstrap.css
    │   │   └── bootstrap.min.css
    │   ├── img
    │   │   ├── glyphicons-halflings-white.png
    │   │   └── glyphicons-halflings.png
    │   └── js
    │       ├── bootstrap.js
    │       └── bootstrap.min.js
    ├── images
    │   ├── line
    │   │   ├── ERS39TVYS8OKJOGLA3X29GWR3K4QOD9B.png
    │   │   ├── F5RIILKVA2O5YTUD4AO4EGCM9CQ9E2NP.png
    │   │   ├── HLMP9DNIJ3JYKTQ8PEEN7TMCCDNJJ3GW.png
    │   │   ├── L0QH3YOCJ342JQD6OXE9ED534FOC9YMS.png
    │   │   ├── L5VJOBTLIA21YWYF2S7LOPMJ67A1UMOA.png
    │   │   ├── OC2B63AMFXW14NGG9VOA0TK2NRDP1GF3.png
    │   │   └── P07S79RSSSRSES72DP9ITAQSPJNA8M1V.png
    │   ├── out
    │   │   ├── ERS39TVYS8OKJOGLA3X29GWR3K4QOD9B_0.jpg
    │   │   ├── F5RIILKVA2O5YTUD4AO4EGCM9CQ9E2NP_0.jpg
    │   │   ├── HLMP9DNIJ3JYKTQ8PEEN7TMCCDNJJ3GW_0.jpg
    │   │   ├── L0QH3YOCJ342JQD6OXE9ED534FOC9YMS_0.jpg
    │   │   ├── L5VJOBTLIA21YWYF2S7LOPMJ67A1UMOA_0.jpg
    │   │   ├── OC2B63AMFXW14NGG9VOA0TK2NRDP1GF3_0.jpg
    │   │   └── P07S79RSSSRSES72DP9ITAQSPJNA8M1V_0.jpg
    │   ├── out_min
    │   │   ├── ERS39TVYS8OKJOGLA3X29GWR3K4QOD9B_0.png
    │   │   ├── F5RIILKVA2O5YTUD4AO4EGCM9CQ9E2NP_0.png
    │   │   ├── HLMP9DNIJ3JYKTQ8PEEN7TMCCDNJJ3GW_0.png
    │   │   ├── L0QH3YOCJ342JQD6OXE9ED534FOC9YMS_0.png
    │   │   ├── L5VJOBTLIA21YWYF2S7LOPMJ67A1UMOA_0.png
    │   │   ├── OC2B63AMFXW14NGG9VOA0TK2NRDP1GF3_0.png
    │   │   └── P07S79RSSSRSES72DP9ITAQSPJNA8M1V_0.png
    │   └── ref
    │       ├── ERS39TVYS8OKJOGLA3X29GWR3K4QOD9B.png
    │       ├── F5RIILKVA2O5YTUD4AO4EGCM9CQ9E2NP.png
    │       ├── HLMP9DNIJ3JYKTQ8PEEN7TMCCDNJJ3GW.png
    │       ├── L0QH3YOCJ342JQD6OXE9ED534FOC9YMS.png
    │       ├── L5VJOBTLIA21YWYF2S7LOPMJ67A1UMOA.png
    │       ├── OC2B63AMFXW14NGG9VOA0TK2NRDP1GF3.png
    │       └── P07S79RSSSRSES72DP9ITAQSPJNA8M1V.png
    ├── index.html
    ├── interactive_ui.html
    ├── paints_chainer.js
    └── wPaint
        ├── lib
        │   ├── jquery.1.10.2.min.js
        │   ├── jquery.ui.core.1.10.3.min.js
        │   ├── jquery.ui.draggable.1.10.3.min.js
        │   ├── jquery.ui.mouse.1.10.3.min.js
        │   ├── jquery.ui.widget.1.10.3.min.js
        │   ├── mixins.styl
        │   ├── wColorPicker.min.css
        │   └── wColorPicker.min.js
        ├── plugins
        │   ├── file
        │   │   ├── img
        │   │   │   └── icons-menu-main-file.png
        │   │   ├── src
        │   │   │   └── wPaint.menu.main.file.js
        │   │   └── wPaint.menu.main.file.min.js
        │   ├── main
        │   │   ├── img
        │   │   │   ├── cursor-bucket.png
        │   │   │   ├── cursor-crosshair.png
        │   │   │   ├── cursor-dropper.png
        │   │   │   ├── cursor-eraser1.png
        │   │   │   ├── cursor-eraser10.png
        │   │   │   ├── cursor-eraser2.png
        │   │   │   ├── cursor-eraser3.png
        │   │   │   ├── cursor-eraser4.png
        │   │   │   ├── cursor-eraser5.png
        │   │   │   ├── cursor-eraser6.png
        │   │   │   ├── cursor-eraser7.png
        │   │   │   ├── cursor-eraser8.png
        │   │   │   ├── cursor-eraser9.png
        │   │   │   ├── cursor-pencil.png
        │   │   │   ├── icon-group-arrow.png
        │   │   │   └── icons-menu-main.png
        │   │   ├── src
        │   │   │   ├── fillArea.min.js
        │   │   │   └── wPaint.menu.main.js
        │   │   └── wPaint.menu.main.min.js
        │   ├── shapes
        │   │   ├── img
        │   │   │   └── icons-menu-main-shapes.png
        │   │   ├── src
        │   │   │   ├── shapes.min.js
        │   │   │   └── wPaint.menu.main.shapes.js
        │   │   └── wPaint.menu.main.shapes.min.js
        │   └── text
        │       ├── img
        │       │   └── icons-menu-text.png
        │       ├── src
        │       │   └── wPaint.menu.text.js
        │       └── wPaint.menu.text.min.js
        ├── wPaint.min.css
        └── wPaint.min.js

Tools

I created resize.py, image2line.py, and run.sh in ~/PaintsChainer/cgi-bin/paint_x2_unet/tools, referring to http://qiita.com/ikeyasu/items/6c1ebed07b281281b1f6.

Instead of creating separate resize.py scripts for 128 px and 512 px, the output size can be specified as an argument.

resize.py


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='clipping and resize')
    parser.add_argument('--input', '-i', default='input.jpg', help='input file')
    parser.add_argument('--output', '-o', default='output.jpg', help='output file')
+   parser.add_argument('--size', '-s', default='128', help='size')
    args = parser.parse_args()
-   main(args.input, args.output)
+   main(args.input, args.output, int(args.size))
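
The reference article's full script is not reproduced here, so below is a minimal sketch of what main() in resize.py might look like, assuming it center-crops to a square before resizing with OpenCV (the helper name square_crop_box is my own):

```python
def square_crop_box(h, w):
    """Return (top, left, side) of the largest centered square in an h x w image."""
    side = min(h, w)
    return (h - side) // 2, (w - side) // 2, side

def main(input_path, output_path, size):
    import cv2  # imported here so the geometry helper above stays dependency-free
    img = cv2.imread(input_path, cv2.IMREAD_COLOR)
    if img is None:
        raise IOError('failed to read ' + input_path)
    top, left, side = square_crop_box(*img.shape[:2])
    img = img[top:top + side, left:left + side]
    # INTER_AREA gives the cleanest results when shrinking
    cv2.imwrite(output_path, cv2.resize(img, (size, size),
                                        interpolation=cv2.INTER_AREA))
```

The argparse block shown in the diff above then drives main() with the parsed --size value.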

The run.sh in the reference article omitted part of the paths, so I corrected it as follows. It also passes the size as an argument, to match the modified resize.py.

run.sh


ls -v1 ../images/original/ | parallel -j 8  'echo {}; python resize.py -i ../images/original/{} -o ../images/color/{} -s 128'
ls -v1 ../images/original/ | parallel -j 8  'echo {}; python image2line.py -i ../images/color/{} -o ../images/line/{}'
ls -v1 ../images/original/ | parallel -j 8  'echo {}; python resize.py -i ../images/original/{} -o ../images/colorx2/{} -s 512'
ls -v1 ../images/original/ | parallel -j 8  'echo {}; python image2line.py -i ../images/colorx2/{} -o ../images/linex2/{}'
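
image2line.py converts a color image to pseudo line art. The reference implementation presumably uses OpenCV (cv2.dilate and cv2.absdiff); here is a dependency-free sketch of the dilate-and-diff idea behind it, on plain 2D lists of grayscale values (function names are my own):

```python
def dilate(gray, r=1):
    """Morphological dilation: a (2r+1)x(2r+1) max-filter on a 2D list of 0-255 values."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            out[y][x] = max(gray[ny][nx]
                            for ny in range(max(0, y - r), min(h, y + r + 1))
                            for nx in range(max(0, x - r), min(w, x + r + 1)))
    return out

def to_line(gray):
    """Dilation changes nothing in flat areas, so the difference highlights
    edges; inverting it yields dark lines on a white background."""
    dil = dilate(gray)
    return [[255 - (d - g) for d, g in zip(drow, grow)]
            for drow, grow in zip(dil, gray)]
```

In the real script the same two steps would be a single cv2.dilate followed by cv2.absdiff against the original, inverted before saving.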

Training scripts

Referring to http://qiita.com/ikeyasu/items/6c1ebed07b281281b1f6 and http://blog.livedoor.jp/abars/archives/52397019.html, I fixed train_128.py and train_x2.py so they can run without a GPU.

train_128.py


-     serializers.load_npz("models/liner_f", l)
+     #serializers.load_npz("models/liner_f", l)

-     chainer.serializers.save_npz(os.path.join(out_dir, 'model_final'), cnn)
-     chainer.serializers.save_npz(os.path.join(out_dir, 'optimizer_final'), opt)
+     # chainer.serializers.save_npz(os.path.join(out_dir, 'model_final'), cnn)
+     # chainer.serializers.save_npz(os.path.join(out_dir, 'optimizer_final'), opt)
+     chainer.serializers.save_npz(os.path.join(args.out, 'model_final'), cnn)
+     chainer.serializers.save_npz(os.path.join(args.out, 'optimizer_final'), opt)

train_x2.py


-     serializers.load_npz("models/model_cnn_128_dfl2_9", cnn_128)
+     # serializers.load_npz("models/model_cnn_128_dfl2_9", cnn_128)
+     serializers.load_npz("models/model_cnn_128", cnn_128)

-     chainer.serializers.save_npz(os.path.join(out_dir, 'model_final'), cnn)
-     chainer.serializers.save_npz(os.path.join(out_dir, 'optimizer_final'), opt)
+     # chainer.serializers.save_npz(os.path.join(out_dir, 'model_final'), cnn)
+     # chainer.serializers.save_npz(os.path.join(out_dir, 'optimizer_final'), opt)
+     chainer.serializers.save_npz(os.path.join(args.out, 'model_final'), cnn)
+     chainer.serializers.save_npz(os.path.join(args.out, 'optimizer_final'), opt)

-         x_out = x_out.data.get()
+         # x_out = x_out.data.get()
+         x_out = x_out.data
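
The reason for the last change: with -g -1, Chainer runs on NumPy, so x_out.data is already a host array and the CuPy-only .get() call fails. A generic sketch of the pattern (my own illustration, not the actual PaintsChainer code):

```python
def to_host(array, gpu_id):
    """Return a host array whether training ran on GPU or CPU.

    With gpu_id >= 0 the array is a cupy ndarray on the device and needs
    .get() to copy it to host memory; with gpu_id == -1 Chainer already
    used numpy, so the array is returned as-is.
    """
    return array.get() if gpu_id >= 0 else array
```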

Training

The training images were placed in ~/PaintsChainer/cgi-bin/paint_x2_unet/images/original, then the cropped/resized copies and the line art were generated.

$ cd ~/PaintsChainer/cgi-bin/paint_x2_unet/tools
$ sudo ./run.sh
$ ls ../images
color		colorx2		line		linex2		original

../images/color:
1.jpg 2.jpg ... N.jpg

../images/colorx2:
1.jpg 2.jpg ... N.jpg

../images/line:
1.jpg 2.jpg ... N.jpg

../images/linex2:
1.jpg 2.jpg ... N.jpg

../images/original:
1.jpg 2.jpg ... N.jpg

$ cd ../images/original/
$ ls -v1 > ../../dat/images_color_train.dat

Once the training data was ready, I ran train_128.py and train_x2.py in turn. Since CUDA is not available, the GPU option is set to -1 (-g -1).

$ python train_128_mod.py -g -1 --dataset ./images/ -e 20 -o result1
$ cp result1/model_final models/model_cnn_128
$ python train_x2_mod.py -g -1 --dataset ./images/ -e 20 -o result2
$ cp models/unet_128_standard models/unet_128_standard.old
$ cp models/unet_512_standard models/unet_512_standard.old
$ cp result1/model_final models/unet_128_standard
$ cp result2/model_final models/unet_512_standard

I set up 100 images, started training with 20 epochs (-e 20), and went home. When I checked the next morning... training still hadn't finished!

After scaling down to 3 images and 3 epochs (-e 3) and retrying, the model file (model_final) was produced in about 3 minutes at 128x128 and about 10 minutes at 512x512.
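
A rough linear extrapolation from the small run suggests why the overnight job never finished (my own estimate, not a measurement):

```python
def scale_minutes(base_min, base_imgs, base_epochs, imgs, epochs):
    """Scale a measured runtime linearly in image count and epoch count."""
    return base_min * (imgs / base_imgs) * (epochs / base_epochs)

# 128x128: 3 images x 3 epochs took ~3 min -> 100 images x 20 epochs:
hours_128 = scale_minutes(3, 3, 3, 100, 20) / 60    # roughly 11 hours
# 512x512: ~10 min for the same small run:
hours_512 = scale_minutes(10, 3, 3, 100, 20) / 60   # roughly 37 hours
```

Nearly two days of CPU time for both stages combined, assuming runtime really does scale linearly.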

Launch CGI

Here too the GPU option is set to -1. I accessed http://localhost:8000/static/ and confirmed that the app works.

$ cd ~/PaintsChainer/
$ python server.py -g -1

Conclusion

As expected, training in a non-CUDA environment took a huge amount of time and is not realistic. If I show this result, I might get budget approved for a high-spec CUDA machine! ... Maybe.
