PyTorch C++ (LibTorch) environment construction

**PyTorch, the deep learning framework, has a C++ version as well as a Python version!**

This time, I will explain the procedure for building an environment for "PyTorch C++", also known as "LibTorch", on Ubuntu. For those who like deep learning, those whose main language is C++, and those who work with embedded systems, it may be worth trying at least once! Even people who have never used C++ should be able to follow along to some extent.

First, I will explain how to create and run a simple piece of source code, and then how to run the actual source code that I wrote. **The source code I wrote is available on the following GitHub repository:** https://github.com/koba-jon/pytorch_cpp

Then, I will explain the specific procedure.

Advance preparation

- Required
  - Installation of Ubuntu (18.04 or 20.04)

- When using a GPU
  - Installation of the NVIDIA driver ([18.04](https://qiita.com/koba-jon/items/a7c5239fb5c05172c1b3#1-nvidia%E3%83%89%E3%83%A9%E3%82%A4%E3%83%90%E3%83%BC%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB))
  - Installation of CUDA ([18.04](https://qiita.com/koba-jon/items/a7c5239fb5c05172c1b3#2-cuda%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB))
  - Installation of cuDNN ([18.04](https://qiita.com/koba-jon/items/a7c5239fb5c05172c1b3#3-cudnn%E3%81%AE%E3%82%A4%E3%83%B3%E3%82%B9%E3%83%88%E3%83%BC%E3%83%AB))

Download LibTorch

PyTorch official site: https://pytorch.org/

Access the link above and make the following selections to download LibTorch.

libtorch1.png

  • PyTorch Build
    Select **"Stable"** if you want the stable version, or **"Preview"** if you want the latest version
  • Your OS
    Since I am using Ubuntu this time, select **"Linux"**
  • Package
    Since LibTorch will be used this time, select **"LibTorch"**
  • Language
    Since C++ is used this time, select **"C++/Java"**
  • CUDA
    Select the **version number** if you will use a GPU, or **"None"** if you will not
  • Run this Command
    Since a compiler supporting C++11 or later will be used, select **"Download here (cxx11 ABI)"**

This completes the download of LibTorch itself. It is recommended to place the extracted "libtorch" directory somewhere easy to find (for example, directly under your home directory).
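For reference, installation is just a matter of extracting the downloaded archive; there is no installer to run. The exact zip file name depends on the version and CUDA option you selected, so treat the name below as a placeholder:

$ cd ~/Downloads
$ unzip libtorch-cxx11-abi-shared-with-deps-<version>.zip -d ~/
$ ls ~/libtorch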

Operation check

First, create a **temporary source file**.

$ mkdir libtorch_test
$ cd libtorch_test
$ vi main.cpp

Add the following code and save it. This is a program that creates a 3x3 matrix (a second-order tensor) whose Float elements are all 1.5 and prints it to the console.

main.cpp


#include <iostream>
#include <torch/torch.h>

int main(void){
    // Create a 3x3 float tensor with every element set to 1.5
    torch::Tensor x = torch::full({3, 3}, 1.5, torch::TensorOptions().dtype(torch::kFloat));
    // Print the tensor to the console
    std::cout << x << std::endl;
    return 0;
}
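Before moving on, if you want to play with a few more tensor operations, here is a small optional sketch (nothing in it is needed for the rest of this article):

#include <iostream>
#include <torch/torch.h>

int main(void){
    torch::Tensor a = torch::zeros({2, 3});   // 2x3 tensor filled with 0
    torch::Tensor b = torch::rand({2, 3});    // 2x3 tensor of uniform random values in [0, 1)
    torch::Tensor c = a + b * 2.0;            // element-wise arithmetic
    std::cout << c << std::endl;
    std::cout << c.sizes() << std::endl;      // prints [2, 3]
    return 0;
}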

This time, we will use a "Makefile" to build the source code easily, and create a "CMakeLists.txt" so that CMake can generate that "Makefile".

$ vi CMakeLists.txt

The simplest CMakeLists.txt looks like the following. **Copy and save** it.

CMakeLists.txt


cmake_minimum_required(VERSION 3.0 FATAL_ERROR)

# Project Name
project(test CXX)

# Find Package
find_package(Torch REQUIRED)

# Create Executable File
add_executable(${PROJECT_NAME} main.cpp)
target_link_libraries(${PROJECT_NAME} ${TORCH_LIBRARIES})
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 17)

To explain briefly: the first line specifies the minimum CMake version, the next specifies the project name and the C++ language, the next searches for the Torch package, the next specifies the file to compile, the next links the Torch libraries, and the last one sets the C++ standard used by the compiler.
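As a side note, the same pattern extends naturally once a project has more than one source file or needs extra libraries. The sketch below is only an illustration (the extra source file names are placeholders, and OpenCV is optional here); it is not the CMakeLists.txt needed for this operation check:

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(test CXX)

find_package(Torch REQUIRED)
find_package(OpenCV REQUIRED)   # only if the project also uses OpenCV

# Placeholder source file names for a multi-file project
add_executable(${PROJECT_NAME} main.cpp network.cpp dataloader.cpp)
target_link_libraries(${PROJECT_NAME} ${TORCH_LIBRARIES} ${OpenCV_LIBS})
set_property(TARGET ${PROJECT_NAME} PROPERTY CXX_STANDARD 17)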

Next, create a build directory and run cmake. Here, **pass the path of the libtorch directory you downloaded earlier** as well. (The command below assumes "~/libtorch" is directly under your home directory.)

$ mkdir build
$ cd build
$ cmake .. -DCMAKE_PREFIX_PATH=~/libtorch

This will create a Makefile, so run make to build it.

$ make

Finally, run the generated executable.

$ ./test

If the following is displayed, it is successful.

 1.5000  1.5000  1.5000
 1.5000  1.5000  1.5000
 1.5000  1.5000  1.5000
[ CPUFloatType{3,3} ]
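If you downloaded a CUDA build, you can also quickly check whether LibTorch sees your GPU. This is an optional, minimal sketch (replace the body of main.cpp above with it and rebuild); torch::cuda::is_available() returns false on CPU-only builds or when no usable GPU is found.

#include <iostream>
#include <torch/torch.h>

int main(void){
    // true if this LibTorch build has CUDA support and a usable GPU is detected
    std::cout << std::boolalpha << torch::cuda::is_available() << std::endl;
    return 0;
}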

**The construction of the LibTorch environment itself is now complete**! Next, install the dependent libraries and set things up so that the sample code can be run.

Installation of dependent libraries

OpenCV

OpenCV is used for **image data input/output**.

As for the installation method, this article was very easy to follow. Since my code is implemented around the cv::Mat class, please install **version 3.0.0 or higher**.
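To give a rough idea of how OpenCV and LibTorch are typically combined, here is a generic sketch (not the repository's actual conversion code, and "sample.png" is just a placeholder path) that wraps an 8-bit BGR cv::Mat into a float tensor:

#include <iostream>
#include <opencv2/opencv.hpp>
#include <torch/torch.h>

int main(void){
    cv::Mat image = cv::imread("sample.png");  // placeholder file name
    if (image.empty()) return 1;

    // Wrap the HxWxC uint8 buffer (no copy), then convert to float CHW in [0, 1]
    torch::Tensor tensor = torch::from_blob(image.data, {image.rows, image.cols, 3}, torch::kUInt8);
    tensor = tensor.permute({2, 0, 1}).to(torch::kFloat) / 255.0;

    std::cout << tensor.sizes() << std::endl;  // prints [3, H, W]
    return 0;
}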

Boost

Boost is used for **parsing command-line arguments, getting file names, and so on**.

Execute the following command to install it.

$ sudo apt install libboost-dev libboost-all-dev
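To give a rough idea of what Boost contributes, here is a minimal, illustrative sketch of boost::program_options handling command-line flags. It is not the repository's actual option-parsing code, and the option names are made up for the example.

#include <iostream>
#include <boost/program_options.hpp>
// compile with: g++ example.cpp -lboost_program_options

namespace po = boost::program_options;

int main(int argc, char** argv){
    po::options_description desc("Options");
    desc.add_options()
        ("help", "show this help message")
        ("epochs", po::value<int>()->default_value(1), "number of training epochs");

    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);

    if (vm.count("help")){
        std::cout << desc << std::endl;
        return 0;
    }
    std::cout << "epochs: " << vm["epochs"].as<int>() << std::endl;
    return 0;
}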

Gnuplot

Gnuplot is used **to draw graphs of the loss**.

Execute the following command to install it.

$ sudo apt install gnuplot
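For reference, gnuplot can turn a plain-text loss log into an image from the command line. The file names below are placeholders rather than the ones the sample code actually writes:

$ gnuplot -e "set terminal png; set output 'loss.png'; plot 'loss.dat' using 1:2 with lines title 'train loss'"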

libpng

libpng is used **to read indexed-color images for semantic segmentation**.

Execute the following command to install it.

$ sudo apt install libpng++-dev
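For reference, libpng++-dev provides the header-only png++ wrapper around libpng. A minimal sketch of reading an indexed-color (palette) PNG, the format commonly used for segmentation label images, might look like this ("label.png" is a placeholder path, and this is not the repository's actual loading code):

#include <iostream>
#include <png++/png.hpp>
// link with: -lpng

int main(void){
    // Load an indexed-color PNG; each pixel stores a palette index (class ID)
    png::image<png::index_pixel> label("label.png");

    std::cout << label.get_width() << "x" << label.get_height() << std::endl;
    // Read the class index of the top-left pixel
    std::cout << "index(0,0) = " << static_cast<int>(label.get_pixel(0, 0)) << std::endl;
    return 0;
}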

Executing sample code

First, execute the following command to clone the sample code.

$ cd ~
$ git clone https://github.com/koba-jon/pytorch_cpp.git
$ cd pytorch_cpp

Path setting

Edit "CMakeLists.txt" to set the path of LibTorch.

$ vi utils/CMakeLists.txt

Earlier, we added "-DCMAKE_PREFIX_PATH=~/libtorch" to the cmake arguments when running it, but it is tedious to type this every time. Therefore, embed the path directly in "CMakeLists.txt".

**On the 4th line, set "$ENV{HOME}/libtorch" to the path of the "libtorch" directory you downloaded.** **If "libtorch" is directly under your home directory, you do not need to change anything.**

CMakeLists.txt


3: # LibTorch
4: set(LIBTORCH_DIR $ENV{HOME}/libtorch)
5: list(APPEND CMAKE_PREFIX_PATH ${LIBTORCH_DIR})

This way, the libtorch files are found automatically at build time. By the way, the cmake setup of my sample code is a combination of the "CMakeLists.txt" in "utils" and the "CMakeLists.txt" in each model directory. (This article was very easy to understand for learning how to use cmake.)

Build

This time, I will use a convolutional autoencoder, so move to that directory and build the source code by executing the following commands.

$ cd AE2d
$ mkdir build
$ cd build
$ cmake ..
$ make -j4
$ cd ..

Data set settings

This time, we will use the face image dataset "CelebA". Access the following site and click **"Downloads > In-The-Wild Images -> Img -> img_align_celeba.zip"** to download it. http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html

Also, since I want to split the original dataset into training data and test data, set that up. First, **replace \<dataset_path> below with the path of the downloaded CelebA dataset**, and execute the following commands.

$ cd datasets
$ ln -s <dataset_path> ./celebA_org

Then run the following commands to split the data into training and test sets. (By default, "training : test = 9 : 1".)

$ sudo apt install python3 python3-pip
$ pip3 install natsort
$ sh hold_out.sh
$ mv ./celebA/valid ./celebA/test
$ cd ..

Model training

Set up the file used for training.

$ vi scripts/train.sh

**You can set the number of epochs with "epochs", the image size with "size", the GPU number with "gpu_id" (it automatically switches to CPU mode if no GPU is available), and the number of image channels with "nc".** (For other options, add "--help" to the arguments or see "main.cpp".)

Since this is just a test of the sample code, let's change **"epochs" to "1" and "size" to "64"**. Incidentally, apart from the loss function, this setting is exactly the same as the training setting described in this article.

train.sh


#!/bin/bash

DATA='celebA'

# For this sample-code test, change "--epochs 300" to "--epochs 1" and "--size 256" to "--size 64".
./AE2d \
    --train true \
    --epochs 300 \
    --dataset ${DATA} \
    --size 256 \
    --batch_size 16 \
    --gpu_id 0 \
    --nc 3

Execute the following command to start training.

$ sh scripts/train.sh

Partway through training, it looks like this:

libtorch2.png

It is a problem if the loss value does not decrease, but as long as it keeps decreasing there should be no particular issue.

Model testing

Set up the file used for the test.

$ vi scripts/test.sh

Since this is just a test of the sample code, let's change **"size" to "64"**. Also, by setting "test_in_dir" to the path of the input images and "test_out_dir" to the path of the corresponding ground-truth output images, it can be used to evaluate performance on tasks such as denoising, but we will not change it this time.

test.sh


#!/bin/bash

DATA='celebA'

# For this sample-code test, change "--size 256" to "--size 64".
./AE2d \
    --test true \
    --dataset ${DATA} \
    --test_in_dir "test" \
    --test_out_dir "test" \
    --size 256 \
    --gpu_id 0 \
    --nc 3

Execute the following command to start the test.

$ sh scripts/test.sh

This is what the test looks like. The loss on the left is the error between the output image and the input image, and the loss on the right is the error between the output image and its ground truth.

libtorch3.png

Finally, the average loss is printed and the test finishes.

libtorch4.png

Release of source code

**The source code for this article is available on GitHub!** https://github.com/koba-jon/pytorch_cpp

I have implemented not only autoencoders but also **VAE, U-Net, pix2pix**, and more, so please give them a try! If you have any advice or suggestions for corrections, feel free to comment; it would be very helpful!

In addition, I have implemented tools that are useful for debugging, such as the loss graph shown below, so please take advantage of them! (They are created in the "checkpoints" directory that is generated during training.)

libtorch6.png

By the way, the flow of this process is as follows.

libtorch5.png

In conclusion

This time, I built a PyTorch C++ (LibTorch) environment on Ubuntu. I hope it will be useful to developers, researchers, and anyone who is interested in and studying deep learning.

Thank you for reading the article until the end! Let's have a good development and research life!
