Windows 10 + WSL2 + Docker Desktop + docker-compose + GPU (NVIDIA) + JupyterLab environment construction

Introduction

This article is a report on what I did to build the environment described in the title. Because the Windows 10 build, Docker Desktop, and docker-compose used here are preview versions, I cannot say how stable the setup is.

I was able to build this environment by gathering information both inside and outside Qiita, so this article describes the procedure I followed.

**Note: the contents are as of January 10, 2021.** Docker Desktop, WSL2, and the NVIDIA GPU stack are updated quickly, so please keep that in mind.

**Added on January 20, 2021:** docker-compose 1.28.0 has been released. The docker-compose.yml in this article works as-is with it. (GitHub release page)

I could not have built this environment without Qiita. I hope this article is useful to someone in turn.

Configuration

OS: Windows 10 Pro Insider Preview Build 21286.1000
CPU: Ryzen 5900X
GPU: GeForce RTX 3070
WSL2: Ubuntu 20.04 LTS
Docker Desktop: Technical Preview
Docker: version 20.10.0
docker-compose: 1.28.0

Construction procedure

  1. Install the Windows 10 Insider Preview build
  2. Install WSL2
  3. Install the preview version of Docker Desktop
  4. Install the beta version of the NVIDIA driver
  5. Pull the NGC container image
  6. Start the container, install the required packages, and then create an image
  7. Check that the GPU is enabled in the created image
  8. Create a docker-compose.yml file and launch a container with docker-compose

Step 1: Install Windows 10 Insider Preview Build

From Windows Settings, select Update & Security > Windows Insider Program, then choose "Dev Channel" from the Dev Channel / Beta / Release Preview options.

Step 2: Install WSL2

Install WSL2 by referring to the articles listed in the references for step 2.

When you run the following command in Windows PowerShell:

PowerShell


wsl --set-default-version 2

Kernel component updates are required to run WSL 2. See https://aka.ms/wsl2kernel for more information

If the message above is displayed, refer to the sections from "Update kernel components" onward in the reference article "WSL2 introduction | Screenshots from Windows Update to setting WSL2 as the default".
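
Once Ubuntu 20.04 is installed and set to version 2, a quick sanity check is to look at the kernel string from inside the distro; a minimal sketch (the exact version will differ on your machine):

bash


#Run inside the Ubuntu (WSL2) terminal
$uname -r
#A WSL2 distro reports a Microsoft kernel, for example:
5.4.72-microsoft-standard-WSL2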

Step 3: Install a preview version of Docker Desktop

The official Docker blog states that GPU support is available:

To get started with Docker Desktop with Nvidia GPU support on WSL 2, you will need to download our technical preview build from here.

Install the preview version of Docker Desktop from here

Step 4: Install the beta version of the Nvidia driver

Install the beta version of the NVIDIA driver from the link below https://developer.nvidia.com/cuda/wsl/download

The download page displays the following notice:

NVIDIA Developer Program Membership Required

The file or page you have requested requires membership in the NVIDIA Developer Program. Please either log in or join the program to access this material. Learn more about the benefits of the NVIDIA Developer Program.

If you are already registered with the NVIDIA Developer Program, log in, then download and install the driver.
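
After installing the driver (and with the preview Docker Desktop from step 3 running), you can optionally check that containers can see the GPU before going further. The Docker blog linked in the references uses the CUDA n-body benchmark sample for this; a sketch, assuming the nvcr.io/nvidia/k8s/cuda-sample:nbody image is still available:

bash


#Run in the WSL2 terminal; the benchmark prints the detected GPU if everything is wired up
$docker run --rm --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark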

Step 5: Pull the NGC container image

Any image could be used, but NVIDIA provides its own Docker images (NGC containers), so I pull one of those. You can choose TensorFlow or PyTorch as the framework.

From the link below, select TensorFlow and look for the latest Docker image tag: https://www.nvidia.com/ja-jp/gpu-cloud/containers/

Pull the latest image with the following command

bash


#For TensorFlow1
$docker pull nvcr.io/nvidia/tensorflow:20.12-tf1-py3
#                                     ↑(20.12) ← I think this will change for each ver.
#For TensorFlow2
$docker pull nvcr.io/nvidia/tensorflow:20.12-tf2-py3

Step 6: Create a Docker image after launching the container and installing the required packages

TensorFlow and Keras are already installed in the image, but packages such as pandas, matplotlib, and seaborn are not, so launch the container, install the required packages, and then create a new image.

Launch a container from the image pulled earlier in the WSL2 terminal, then install the required packages.

bash


#Execute the following command in the WSL2 terminal
$docker run -it nvcr.io/nvidia/tensorflow:20.12-tf2-py3 bash

bash


#-------In the container--------------
================
== TensorFlow ==
================

NVIDIA Release 20.12-tf2 (build 18110405)
TensorFlow Version 2.3.1

Container image Copyright (c) 2020, NVIDIA CORPORATION.  All rights reserved.
Copyright 2017-2020 The TensorFlow Authors.  All rights reserved.

#~~~Omission~~~~

root@9f1a8350d911:/workspace# pip install pandas matplotlib seaborn   #install whatever packages you need
#Stop the container once the required packages have been installed
root@9f1a8350d911:/workspace#exit

bash


#-------WSL2 terminal from here--------------
#Get the container ID with the ps command
$docker ps -a
CONTAINER ID   IMAGE                                     COMMAND                  CREATED         STATUS                     PORTS     NAMES
5dc0ae8981bf   nvcr.io/nvidia/tensorflow:20.12-tf2-py3   "/usr/local/bin/nvid…"   3 minutes ago   Exited (0) 3 seconds ago             confident_noether

#After confirming the container ID, create an image with the commit command
#(If you will push the image to Docker Hub etc., the image name must match the repository name; if you only use it locally, any name is fine.)
$docker commit 5dc0ae8981bf hogehoge:latest
#If you can create the image without any problem, the following screen will be output.
sha256:8461579f0c2adf2a052b2b30625df0f48d81d3ab523635eb97b360f03096b4

#Check images with docker images command
#If there is no problem, the following output
$docker images
REPOSITORY                  TAG             IMAGE ID       CREATED        SIZE
hogehoge                    latest          9d3ea0900f00   29 hours ago   13.4GB
nvcr.io/nvidia/tensorflow   20.12-tf2-py3   21d1065bfe8f   5 weeks ago    12.2GB
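
docker commit works, but it simply freezes whatever state the container happens to be in. If you prefer a reproducible build, the same image can be produced from a Dockerfile instead; a minimal sketch (the package list is only an example, adjust it to what you need):

bash


#Run in the WSL2 terminal, in any working directory
#Create a Dockerfile with the following two lines
$cat > Dockerfile <<'EOF'
FROM nvcr.io/nvidia/tensorflow:20.12-tf2-py3
RUN pip install pandas matplotlib seaborn
EOF
#Build the image under the same name used in this article
$docker build -t hogehoge:latest .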

Step 7: Create a container from the created image and check whether the GPU is enabled

**Note: to enable the GPU, the "--gpus all" option must be passed to the docker run command.**

bash


#Execute the following command in the WSL2 terminal
#--rm is an option to automatically delete when the container is stopped.
docker run -it --rm --gpus all -p 8888:8888 hogehoge:latest jupyter lab


#---From here in the container---
================
== TensorFlow ==
================

NVIDIA Release 20.12-tf2 (build 18110405)
TensorFlow Version 2.3.1

#~~~Omission~~~
 To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
    Or copy and paste one of these URLs:
        http://hostname:8888/?token=[token]
     or http://127.0.0.1:8888/?token=[token]

#To exit, press Ctrl+C in the WSL terminal

Since the URL is output as above, access http://127.0.0.1:8888/?token=[token] with a web browser (I use Chrome). Once you can connect to JupyterLab, run the following code in a new notebook. If an entry with device_type: "GPU" is displayed, the GPU is enabled.

python


from tensorflow.python.client import device_lib
device_lib.list_local_devices()

#If GPU is enabled, output as below
[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 16078152362305136132,
 name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 6904616874393552950
 physical_device_desc: "device: XLA_CPU device",
 name: "/device:XLA_GPU:0"
 device_type: "XLA_GPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 13161252575635162092
 physical_device_desc: "device: XLA_GPU device",
 name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 5742592000
 locality {
   bus_id: 1
   links {
   }
 }
 incarnation: 2330595400288827072
 physical_device_desc: "device: 0, name: GeForce RTX 3070, pci bus id: 0000:2b:00.0, compute capability: 8.6"]

#--If the --gpus option is not given, or if something else goes wrong and the GPU is not recognized, the output is as follows.
[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 68886281224950509,
 name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 13575954317913527773
 physical_device_desc: "device: XLA_CPU device"]
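
If you only want a quick check from the WSL2 terminal without opening a notebook, the same device listing can be run as a one-liner against the image; a sketch using the image name from this article:

bash


#Run in the WSL2 terminal
$docker run --rm --gpus all hogehoge:latest \
    python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
#If the GPU is visible, an entry with device_type: "GPU" appears in the output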

Step 8: Use docker-compose

Up to step 7, creating the container with the docker run command works fine, but typing such a long command every time is tedious, so I would like to use docker-compose.

The stable version of docker-compose does not support --gpus all; the common workaround is to use runtime: "nvidia" instead.

Reference articles: How to use GPU with docker-compose, Run GPU in docker-compose (as of February 2, 2020), and the related issues on GitHub.

~~When I was at a loss, I learned that the preview version of docker-compose supports GPUs, so I went that way.~~ ~~GitHub release page~~ ~~↑ The version there is 1.28.0-rc1, but the latest preview version is 1.28.0-rc2, so install that.~~

For how to upgrade docker-compose, see the official docker-compose documentation.

bash


#Do the following at the WSL terminal
$sudo curl -L "https://github.com/docker/compose/releases/download/1.28.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

#If you can download it, do the following
$sudo chmod +x /usr/local/bin/docker-compose

#Check the docker-compose version
$docker-compose --version
#If there is no problem, the installed version is output (the output below is from when the rc2 preview was installed)
docker-compose version 1.28.0-rc2, build f1e3c356

After installing the preview version, create docker-compose.yml. See also: Github Enable GPU access with Device Requests

docker-compose.yml


version: '3.7'
services:
  jupyterlab:
    image: hogehoge:latest #List the image name you created
    deploy:
      resources:
        reservations:
          devices:
          - 'driver': 'nvidia'
            'capabilities': ['gpu']
    container_name: jupyterlab
    ports:
      - '8888:8888'
    #↓ Mount a host directory into the container (host path : container path)
    #./ is the current directory, so move to the directory containing docker-compose.yml in the WSL2 terminal before running
    volumes:
      - './ds_python:/workspace' #Change the folder name as appropriate
    command: jupyter lab
    tty: true
    stdin_open: true
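
Before starting anything, it may be worth letting docker-compose parse the file once, since YAML indentation mistakes are easy to make; a sketch, run from the directory containing docker-compose.yml:

bash


#Run in the WSL2 terminal, in the directory containing docker-compose.yml
$docker-compose config
#If the file is valid, the fully resolved configuration is printed; otherwise an error is shown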

Once docker-compose.yml is created, set up the following directory structure:

./any folder
├── docker-compose.yml    (placed directly under the folder)
└── ds_python             (put the files you want to bring into the container under ds_python)
    ├── ~~~~.py
    └── ~~~~.ipynb

Once the directory is configured, do the following in the WSL terminal

bash


#Do the following at the WSL terminal
#Move to the directory containing docker-compose.yml
$cd /mnt/c/~

#Once in that directory, start the container with docker-compose; -d runs it in the background
$docker-compose up -d
#If there is no problem, the following is output
Creating network "docker_default" with the default driver
Creating jupyterlab ... done

#The JupyterLab URL is not shown when starting in the background, so check the logs
$ docker logs jupyterlab
#The same output as with the docker run command appears, so
#access http://127.0.0.1:8888/?token=[token] in a web browser (I use Chrome)

#To remove the container, shut it down with docker-compose down
$docker-compose down

After accessing JupyterLab, if the GPU device is displayed using the same check as with the docker run command, the setup is complete.
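
You can also run the check from the WSL2 terminal instead of a notebook by exec-ing into the running container; a sketch, assuming the container_name jupyterlab from the yml above:

bash


#Run in the WSL2 terminal while the container started by docker-compose is running
$docker exec jupyterlab \
    python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"
#An entry with device_type: "GPU" means the compose file passed the GPU through correctly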

Conclusion

I researched a lot to put this together, but this area moves so quickly that the build procedure will probably become obsolete before long.

Reference articles

**Reference articles for steps 1, 3, and 4:** Comfortable development life with WSL2 + Docker Desktop + GPU; https://www.docker.com/blog/wsl-2-gpu-support-is-here/

**Reference articles for step 2:** Use WSL 2 + Docker on Windows 10 Home (https://qiita.com/KoKeCross/items/a6365af2594a102a817b); WSL2 introduction | Screenshots from Windows Update to setting WSL2 as the default

**Reference article for step 5:** I tried NGC (NVIDIA GPU Cloud)! (outside Qiita)

**Reference articles for step 6:** Frequently used Docker commands; Creating, starting, and stopping Docker containers; Create an image from a working Docker container to make porting easier

**Reference articles for step 7:** [Docker] Create a JupyterLab (Python) environment in 3 minutes!; Build a Jupyter environment in 5 minutes using Docker; Check whether TensorFlow can recognize the GPU with 2 lines of code (outside Qiita)

**Reference articles for step 8:** How to use GPU with docker-compose; Run GPU in docker-compose (as of February 2, 2020); official docker-compose documentation; Launch Jupyter with docker-compose, part 1; How to write docker-compose.yml; Docker Compose, a configuration management tool that automatically launches multiple Docker containers (Let's use the latest Docker features: Part 7)
