This article is a guinea-pig report by the author, Yamada. Since the Windows 10 and Docker Desktop used here are preview versions, stable performance is not guaranteed. On the other hand, the more people who join the preview, the more feedback there will be. So I can't really recommend it, but please use this article as a reference and give it a try. (Yes, that's a contradiction.)
Docker on WSL2 could already recognize the GPU before this (reference article: "It seems that you can finally run WSL2 + docker + GPU, so give it a try"). However, that required installing Docker and the NVIDIA Container Toolkit inside the Linux distribution running on WSL2, which was a bit of a hassle, so I had been looking for a way to use the GPU with Docker Desktop. Then I learned that on December 21, 2020, a preview version of Docker Desktop with GPU support on WSL2 was announced on the official Docker Desktop blog.
According to the official procedure, the steps are as follows:
- Install a Windows 10 Insider Preview build from the Dev channel
- Install the preview version of Docker Desktop
- Install the beta NVIDIA driver that supports WSL 2 GPU paravirtualization
- Enable the WSL 2 backend in Docker Desktop
The procedure looked very easy, so I decided to give it a try.
The following steps are also required in the conventional procedure mentioned above for getting Docker on WSL2 to recognize the GPU:
- Install the preview version of Docker Desktop
- Install the beta NVIDIA driver that supports WSL 2 GPU paravirtualization
Many predecessors have written articles about these steps, so please refer to those as well (reference article: "Waiting for CUDA on WSL 2").
From Windows Settings, go to Update & Security → Windows Insider Program. There are Dev, Beta, and Release Preview channels; select the **Dev Channel** and install the preview build. (Installation takes a while.)
Once it is installed, make sure the version and OS build have changed. (OS build as of 12/24 12:00.) According to the official blog, "To get started with Docker Desktop with Nvidia GPU support on WSL 2, you will need to download our technical preview build from here."
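If you want to check the build from the command line instead of the Settings screen, something like the following should work from a WSL2 terminal (just a sanity check; cmd.exe is reachable from WSL2 through Windows interop, and your build number will of course differ from mine):

cmd.exe /c ver   # prints the Windows version string, e.g. Microsoft Windows [Version 10.0.2xxxx.xxxx]
uname -r         # the WSL2 kernel version, e.g. 5.x.x-microsoft-standard-WSL2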
Click the *here* link to download and install it.
Install the NVIDIA driver from the link below. https://developer.nvidia.com/cuda/wsl Select the driver that matches the GPU you are using. I chose the one on the left because mine is a (humble) GTX 1070.
Note that if you have not registered for the (free) NVIDIA Developer Program Membership, you will not be able to download the driver until you register and log in.
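To confirm the beta driver actually went in, you can check it on the Windows side before touching Docker at all (a quick check; the driver installer normally places nvidia-smi.exe on the Windows PATH, so run this from PowerShell or Command Prompt rather than from WSL):

nvidia-smi   # should list your GPU (GeForce GTX 1070 in my case) together with the installed driver version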
In Docker Desktop's Settings, check **Use the WSL 2 based engine**.
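If you want to be sure the WSL2 terminal is really talking to Docker Desktop's engine, docker version and docker info are enough for a quick check (the exact version strings depend on the preview build you installed):

docker version                            # both the Client and Server sections should respond
docker info | grep -i "operating system"  # should report Docker Desktop when the WSL 2 backend is active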
That's all for the setup! Yes, that's it! Easy, right?
I imagine many people doubt it will really work that easily, so let's test it right away. Enter the following command in the WSL2 terminal:
docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
The following is the output
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
-fullscreen (run n-body simulation in fullscreen mode)
-fp64 (use double precision floating point values for simulation)
-hostmem (stores simulation data in host memory)
-benchmark (run benchmark to measure performance)
-numbodies=<N> (number of bodies (>= 1) to run in simulation)
-device=<d> (where d=0,1,2.... for the CUDA device to use)
-numdevices=<i> (where i=(number of CUDA devices > 0) to use for simulation)
-compare (compares simulation results running once on the default GPU and once on the CPU)
-cpu (run n-body simulation on the CPU)
-tipsy=<file.bin> (load a tipsy model file for simulation)
NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.
> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
GPU Device 0: "GeForce GTX 1070" with compute capability 6.1
> Compute 6.1 CUDA device: [GeForce GTX 1070]
15360 bodies, total time for 10 iterations: 11.862 ms
= 198.895 billion interactions per second
= 3977.901 single-precision GFLOP/s at 20 flops per interaction
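By the way, the help text at the top of the output lists the available flags, so you can scale the benchmark up if you want a heavier run. For example, reusing the -numbodies flag shown there (any size that fits in your GPU memory should be fine):

docker run --gpus all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark -numbodies=65536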
The GPU is recognized properly! Now let's check TensorFlow as well.
docker run -it --rm \
--gpus all \
--user root \
--name tensorflow \
-v $(pwd):/work/ \
-w /work \
tensorflow/tensorflow:latest-gpu-py3
Note that the GPU will not be available unless you pass --gpus all.
Start python inside the container and enter the following commands:
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
The following is the output
root@107eb1201f59:/work# python
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from tensorflow.python.client import device_lib
>>> device_lib.list_local_devices()
2020-12-24 02:47:13.959067: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libnvinfer.so.6
~~~(Omission)~~~
2020-12-24 02:47:15.627128: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1241] Created TensorFlow device (/device:GPU:0 with 6835 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0, compute capability: 6.1)
[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 7571817354991796130
, name: "/device:XLA_CPU:0"
device_type: "XLA_CPU"
memory_limit: 17179869184
locality {
}
incarnation: 1853203790404720057
physical_device_desc: "device: XLA_CPU device"
, name: "/device:XLA_GPU:0"
device_type: "XLA_GPU"
memory_limit: 17179869184
locality {
}
incarnation: 15262914756957364681
physical_device_desc: "device: XLA_GPU device"
, name: "/device:GPU:0"
device_type: "GPU"
memory_limit: 7167590400
locality {
bus_id: 1
links {
}
}
incarnation: 16971010008119807580
physical_device_desc: "device: 0, name: GeForce GTX 1070, pci bus id: 0000:02:00.0, compute capability: 6.1"
]
>>>
As device: 0, name: GeForce GTX 1070 shows, the GPU is recognized properly.
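By the way, if you just want a one-shot yes/no check without keeping an interactive container around, a one-liner like the following should also work (a sketch; it assumes the TensorFlow inside the image provides tf.config.list_physical_devices, which is the case from TF 2.1 onward):

docker run --rm --gpus all tensorflow/tensorflow:latest-gpu-py3 python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

If the GPU is visible, this prints a list containing /physical_device:GPU:0; otherwise it prints an empty list.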
With this, you can have a comfortable development life even on Windows! Awesome!
Reference articles:
- It seems that you can finally run WSL2 + docker + GPU, so give it a try
- Waiting for CUDA on WSL 2
- Check if GPU can be recognized from TensorFlow with 2-line code