Connect your GPU environment to Google Colaboratory as a local runtime with Windows 10 Docker Desktop (WSL2 based) [Preview release]

What I want to do

  1. I want to avoid Google Colaboratory's GPU usage limits :cry: :cold_sweat: :sob:
  2. I want to try Docker Desktop's WSL 2 GPU support :whale2:

What's good about this?

  1. It works on WSL2, so it also works on Windows 10 Home :blush:
  2. Colab usage limits no longer interrupt training :blush:
  3. You can use the Colab ecosystem even with a local GPU environment :blush: (Colab's GitHub integration is very convenient)

Approach

  1. Prepare a container with a GPU environment on Windows 10 Docker Desktop (WSL2 based)
  2. Set up port forwarding
  3. Connect the virtual environment as the local runtime of Google Colaboratory

Verification environment

  1. OS: Windows 10 Pro (OS build 20279.1)
  2. GPU: RTX 2070 Super
  3. WSL2 (Ubuntu 20.04 LTS)
  4. Docker Desktop (developer preview build of Docker Desktop for WSL 2 supporting GPU)
  5. CUDA (NVIDIA Drivers for CUDA on WSL, including DirectML Support)

Construction procedure

I mainly followed this Docker blog post.

Environment

  1. Get the technical preview build of Docker Desktop
  2. Join the Windows Insider Program (Dev channel) and update to the latest OS build (as of December 26, 2020, ver. 20279.1 is the latest)
  3. Install WSL2
  4. Install NVIDIA's CUDA on WSL beta driver with WSL 2 GPU support
  5. Pull the CUDA image with docker pull on WSL2
$ docker pull nvidia/cuda
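Before pulling images, it can save time to confirm the docker CLI is actually reachable from your shell (Docker Desktop's WSL integration must be enabled for the distro). A minimal guarded check, not specific to this setup:

```shell
# Print the Docker version if the CLI is on PATH; otherwise hint at the
# usual cause (Docker Desktop's WSL integration is off for this distro).
if command -v docker >/dev/null 2>&1; then
    docker --version
else
    echo "docker CLI not found; check Docker Desktop's WSL integration settings"
fi
```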

Check if Docker can recognize the GPU

$ docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark

If Docker Desktop (WSL2 based) recognizes the GPU, you will see output like this:

PS C:\Users\*****> docker run --rm -it --gpus=all nvcr.io/nvidia/k8s/cuda-sample:nbody nbody -gpu -benchmark
Run "nbody -benchmark [-numbodies=<numBodies>]" to measure performance.
        -fullscreen       (run n-body simulation in fullscreen mode)
        -fp64             (use double precision floating point values for simulation)
        -hostmem          (stores simulation data in host memory)
        -benchmark        (run benchmark to measure performance)
        -numbodies=<N>    (number of bodies (>= 1) to run in simulation)
        -device=<d>       (where d=0,1,2.... for the CUDA device to use)
        -numdevices=<i>   (where i=(number of CUDA devices > 0) to use for simulation)
        -compare          (compares simulation results running once on the default GPU and once on the CPU)
        -cpu              (run n-body simulation on the CPU)
        -tipsy=<file.bin> (load a tipsy model file for simulation)

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

> Windowed mode
> Simulation data stored in video memory
> Single precision floating point simulation
> 1 Devices used for simulation
MapSMtoCores for SM 7.5 is undefined.  Default to use 64 Cores/SM
GPU Device 0: "GeForce RTX 2070 SUPER" with compute capability 7.5

> Compute 7.5 CUDA device: [GeForce RTX 2070 SUPER]
40960 bodies, total time for 10 iterations: 59.828 ms
= 280.426 billion interactions per second
= 5608.513 single-precision GFLOP/s at 20 flops per interaction

Build a machine learning environment

Build your favorite environment and install Jupyter Notebook and the Colab extension jupyter_http_over_ws. This time, I use a [TensorFlow image](https://hub.docker.com/r/tensorflow/tensorflow/tags?page=1&ordering=last_updated) from DockerHub.

$ docker pull tensorflow/tensorflow:1.15.4-gpu-py3-jupyter

The command above gets you an environment with the GPU build of TensorFlow 1.15.4, Python 3, and Jupyter Notebook all included. Pull whichever image matches the environment you need; for TensorFlow, you can pick an image from DockerHub tensorflow/tensorflow. Choosing an image tagged like tensorflow/tensorflow:***-jupyter saves you the trouble of installing Jupyter Notebook yourself.
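For the record, the TensorFlow `-jupyter` images already ship with jupyter_http_over_ws (the server log later prints that the extension initialized). If you build an environment from a different base image, Google Colab's local-runtime instructions install and enable the extension roughly like this (commands per Colab's docs; run inside the container):

```shell
# Install the websocket bridge that Colab uses to reach a local Jupyter
# server, then enable it as a server extension (only needed if your image
# does not already include it).
pip install jupyter_http_over_ws
jupyter serverextension enable --py jupyter_http_over_ws
```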

Launching a virtual environment

Launch the image pulled earlier. To let **Google Colaboratory** use the virtual environment as its local runtime, I added Jupyter Notebook options to the run command. Reference: stackoverflow

$ docker run -it --rm --gpus=all -p 8888:8888 tensorflow/tensorflow:1.15.4-gpu-py3-jupyter \
jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root \
--NotebookApp.allow_origin='https://colab.research.google.com'

Execution result: use the http://127.0.0.1:8888/?token=**** URL shown below to connect **Colaboratory** as the local runtime.

PS C:\Users\****> docker run -it --rm --gpus=all -p 8888:8888 tensorflow/tensorflow:1.15.4-gpu-py3-jupyter jupyter notebook --notebook-dir=/tf --ip 0.0.0.0 --no-browser --allow-root --NotebookApp.allow_origin='https://colab.research.google.com'
[I 16:24:42.162 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
jupyter_http_over_ws extension initialized. Listening on /http_over_websocket
[I 16:24:42.372 NotebookApp] Serving notebooks from local directory: /tf
[I 16:24:42.372 NotebookApp] Jupyter Notebook 6.1.4 is running at:
[I 16:24:42.372 NotebookApp] http://42009a4c9bbb:8888/?token=*************************************************
[I 16:24:42.372 NotebookApp]  or http://127.0.0.1:8888/?token=*************************************************
[I 16:24:42.372 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[C 16:24:42.376 NotebookApp]

    To access the notebook, open this file in a browser:
        file:///root/.local/share/jupyter/runtime/nbserver-1-open.html
    Or copy and paste one of these URLs:
        http://42009a4c9bbb:8888/?token=*************************************************
     or http://127.0.0.1:8888/?token=******************************************************
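One small practical note: Colab's connect dialog wants the localhost form of the tokenized URL rather than the 127.0.0.1 form. A one-line sketch of the rewrite, with a placeholder token:

```shell
# Placeholder token for illustration; use the token your server printed.
url="http://127.0.0.1:8888/?token=abcd1234"
# Swap the loopback IP for the hostname form that Colab's dialog accepts.
echo "$url" | sed 's/127\.0\.0\.1/localhost/'
# prints http://localhost:8888/?token=abcd1234
```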

Connect to Colaboratory

  1. Go to Google Colaboratory
  2. From Connect, select Connect to local runtime
  3. Take the link http://127.0.0.1:8888/?token=**** obtained earlier, change it to http://localhost:8888/?token=**** (replace 127.0.0.1 with localhost), and enter it
  4. The connection is complete!!
  5. Run the following in Colab to check whether your machine learning library recognizes the GPU (example for TensorFlow):
from tensorflow.python.client import device_lib
device_lib.list_local_devices()

If the GPU is recognized as shown below, you're done!!

[name: "/device:CPU:0"
 device_type: "CPU"
 memory_limit: 268435456
 locality {
 }
 incarnation: 265271654709027711,
 name: "/device:XLA_CPU:0"
 device_type: "XLA_CPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 12989474002923935858
 physical_device_desc: "device: XLA_CPU device",
 name: "/device:XLA_GPU:0"
 device_type: "XLA_GPU"
 memory_limit: 17179869184
 locality {
 }
 incarnation: 12219956629618833082
 physical_device_desc: "device: XLA_GPU device",
 name: "/device:GPU:0"
 device_type: "GPU"
 memory_limit: 7125876736
 locality {
   bus_id: 1
   links {
   }
 }
 incarnation: 1423521064690955886
 physical_device_desc: "device: 0, name: GeForce RTX 2070 SUPER, pci bus id: 0000:01:00.0, compute capability: 7.5"]
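If you only need a yes/no answer instead of the full device list, TF 1.x also exposes `tf.test.is_gpu_available()` (deprecated in TF 2.x). A quick check, run inside the container:

```shell
# Ask TensorFlow 1.x directly whether a GPU is usable; prints True or False.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```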

In closing

Recently (as of December 26, 2020), the Windows 10 x WSL2 x Docker x CUDA combination has started to work, so I tried applying it to Google Colaboratory's local runtime connection. I expect these features to be officially released before long, at which point the environment construction section above should become unnecessary.
