If you are studying machine learning, a GPU lets you experiment much faster and more efficiently, but buying one is expensive...
There is Google Colaboratory, but I just can't get used to Jupyter Notebooks and want to work in my usual environment (a little selfish, I know).
That's when I found this article!
Go to ngrok and open Login → Authentication → Your Authtoken. Your auth token is displayed on that screen, so copy it.
Open Google Colab, go to Runtime → Change runtime type, and set the hardware accelerator to GPU.
Paste the following code into a code cell and run it.
Before running, replace `YOUR AUTHTOKEN` on line 13
with the auth token you copied earlier.
# Install useful stuff
! apt install --yes ssh screen nano htop ranger git > /dev/null
# SSH setting
! echo "root:password" | chpasswd
! echo "PasswordAuthentication yes" > /etc/ssh/sshd_config
! echo "PermitUserEnvironment yes" >> /etc/ssh/sshd_config
! echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
! service ssh restart > /dev/null
# Download ngrok
! wget -q -c -nc https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
! unzip -qq -n ngrok-stable-linux-amd64.zip
# Run ngrok
authtoken = "YOUR AUTHTOKEN"
get_ipython().system_raw('./ngrok authtoken $authtoken && ./ngrok tcp 22 &')
! sleep 3
# Get the address for SSH
import requests
from re import sub
r = requests.get('http://localhost:4040/api/tunnels')
str_ssh = r.json()['tunnels'][0]['public_url']
str_ssh = sub("tcp://", "", str_ssh)
str_ssh = sub(":", " -p ", str_ssh)
str_ssh = "ssh root@" + str_ssh
print(str_ssh)
If it runs successfully, output like the following is displayed.
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
ssh root@X.tcp.ngrok.io -p XXXXX
Try connecting to `ssh root@X.tcp.ngrok.io -p XXXXX`
using VS Code's Remote-SSH
feature. You will be asked for a password; enter the `password`
specified above in `echo "root:password" | chpasswd`.
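If you reconnect often, it can also be handy to register the host in `~/.ssh/config` so Remote-SSH can pick it up by name. The host alias, address, and port below are placeholders; substitute the address and port that ngrok actually printed:

```
Host colab
    HostName X.tcp.ngrok.io
    User root
    Port XXXXX
```

Note that ngrok assigns a new address and port each time the tunnel is recreated, so this entry needs updating for each new Colab session.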
Create a `main.py` or similar and check that the GPU is available with the following code:
import torch
print(torch.cuda.is_available())
# True
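As a sketch of the usual next step (not from the original article): select the device once and move tensors or models onto it, so the same `main.py` runs both with and without a GPU.

```python
import torch

# Use the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Move data (and, likewise, a model) to the chosen device
x = torch.randn(8, 3, 32, 32).to(device)
print(x.device)  # "cuda:0" on a Colab GPU runtime, "cpu" otherwise
```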
I was able to use it!
I verified it with a CIFAR10 classification task. The code I actually used is here.
The following environments were compared, each running 5 epochs.
| | With GPU | No GPU |
|---|---|---|
| OS | Ubuntu 18.04.3 LTS | macOS Catalina v.10.15.5 |
| torch | 1.4.0 | 1.5.1 |
It was lazy of me not to match the torch versions.
| | time |
|---|---|
| With GPU | 256.162 |
| No GPU | 320.412 |
Training with the GPU was about 20% faster!
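For reference, the "about 20%" figure is just the relative reduction between the two table values (assuming they are training times, where smaller is better):

```python
gpu_time = 256.162  # with GPU, from the table
cpu_time = 320.412  # without GPU, from the table

# Relative time saved by the GPU run
reduction = (cpu_time - gpu_time) / cpu_time
print(f"{reduction:.1%}")  # → 20.1%
```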
In Colab, environments such as PyTorch and Keras come pre-installed, which is also convenient!