PyTorch 1.7 now supports the RTX 3090, so here is a memo of how I set up the environment.
Hardware
GPU: RTX 3090 ← newly installed
Software
Goal: I want torch.cuda.is_available() to return True...!
(If you don't need this step, skip it.)
I was originally using a GTX 1080 Ti, so I swapped the hardware first. I replaced the card without uninstalling the old driver (I wonder if that was okay...). For reference, the environment from the GTX 1080 Ti days is shown below.
When I booted after the swap, the screen resolution was wrong. Well, with the old driver still installed, it's no surprise something misbehaves.
(If you don't need this step, skip it.)
Remove the leftover NVIDIA driver, CUDA, and cuDNN with the following commands in a terminal.
$ sudo apt remove --purge nvidia*
$ sudo apt remove --purge cuda*
$ sudo apt remove --purge libcudnn*
$ sudo apt remove --purge libnvidia*
$ sudo apt autoremove
Next, remove PyTorch and related packages. I had installed them with pip, so I typed the following commands.
$ pip uninstall torch
$ pip uninstall torchvision
Just in case, let's restart the PC here.
$ sudo reboot
Use the following command to find out which nvidia-driver is right for your RTX 3090.
$ sudo ubuntu-drivers devices
Since nvidia-driver-455 was recommended, I installed it by typing the following command.
$ sudo apt install nvidia-driver-455  # as of 2021/01/07
After the installation is complete, let's reboot here.
$ sudo reboot
The resolution should be back to normal now. Check the driver status with nvidia-smi.
$ nvidia-smi
When I checked with nvidia-smi, the upper right says CUDA Version: 11.1. But PyTorch 1.7 supports CUDA 11.0.
I wasn't entirely sure about this, but decided to install CUDA 11.0. On the CUDA 11.0 archive site (https://developer.nvidia.com/cuda-11.0-update1-download-archive) I chose Linux > x86_64 > Ubuntu > 18.04 > deb (network), then entered the commands listed there.
### Copy and paste the commands from the site above ###
$ wget https://developer.download.nvidia.com/....
...
$ sudo apt-get update
### up to here ###
Watch out for the command on the last line. If you don't pin the version, the latest CUDA will be installed, so specify cuda-11-0 as shown below.
$ sudo apt-get -y install cuda-11-0
If the toolkit is not on your PATH, add the following lines to the end of ~/.bashrc.
$ sudo vi ~/.bashrc
~/.bashrc
## CUDA
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"
In vim, paste with [Ctrl]+[Shift]+[v], return to NORMAL mode with the [Esc] key, then save and quit with :wq.
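Once the lines are saved, you can also apply and verify them in your current shell right away. A quick sketch, assuming CUDA was installed to the default /usr/local/cuda prefix:

```shell
# Add the CUDA toolkit to the current shell's search paths
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"

# Confirm the new entries were picked up
echo "$PATH" | grep -q "/usr/local/cuda/bin" && echo "PATH OK"
echo "$LD_LIBRARY_PATH" | grep -q "/usr/local/cuda/lib64" && echo "LD_LIBRARY_PATH OK"
```

This only affects the current shell; the ~/.bashrc entries take care of future terminals.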
CUDA 11.0 is now installed. Reload the shell settings with source ~/.bashrc (or open a new terminal), then check with the following command.
$ nvcc -V
Select the newest release for CUDA 11.0 from the cuDNN Archive site; I chose v8.0.4. From that release, I downloaded the Ubuntu 18.04 .deb packages that are installed below. They end up in the ~/Downloads directory, so move there:
$ cd Downloads
Then, type the commands in the following order to install cuDNN.
$ sudo dpkg -i libcudnn8_8.0.4.30-1+cuda11.0_amd64.deb
$ sudo dpkg -i libcudnn8-dev_8.0.4.30-1+cuda11.0_amd64.deb
$ sudo dpkg -i libcudnn8-samples_8.0.4.30-1+cuda11.0_amd64.deb
Just in case, let's restart the PC here.
$ sudo reboot
Install PyTorch with the command from the official PyTorch website. As of January 7, 2021, the command was as follows.
$ pip install torch==1.7.1+cu110 torchvision==0.8.2+cu110 torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
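As a side note on those version pins: the +cu110 suffix is how the PyTorch wheels encode which CUDA build they target, so both pins above request CUDA 11.0 builds. A tiny sketch of checking such pins with pure string handling (the helper name cuda_tag is my own, not a PyTorch or pip API):

```python
def cuda_tag(version: str) -> str:
    """Return the local-version suffix (e.g. 'cu110') of a pip version pin, or '' if none."""
    return version.split("+", 1)[1] if "+" in version else ""

# The wheels pinned in the install command above both target CUDA 11.0
for pkg, ver in [("torch", "1.7.1+cu110"), ("torchvision", "0.8.2+cu110")]:
    assert cuda_tag(ver) == "cu110", pkg
print("both wheels target CUDA 11.0")
```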
$ python
Python 3.6.7 (default, Sep 7 2020, 17:00:49)
[GCC 7.5.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import torch
>>> torch.cuda.is_available()
True
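If you want a slightly fuller check than the interactive session above, a small script like this works too. A sketch only; cuda_status is a helper name of my own, not a PyTorch API:

```python
def cuda_status():
    """Return a short status string for the local PyTorch/CUDA setup."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if torch.cuda.is_available():
        # get_device_name(0) should report the RTX 3090 on this machine
        return f"torch {torch.__version__}, device: {torch.cuda.get_device_name(0)}"
    return f"torch {torch.__version__}, CUDA not visible"

print(cuda_status())
```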
$ nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:09_PDT_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.TC445_37.28845127_0
$ nvidia-smi
Thu Jan 7 23:35:29 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04 Driver Version: 460.27.04 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce RTX 3090 On | 00000000:65:00.0 On | N/A |
| 51% 67C P2 295W / 350W | 5855MiB / 24267MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
Why is it showing CUDA 11.2? As I understand it, the CUDA Version that nvidia-smi reports is the highest version the installed driver (460.27.04 here) supports, while nvcc -V reports the toolkit that is actually installed, so the 11.0 / 11.2 mismatch is nothing to worry about.
After this, I trained a model with PyTorch and it worked fine. I wrote this up in a hurry, so if you have questions or spot mistakes, please leave a comment. m(_ _)m
It's a convenient world.