What you need:
CUDA Version: 10.0
Nvidia Driver Version: 430.50
cuDNN v7.4.2 (Dec 14, 2018), for CUDA 10.0
It seems the installation needs to be done in this order: CUDA → Nvidia Driver → cuDNN.
Open .bash_aliases in your home directory (create it if it doesn't exist) and add the following:
# .bash_aliases
export PATH=/usr/local/cuda/bin:$PATH
export CPATH=/usr/local/cuda/include:$CPATH
export LIBRARY_PATH=/usr/local/cuda/lib64:$LIBRARY_PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Load it with the following command.
source ~/.bash_aliases
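For a quick sanity check that the paths are picked up (meaningful once the CUDA toolkit below is installed):
# Should print /usr/local/cuda/bin/nvcc once the toolkit is installed
which nvcc
# Should start with /usr/local/cuda/lib64
echo $LD_LIBRARY_PATH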
CUDA
#First install the downloaded package
sudo dpkg -i cuda-repo-ubuntu1804-10-0-local-10.0.130-410.48_1.0-1_amd64.deb
#Add the CUDA repository key, then update the package list and install
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda-toolkit-10-0
Reboot when you're done.
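After rebooting, nvcc should be on your PATH; a quick check:
# Should report "Cuda compilation tools, release 10.0"
nvcc --version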
Nvidia Driver
sudo apt-get install nvidia-driver-430
Reboot when you're done.
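After rebooting, a quick check that the driver is loaded:
# Should show the GPU, the driver version, and the supported CUDA version
nvidia-smi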
cuDNN
sudo dpkg -i libcudnn7_7.4.2.24-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-dev_7.4.2.24-1+cuda10.0_amd64.deb
sudo dpkg -i libcudnn7-doc_7.4.2.24-1+cuda10.0_amd64.deb
echo -e "\n## CUDA and cuDNN paths" >> ~/.bashrc
echo 'export PATH=/usr/local/cuda-10.0/bin:${PATH}' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-10.0/lib64:${LD_LIBRARY_PATH}' >> ~/.bashrc
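Since libcudnn7-doc installs the cuDNN samples under /usr/src, one way to verify the install is to build and run the mnistCUDNN sample (this follows NVIDIA's usual verification step; building it needs g++ and the FreeImage dev package, e.g. libfreeimage-dev):
# Copy the samples somewhere writable, then build and run mnistCUDNN
cp -r /usr/src/cudnn_samples_v7/ ~/
cd ~/cudnn_samples_v7/mnistCUDNN
make clean && make
./mnistCUDNN
# Prints "Test passed!" on success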
Why
sudo apt install nvidia-driver-430
When I ran this, version 440 got installed instead.
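To see what apt actually installed, and which versions are available:
# Show the installed driver package and version
apt list --installed 2>/dev/null | grep nvidia-driver
# Show the candidate versions apt would install
apt-cache policy nvidia-driver-430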
cuDNN builds compatible with CUDA v10.2:
http://people.cs.uchicago.edu/~kauffman/nvidia/cudnn/
To confirm the driver installation:
cat /proc/driver/nvidia/version
modinfo nvidia
If nvidia-smi fails with an error message, the driver may be in conflict with Nouveau, the graphics driver installed by default on Ubuntu. Blacklist Nouveau by creating the following file:
$ sudo vim /etc/modprobe.d/blacklist-nouveau.conf
blacklist-nouveau.conf
blacklist nouveau
options nouveau modeset=0
Next, run the following command to rebuild the initramfs so the blacklist takes effect on the next boot.
$ sudo update-initramfs -u
Restart your PC and if you can run nvidia-smi, proceed to install CUDA.
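To double-check that Nouveau stayed unloaded after the reboot:
# No output means the nouveau module is not loaded
lsmod | grep nouveau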
If there is a lot of training data or batch_size is large, training may crash as soon as it starts: with the default settings, TensorFlow tries to allocate all of the GPU memory it might need up front, so it exhausts the GPU memory and raises an error. The fix is to stop pre-allocating and instead let TensorFlow acquire GPU memory incrementally, as each training step needs it.
import tensorflow as tf
from keras.backend import tensorflow_backend

# Allocate GPU memory on demand (allow_growth) instead of grabbing it all up front
config = tf.ConfigProto(gpu_options=tf.GPUOptions(allow_growth=True))
session = tf.Session(config=config)
tensorflow_backend.set_session(session)
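To quickly confirm TensorFlow can see the GPU at all, a one-liner from the shell (this assumes TensorFlow 1.x, matching the tf.ConfigProto API above):
# Should print True if TensorFlow detects the GPU
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"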
Reference: https://yurufuwadiary.com/tensorflow-rtx2080super-installation