When you use Keras with TensorFlow as the backend, the default behavior is to allocate all of the GPU memory, so you cannot run multiple experiments on the same GPU. This post introduces settings that reduce GPU memory usage. [^1] [^2]
tensorflow==1.3.0
tensorflow-gpu==1.3.0
Keras==2.0.6
Paste the code below at the top of your script, or put it in a module and import it.
This can be set with gpu_options.allow_growth.
With this option, TensorFlow allocates only as much GPU memory as it needs at execution time and expands the memory region when more is required. However, memory is never released automatically, so it can become fragmented and performance may degrade; be careful. [^1]
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand
sess = tf.Session(config=config)
K.set_session(sess)  # make Keras use this session
This can be set with gpu_options.per_process_gpu_memory_fraction.
This option caps the fraction of the GPU's memory that the process may allocate. In the example below, the process is limited to 40% of the GPU memory.
import tensorflow as tf
from keras import backend as K

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.4  # use at most 40% of GPU memory
sess = tf.Session(config=config)
K.set_session(sess)  # make Keras use this session
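The two options can also be combined. A minimal sketch, assuming the same TensorFlow 1.x API as above: the fraction sets an upper bound, while allow_growth makes allocation lazy up to that bound.

```python
import tensorflow as tf
from keras import backend as K

# Sketch (TF 1.x): cap this process at 40% of GPU memory,
# but only allocate within that cap as it is actually needed.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
config.gpu_options.per_process_gpu_memory_fraction = 0.4
sess = tf.Session(config=config)
K.set_session(sess)
```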
You can check per-process GPU memory usage with nvidia-smi:
$ nvidia-smi -l
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 30654 C python 3527MiB |
| 1 30779 C python 3357MiB |
+-----------------------------------------------------------------------------+
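Once memory usage is limited, several experiments can share a machine. One way to pin each run to its own GPU, as in the two python processes shown above, is the standard CUDA_VISIBLE_DEVICES environment variable; train.py here is a placeholder for your own Keras script.

```shell
# Launch one experiment per GPU. CUDA_VISIBLE_DEVICES hides the other
# GPUs from TensorFlow, so each process allocates only on its own device.
# train.py stands in for your own training script.
CUDA_VISIBLE_DEVICES=0 python train.py &
CUDA_VISIBLE_DEVICES=1 python train.py &
wait
```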
References