Dealing with "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR" in TensorFlow v2.x

Environment

Cause

This appears to be caused by a GPU memory allocation issue in TensorFlow.

Workaround

If you are restricting TensorFlow to a single GPU with the following code:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  # Restrict TensorFlow to only use the first GPU
  try:
    tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
  except RuntimeError as e:
    # Visible devices must be set before GPUs have been initialized
    print(e)

the error was resolved by changing it to enable memory growth instead, as below:

import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
  try:
    # Currently, memory growth needs to be the same across GPUs
    for gpu in gpus:
      tf.config.experimental.set_memory_growth(gpu, True)
    logical_gpus = tf.config.experimental.list_logical_devices('GPU')
    print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
  except RuntimeError as e:
    # Memory growth must be set before GPUs have been initialized
    print(e)
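As an aside not in the original post: TensorFlow also reads the TF_FORCE_GPU_ALLOW_GROWTH environment variable, which has the same effect as calling set_memory_growth(gpu, True) for every GPU. A minimal sketch, assuming you can set the variable before TensorFlow initializes the GPU:

```python
import os

# Must be set before TensorFlow touches the GPU, i.e. before
# `import tensorflow` runs. "true" enables on-demand allocation,
# equivalent to tf.config.experimental.set_memory_growth(gpu, True).
os.environ["TF_FORCE_GPU_ALLOW_GROWTH"] = "true"

# import tensorflow as tf  # import TensorFlow only after setting the variable
print(os.environ["TF_FORCE_GPU_ALLOW_GROWTH"])
```

This can be convenient when you cannot edit the training script itself, since the variable can also be exported in the shell before launching Python.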

Details: TensorFlow official documentation

Other

- Command execution: Error handling [Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR]
- TF1 series: What to do if you get an error like Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR in TensorFlow
