TensorFlow GPU memory allocation problem. I believe the cause was the following code:
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    # Restrict TensorFlow to only use the first GPU
    try:
        tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPU")
    except RuntimeError as e:
        # Visible devices must be set before GPUs have been initialized
        print(e)
The problem was fixed by changing it to the code below.
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Currently, memory growth needs to be the same across GPUs
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
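As a sketch of another option TensorFlow offers, instead of enabling memory growth you can cap the amount of GPU memory TensorFlow is allowed to allocate by creating a virtual device with a fixed memory limit. The 1024 MB limit below is an illustrative value, not something from the original post:

```python
import tensorflow as tf

gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
    try:
        # Cap TensorFlow at 1024 MB on the first GPU (illustrative value)
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])
        logical_gpus = tf.config.experimental.list_logical_devices('GPU')
        print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
    except RuntimeError as e:
        # Virtual devices must be set before GPUs have been initialized
        print(e)
```

Note that the same effect as `set_memory_growth` can also be obtained without code changes by setting the environment variable `TF_FORCE_GPU_ALLOW_GROWTH=true` before launching the program.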
Details (official TensorFlow documentation and related posts):
- Handling the error [Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR] when running the command
- TF1 series: What to do if you get the error "Could not create cudnn handle: CUDNN_STATUS_INTERNAL_ERROR" in TensorFlow