**Added May 12, 2016: this article is now deprecated, because the changes described here have been merged into the official TensorFlow repository. → GPU on OS X is now supported in TensorFlow proper. That said, the installation steps haven't changed much.**

**This is an unofficial procedure, so I can't take responsibility for whatever happens.**
I remembered that my MacBook Pro (Retina, 15-inch, Mid 2014) has an NVIDIA GeForce GT 750M in addition to the Intel Iris Pro, and thought: that's CUDA compute capability 3.0, so maybe I can run TensorFlow on the GPU. While looking for a way, I found that people overseas had already done it, so here is the procedure I followed. Note that this drops back to ver0.6.0 and builds with bazel, so if you installed with `pip`, `Virtualenv`, or `docker`, you may want to switch to a source install first.
Macs that should probably work
Model number | GPU |
---|---|
iMac (21-inch, Late 2012) | NVIDIA GeForce GT 640M |
iMac (21-inch, Late 2012) | NVIDIA GeForce GT 650M |
iMac (27-inch, Late 2012) | NVIDIA GeForce GTX 660MX |
iMac (27-inch, Late 2012) | NVIDIA GeForce GTX 675MX |
iMac (27-inch, Late 2012) | NVIDIA GeForce GT 680M |
iMac (21.5-inch, Late 2013) | NVIDIA GeForce GT 750M |
iMac (27-inch, Late 2013) | NVIDIA GeForce GT 755M |
iMac (27-inch, Late 2013) | NVIDIA GeForce GTX 775M |
iMac (27-inch, Late 2013) | NVIDIA GeForce GTX 780M |
MacBook Pro (15-inch, Mid 2012) / MacBook Pro (Mid 2012) / MacBook Pro (15-inch, Early 2013) | NVIDIA GeForce GT 650M |
MacBook Pro (15-inch, Late 2013) / MacBook Pro (15-inch, Mid 2014) | NVIDIA GeForce GT 750M |
Reference source: Fabrizio Milo @ How to compile tensorflow with CUDA support on OSX
First, install CUDA. I used brew:
$ brew upgrade
$ brew install coreutils
$ brew cask install cuda
Check the version (it should be 7.5.20):
$ brew cask info cuda
Download the cuDNN library from NVIDIA (registration required): https://developer.nvidia.com/cudnn
This is the version I downloaded: cudnn-7.0-osx-x64-v4.0-prod.tgz
Move the contents of the archive's `lib` and `include` directories into the corresponding locations under `/usr/local/cuda/`.
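For example, assuming the archive extracts to a `cuda/` directory in the working directory (the exact layout can differ by cuDNN version, so check before copying), the copy step would look roughly like this:

```shell
# Extract the archive downloaded above
tar xzf cudnn-7.0-osx-x64-v4.0-prod.tgz
# Copy the header and libraries into the CUDA install
sudo cp cuda/include/cudnn.h /usr/local/cuda/include/
sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib/
```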
Add the path to .bash_profile
$ vim ~/.bash_profile
export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
Bring in Pull Request #664, which enables GPU on OS X, from the TensorFlow repository:
$ cd tensorflow
$ git fetch origin pull/664/head:cuda_osx
$ git checkout cuda_osx
Reinstall Tensorflow
$ TF_UNOFFICIAL_SETTING=1 ./configure
WARNING: You are configuring unofficial settings in TensorFlow. Because some external libraries are not backward compatible, these settings are largely untested and unsupported.
Please specify the location of python. [Default is /usr/local/bin/python]:
Do you wish to build TensorFlow with GPU support? [y/N] Y
GPU support will be enabled for TensorFlow
Please specify the Cuda SDK version you want to use. [Default is 7.0]: 7.5
Please specify the location where CUDA 7.5 toolkit is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify the Cudnn version you want to use. [Default is 6.5]: 4
Please specify the location where cuDNN 4 library is installed. Refer to README.md for more details. [Default is /usr/local/cuda]:
Please specify a list of comma-separated Cuda compute capabilities you want to build with.
You can find the compute capability of your device at: https://developer.nvidia.com/cuda-gpus.
Please note that each additional compute capability significantly increases your build time and binary size.
[Default is: "3.5,5.2"]: 3.0
Setting up Cuda include
Setting up Cuda lib
Setting up Cuda bin
Setting up Cuda nvvm
Configuration finished
$ bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
$ bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
$ bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
$ pip install /tmp/tensorflow_pkg/tensorflow-*.whl
Confirmation test.py
import tensorflow as tf
# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print sess.run(c)
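As a sanity check on the numbers, the matmul above multiplies a 2×3 matrix by a 3×2 matrix; the expected result can be verified with NumPy alone (a quick sketch, independent of the TensorFlow build):

```python
import numpy as np

# Same values as the TensorFlow constants a and b above
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(2, 3)
b = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0]).reshape(3, 2)
print(a.dot(b))  # [[22. 28.] [49. 64.]]
```

If `sess.run(c)` prints something else (or the device placement log shows no `/gpu:0`), the build is not using the GPU correctly.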
If the error says `Reason: image not found`, the CUDA libraries can't be found, so check the path:
$ export DYLD_LIBRARY_PATH=/usr/local/cuda/lib:$DYLD_LIBRARY_PATH
That should fix it.
Let's try measuring training speed with my hiragana-MAIST CNN.
...
Huh.
**I had prepared an image for the punch line (that it wouldn't get any faster), but it actually got faster.**
About 55 minutes with CPU:
CPU-MAIST.py
i 19900, training accuracy 1 cross_entropy 0.205204
test accuracy 0.943847
elapsed_time:3312.28295398[sec]
GPU-MAIST.py
i 19900, training accuracy 1 cross_entropy 0.0745807
test accuracy 0.945042
elapsed_time:1274.27083302[sec]
About 21 minutes with GPU.
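The `elapsed_time` lines above come from timing the whole training run; a minimal sketch of such a measurement (the training step here is a placeholder, not the author's actual script):

```python
import time

start = time.time()
# ... run the training iterations here ...
elapsed_time = time.time() - start
print("elapsed_time:{0}".format(elapsed_time) + "[sec]")
```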
If other applications are running, the `Free memory:` value shown in the log during execution can be quite small. If it gets too small, you'll hit an out-of-memory error. It's a laptop, so that can't be helped. Quitting those applications or rebooting restores the memory, so there's that.