This article is a record left by an amateur who is not a deep learning researcher, so please forgive any mistakes as you read. (If something is wrong, I would appreciate it if you could point it out in the comments.)
- OS: Ubuntu 14.04 LTS
- CPU: Core i7 2.93 GHz
- GPU: GeForce GTX 960
- Memory: 4 GB
Basically, follow the steps described on the official Caffe installation page. The configuration here is as follows; adjust it as needed for your environment:

- CUDA 7.0
- cuDNN enabled
- ATLAS
- Caffe (latest master on GitHub)
- PyCaffe enabled
Follow the official instructions as they are:
```shell
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
sudo apt-get install --no-install-recommends libboost-all-dev
```
This also follows the official instructions as-is:
```shell
sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
```
What is ATLAS? It is a library for numerical computation (an optimized BLAS implementation for linear algebra). On Ubuntu it is easy to install:
```shell
sudo apt-get install libatlas-base-dev
```
The official page says you can use apt-get on Ubuntu, but for various reasons it seems better to download the installer from the official NVIDIA site and install it yourself, so that is what I did. (The latest version seems to be 7.5, but I believe the procedure is the same as for 7.0, so I will write it as-is.)
From the NVIDIA developer site, go to CUDA ZONE → CUDA DOWNLOADS and download the CUDA installer. Then run it (replace xxx with the version you downloaded):
```shell
chmod +x cuda_xxx_linux.run
sudo ./cuda_xxx_linux.run
```
All you have to do is follow the instructions on the screen
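After the installer finishes, the CUDA binaries and libraries usually need to be on your paths before Caffe can find them. A minimal sketch, assuming the default install prefix `/usr/local/cuda`:

```shell
# Add CUDA to the shell environment (assuming the default prefix
# /usr/local/cuda); append these lines to ~/.bashrc to make them permanent.
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
```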
It doesn't seem to be required, but according to the official page it makes things faster, so I installed it:

> cuDNN Caffe: for fastest operation Caffe is accelerated by drop-in integration of NVIDIA cuDNN. To speed up your Caffe models, install cuDNN then uncomment the `USE_CUDNN := 1` flag in `Makefile.config` when installing Caffe. Acceleration is automatic. The current version is cuDNN v3; older versions are supported in older Caffe.
Register as a developer on the NVIDIA developer site mentioned above. After registration, cuDNN becomes available for download, so download and extract it. Then copy the library files:
```shell
sudo cp cudnn.h /usr/local/cuda/include/
sudo cp *.so /usr/local/cuda/lib64/
sudo cp *.a /usr/local/cuda/lib64/
```
In addition, recreate the same symbolic links that exist in the extracted archive under /usr/local/cuda/lib64 (the version at the time of my download was 6.5):
```shell
cd /usr/local/cuda/lib64/
sudo ln -s libcudnn.so.6.5.48 libcudnn.so.6.5
sudo ln -s libcudnn.so.6.5 libcudnn.so
```
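If the link chain is unfamiliar, it can be rehearsed in a throwaway directory first (no sudo needed); the file name here is just a stand-in matching the 6.5 download above:

```shell
# Rehearse the symlink chain with a stand-in file in a scratch directory.
dir=$(mktemp -d)
cd "$dir"
touch libcudnn.so.6.5.48                  # stand-in for the real library file
ln -s libcudnn.so.6.5.48 libcudnn.so.6.5  # versioned link
ln -s libcudnn.so.6.5 libcudnn.so         # unversioned link the linker looks up
readlink libcudnn.so
```

The last command shows that `libcudnn.so` resolves one step to `libcudnn.so.6.5`, which in turn points at the real file.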
Now that everything is ready, let's install Caffe itself (assuming the necessary Python prerequisites are set up separately).
For some reason this isn't written on the official page, but cloning from GitHub seems to be the official way:

```shell
git clone https://github.com/BVLC/caffe.git
```
As in the official instructions:
```shell
for req in $(cat requirements.txt); do pip install $req; done
```
That's it. requirements.txt is in caffe/python. In my environment gfortran was missing, so the scipy install failed once with this error:
```text
error: Setup script exited with error: library dfftpack has Fortran sources but no Fortran compiler found
```
In that case, install gfortran and retry the scipy install:
```shell
sudo apt-get install gfortran
```
This is important. If you make a mistake here, the build may reference something that doesn't exist, or you may get stuck on errors. First, copy Makefile.config.example to create the base Makefile.config:
```shell
cp Makefile.config.example Makefile.config
```
Make sure that Makefile.config looks like the following (adjust to your environment):

- `USE_CUDNN := 1` (when using cuDNN)
- `CPU_ONLY := 1` (comment out when using the GPU)
- `CUDA_DIR := /usr/local/cuda` (probably not needed for CPU_ONLY)
- `CUDA_ARCH := -gencode arch=compute...`: leave none of these lines commented out (probably comment them out for CPU_ONLY)
- `BLAS := atlas`
- `PYTHON_INCLUDE := /usr/include/python2.7 /usr/lib/python2.7/dist-packages/numpy/core/include`
- `PYTHON_LIB := /usr/lib`
- The rest remains as in the example.
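Put together, the relevant part of Makefile.config would look roughly like this. This is a sketch for the GPU + cuDNN case; the Python paths assume a default Ubuntu Python 2.7 with numpy from dist-packages:

```makefile
USE_CUDNN := 1                       # using cuDNN
# CPU_ONLY := 1                      # stays commented out for the GPU build
CUDA_DIR := /usr/local/cuda
# leave the CUDA_ARCH := -gencode arch=compute_... lines as in the example
BLAS := atlas
PYTHON_INCLUDE := /usr/include/python2.7 \
		/usr/lib/python2.7/dist-packages/numpy/core/include
PYTHON_LIB := /usr/lib
```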
Once the above is done, the build itself is quick. Run make at the root of the project. (The -j8 part can be omitted; change the number to match your CPU core count.)
```shell
make all -j8
```
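To pick the number after -j, you can check the core count first (`nproc` is part of GNU coreutils, so it should be available on Ubuntu):

```shell
# Print the number of available CPU cores; use this for make's -j option.
nproc
```

For example, `make all -j$(nproc)` uses one job per core.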
Finally, run the tests to make sure the build succeeded:
```shell
make runtest
```
A log like the following continues for a few minutes:

```text
[----------] 3 tests from DeconvolutionLayerTest/0, where TypeParam = caffe::CPUDevice<float>
[ RUN ] DeconvolutionLayerTest/0.TestSetup
[ OK ] DeconvolutionLayerTest/0.TestSetup (0 ms)
[ RUN ] DeconvolutionLayerTest/0.TestSimpleDeconvolution
[ OK ] DeconvolutionLayerTest/0.TestSimpleDeconvolution (1 ms)
[ RUN ] DeconvolutionLayerTest/0.TestGradient
[ OK ] DeconvolutionLayerTest/0.TestGradient (623 ms)
[----------] 3 tests from DeconvolutionLayerTest/0 (624 ms total)
```
Finally, if you see PASSED as follows, the build was successful:

```text
[==========] 1404 tests from 222 test cases ran. (221949 ms total)
[ PASSED ] 1404 tests.
```
If it fails, check that the Makefile.config settings are correct and try again.
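Since the goal at the top includes making PyCaffe available, here is a quick check sketch. It assumes the repository was cloned to `~/caffe` and that `make pycaffe` has been run at the repo root; adjust the path to wherever you cloned it:

```shell
# Make the Python bindings importable (assumes the clone lives at ~/caffe
# and `make pycaffe` was run); add the export to ~/.bashrc to keep it.
export PYTHONPATH=$HOME/caffe/python:$PYTHONPATH
python -c "import caffe" 2>/dev/null && echo "PyCaffe OK" || echo "PyCaffe not found"
```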
Thank you for reading. In the next article, I would like to actually run deep learning training using Caffe.