Note: I got stuck when trying to run Maxout + CNN (Convolutional Neural Network) with Pylearn2. A GPU is required.
The following are the settings and modifications needed in a Windows environment. They may not be necessary on Linux or Mac, but I have not confirmed this.
Pylearn2 uses pthreads internally, so install it. Without pthreads, the following error message is output during training:
pylearn2\sandbox\cuda_convnet\nvmatrix.cuh(39): fatal error C1083: Cannot open include file: 'pthread.h': No such file or directory
This time, I downloaded the library from the following site. https://sourceware.org/pthreads-win32/
Set the path to the pthreads library in pylearn2/sandbox/cuda_convnet/pthreads.py. An example is given below.
pthreads.py
from theano.configparser import AddConfigVar, StrParam

AddConfigVar('pthreads.inc_dir',
             "location of pthread.h",
             StrParam("C:\\lib\\pthreads-w32-2-9-1-release\\Pre-built.2\\include"))
AddConfigVar('pthreads.lib_dir',
             "location of library implementing pthreads",
             StrParam("C:\\lib\\pthreads-w32-2-9-1-release\\Pre-built.2\\lib\\x64"))
AddConfigVar('pthreads.lib',
             'name of the library that implements pthreads (e.g. "pthreadVC2" if using pthreadVC2.dll/.lib from pthreads-win32)',
             StrParam("pthreadVC2"))
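For reference, these three settings ultimately become compiler flags (-I for the header directory, -L for the library directory, -l for the library name). The helper below is a hypothetical sketch of that mapping, not Pylearn2 code:

```python
# Hypothetical sketch: how pthreads.inc_dir / lib_dir / lib map onto
# compiler arguments. This helper is illustrative, not part of Pylearn2.
def build_flags(inc_dir, lib_dir, lib):
    flags = []
    if inc_dir:
        flags.append('-I' + inc_dir)   # where pthread.h lives
    if lib_dir:
        flags.append('-L' + lib_dir)   # where pthreadVC2.lib lives
    if lib:
        flags.append('-l' + lib)       # link against pthreadVC2
    return flags

print(build_flags(
    r'C:\lib\pthreads-w32-2-9-1-release\Pre-built.2\include',
    r'C:\lib\pthreads-w32-2-9-1-release\Pre-built.2\lib\x64',
    'pthreadVC2'))
```

If any of the three values is left empty, the corresponding flag is simply omitted, which matches the conditional handling in convnet_compile.py shown later.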
Place pthreadVC2.dll, included in the pthreads package, in the following directory: pylearn2/scripts/papers/maxout
If pthreadVC2.dll is not deployed, the following error occurs. It happens because a DLL referenced by the .pyd file cannot be found.
DLL load failed:
If you get the same error, you can use Dependency Walker, available at the following site, to check the .pyd file's dependencies. http://www.dependencywalker.com/
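As a lighter-weight check than Dependency Walker, Python's ctypes module can tell you whether a library can be located on the loader's search path at all. This is a hedged sketch; on the Windows machine above you would query 'pthreadVC2':

```python
import ctypes.util

# find_library searches the standard loader paths and returns None when
# the library cannot be found -- the situation behind "DLL load failed".
def library_available(name):
    return ctypes.util.find_library(name) is not None

# On a Windows box with pthreadVC2.dll on PATH this should print True.
print(library_available('pthreadVC2'))
```

Note this only checks whether the named library itself is findable; Dependency Walker is still useful for tracing transitive DLL dependencies.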
When training with Pylearn2, a .cu file is compiled, but a link error occurs. Modify the following files so that the cuda_ndarray library can be loaded.
pylearn2/sandbox/cuda_convnet/convnet_compile.py
compiler.compile_str('cuda_convnet',
        code,
        location = cuda_convnet_loc,
        include_dirs = [this_dir, config.pthreads.inc_dir] if config.pthreads.inc_dir else [this_dir],
        lib_dirs = nvcc_compiler.rpath_defaults + [cuda_convnet_loc] + ([config.pthreads.lib_dir] if config.pthreads.lib_dir else []),
-       libs = ['cublas', config.pthreads.lib] if config.pthreads.lib else ['cublas'],
+       libs = ['cublas', 'cuda_ndarray', config.pthreads.lib] if config.pthreads.lib else ['cublas', 'cuda_ndarray'],
        preargs = ['-O3'] + args,
        py_module=False)
pylearn2/sandbox/cuda_convnet/base_acts.py
def c_libraries(self):
    if config.pthreads.lib:
-       return ['cuda_convnet', config.pthreads.lib]
+       return ['cuda_convnet', 'cuda_ndarray', config.pthreads.lib]
    else:
-       return ['cuda_convnet']
+       return ['cuda_convnet', 'cuda_ndarray']
I used mnist.yaml in pylearn2/scripts/papers/maxout. The pylearn2.models.maxout.MaxoutConvC01B class used in this file is the Maxout + CNN model, and a GPU is required to use it. After moving to pylearn2/scripts/papers/maxout, start training with the following command.
python ..\..\train.py mnist.yaml
The trained model is saved in mnist_best.pkl.
Create the following file and run "python mnist_result.py mnist_best.pkl" to check the recognition results on the test data. In my environment the result was 9940/10000.
mnist_result.py
import numpy as np
import pickle
import sys
import theano
import pylearn2.datasets.mnist as mnist
from pylearn2.space import VectorSpace

def simulate(inputs, model):
    # Compile a Theano function mapping a batch of inputs to model outputs.
    space = VectorSpace(inputs.shape[1])
    X = space.get_theano_batch()
    Y = model.fprop(space.format_as(X, model.get_input_space()))
    f = theano.function([X], Y)
    result = []
    # Evaluate in batches of 100 to keep memory usage down.
    for x in xrange(0, len(inputs), 100):
        result.extend(f(inputs[x:x + 100]))
    return result

def countCorrectResults(outputs, labels):
    # A prediction is correct when the argmax matches the label.
    correct = 0
    for output, label in zip(outputs, labels):
        if np.argmax(output) == label:
            correct += 1
    return correct

def score(dataset, model):
    outputs = simulate(dataset.X, model)
    correct = countCorrectResults(outputs, dataset.y)
    return {
        'correct': correct,
        'total': len(dataset.X)
    }

model = pickle.load(open(sys.argv[1], 'rb'))  # load the trained model
test_data = mnist.MNIST(which_set='test')
print '%(correct)d / %(total)d' % score(test_data, model)
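The batched evaluation and argmax counting in mnist_result.py can be sketched without Theano or Pylearn2. Here a plain function stands in for the compiled model; all names below are illustrative:

```python
import numpy as np

# Stand-in for the compiled Theano function: a fake "model" returning
# one score vector (10 classes) per input row.
def fake_model(batch):
    rng = np.random.RandomState(0)
    return rng.rand(len(batch), 10)

def simulate(inputs, f, batch_size=100):
    # Run the model batch by batch, as mnist_result.py does.
    result = []
    for start in range(0, len(inputs), batch_size):
        result.extend(f(inputs[start:start + batch_size]))
    return result

def count_correct(outputs, labels):
    # A prediction counts as correct when the argmax matches the label.
    return sum(1 for out, lab in zip(outputs, labels)
               if np.argmax(out) == lab)

inputs = np.zeros((250, 784))           # 250 dummy MNIST-sized rows
outputs = simulate(inputs, fake_model)
print(len(outputs))                     # one output row per input row
```

Batching this way matters in the real script because the whole test set may not fit on the GPU at once; 100 rows per call mirrors the original code.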
After training with mnist.yaml, training with mnist_continued.yaml continues learning from mnist_best.pkl. The resulting model is saved in mnist_continued.pkl.