- Lua version Deep Learning from scratch Part 1 [Implementation of Perceptron]
- Lua version Deep Learning from scratch Part 2 [Activation function]
- Lua version Deep Learning from scratch Part 3 [Implementation of 3-layer neural network]
- [Lua version Deep Learning from scratch Part 4 [Implementation of softmax function]](http://qiita.com/Kazuki-Nakamae/items/20e53a02a8b759583d31)
- Lua version Deep Learning from scratch Part 5 [Display MNIST image]
You can convert the pkl file to an npz file with the following script.
pkl2npz.py
#!/usr/local/bin/python3
# coding: utf-8
"""
Output the contents of the pkl file to the npz file.
"""
__author__ = "Kazuki Nakamae <[email protected]>"
__version__ = "0.00"
__date__ = "22 Jun 2017"

import sys
import pickle

import numpy as np


def pkl2npz(infn, outfn):
    """
    @function pkl2npz();
    Output the contents of the pkl file to the npz file.
    @param {string} infn  : Input file name
    @param {string} outfn : Output file name
    """
    with open(infn, 'rb') as f:
        ndarr = pickle.load(f)
    np.savez(outfn,
             W1=ndarr['W1'], W2=ndarr['W2'], W3=ndarr['W3'],
             b1=ndarr['b1'], b2=ndarr['b2'], b3=ndarr['b3'])


if __name__ == '__main__':
    argvs = sys.argv
    argc = len(argvs)
    if argc != 3:  # Checking input
        print("USAGE : python3 pkl2npz.py <INPUT_PKLFILE> <OUTPUT_NPZFILE>")
        quit()
    pkl2npz(str(argvs[1]), str(argvs[2]))
Run pkl2npz.py
$ python3 pkl2npz.py sample_weight.pkl sample_weight.npz
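If you want to confirm that the conversion preserves the arrays before moving over to Lua, you can round-trip a pickled dict through the same steps as pkl2npz(). This is just a minimal sketch with small dummy arrays (the file names and shapes here are made up for illustration; the real sample_weight.pkl holds the book's trained weights, e.g. W1 is 784x50):

```python
import pickle
import numpy as np

# Dummy stand-ins for the real weight dict (same keys, made-up shapes)
dummy = {
    "W1": np.random.randn(4, 3).astype(np.float32),
    "W2": np.random.randn(3, 2).astype(np.float32),
    "W3": np.random.randn(2, 2).astype(np.float32),
    "b1": np.zeros(3, dtype=np.float32),
    "b2": np.zeros(2, dtype=np.float32),
    "b3": np.zeros(2, dtype=np.float32),
}
with open("dummy_weight.pkl", "wb") as f:
    pickle.dump(dummy, f)

# Same conversion as pkl2npz()
with open("dummy_weight.pkl", "rb") as f:
    ndarr = pickle.load(f)
np.savez("dummy_weight.npz",
         W1=ndarr["W1"], W2=ndarr["W2"], W3=ndarr["W3"],
         b1=ndarr["b1"], b2=ndarr["b2"], b3=ndarr["b3"])

# Load the npz back and check every array survived unchanged
with np.load("dummy_weight.npz") as npz:
    for key in dummy:
        assert np.array_equal(npz[key], dummy[key])
print("round trip OK")
```

The same check against sample_weight.npz should show float32 arrays, which is why Torch reports a FloatTensor later on.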
Now let's load the created sample_weight.npz in Lua. This is no hassle: htwaijry has published a package, npy4th, for exactly this purpose. htwaijry/npy4th
npy4th installation
$ git clone https://github.com/htwaijry/npy4th.git
$ cd npy4th
$ luarocks make
It's easy to use: you can load a file just by calling loadnpz([filename]).
How to use loadnpz()
npy4th = require 'npy4th'
-- read a .npz file into a table
tbl = npy4th.loadnpz('sample_weight.npz')
print(tbl["W1"])
Output result
Columns 1 to 6
-7.4125e-03 -7.9044e-03 -1.3075e-02 1.8526e-02 -1.5346e-03 -8.7649e-03
-1.0297e-02 -1.6167e-02 -1.2284e-02 -1.7926e-02 3.3988e-03 -7.0708e-02
-1.3092e-02 -2.4475e-03 -1.7722e-02 -2.4240e-02 -2.2041e-02 -5.0149e-03
-1.0008e-02 1.9586e-02 -5.6170e-03 3.8307e-02 -5.2507e-02 -2.3568e-02
(Omitted)
1.1210e-02 1.0272e-02
-1.2299e-02 2.4070e-02
7.4309e-03 -4.0211e-02
[torch.FloatTensor of size 784x50]
Torch is not very popular in Japan, but I think it becomes much easier to adopt if numpy resources can be reused in this way. That's all. Thank you very much.