This is my third post to Qiita. (article3)
Continuing from last time, this post summarizes things I found while using nnabla that made me think "I wish this kind of information were on Qiita", gathered from the nnabla reference and from dir() (the standard Python function that returns the member variables and methods of its argument).
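As a quick illustration of dir() itself, here is a trivial sketch with a made-up class (not nnabla-specific):

```python
# dir() returns a sorted list of the attribute and method names of its
# argument -- handy for exploring unfamiliar objects such as nnabla Variables.
class Sample:
    def __init__(self):
        self.value = 1

    def method(self):
        return self.value

names = dir(Sample())
print('value' in names)   # True
print('method' in names)  # True
```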
- OS: macOS Catalina (version 10.15.1)
- Python: 3.5.4
- nnabla: 1.3.0
The sample network is defined below. (Up to this point, it is the same as last time.)
article3_add_layer.py
import nnabla as nn
import nnabla.functions as F
# [define network]
x = nn.Variable()
y = F.add_scalar(x, 0.5) # <-- (1)
y = F.mul_scalar(y, -2)
It is simply in the form of $y = (x + 0.5) \times (-2)$.
Using what was explained last time, I will show how to change the above $y = (x + 0.5) \times (-2)$ into $y = (x + 0.5)^2 \times (-2)$. The code is below.
article3_add_layer.py
# [get middle variable]
h1 = y.parent.inputs[0]  # output of add_scalar, i.e. the part (1)
additional_layer = F.pow_scalar(h1, 2.0)  # new layer to insert
redefine_layer = F.mul_scalar(additional_layer, **y.parent.info.args)  # same args as the existing mul_scalar
# [rewire_on]
y.rewire_on(redefine_layer)
To check the operation, I ran forward and printed each function's output before and after the rewire_on above:
article3_add_layer.py
def print_func(f):
    print('{} output = {}'.format(f.name, f.outputs[0].d))
# [print & forward]
x.d.fill(0)
y.forward()
print('--- before ---')
y.visit(print_func)
print('y.d = {}'.format(y.d))
print('')
# [rewire_on]
y.rewire_on(redefine_layer)
# [print & forward]
y.forward()
print('--- after ---')
y.visit(print_func)
print('y.d = {}'.format(y.d))
print('')
output
--- before ---
AddScalar output = 0.5
MulScalar output = -1.0
y.d = -1.0
--- after ---
AddScalar output = 0.5
PowScalar output = 0.25
MulScalar output = -0.5
y.d = -0.5
`h1 = y.parent.inputs[0]` gets the part (1), i.e. the output of add_scalar. If we rewired onto the output of mul_scalar, mul_scalar itself would be overwritten and disappear, so `redefine_layer = F.mul_scalar(additional_layer, **y.parent.info.args)` redefines a mul_scalar layer identical to the existing one, and we overwrite the existing mul_scalar with it to get the desired behavior.

In `y.parent.info.args`, `y.parent` refers to the mul_scalar layer, and `.info.args` gets the arguments that were given to that layer. In other words, it lets you define a mul_scalar layer with exactly the same arguments as the existing one.

Finally, `y.rewire_on(redefine_layer)` overwrites `y` on the computation graph with `redefine_layer`, the output node of the redefined layers, which completes the desired change.

The print output above shows which layers make up the graph before and after rewire_on, and what each one outputs: pow_scalar has been added as a layer, and the numbers match the formula.

This post introduced how to insert a new layer. You can also use this, for example, to insert a quantization layer after each activation of an existing trained model, or to fold Convolution + Batch Normalization into a single Convolution. Next time, I will touch on this area.