# Introduction

This is a continuation of [[Deep Learning from Scratch in Java] 2. There is no NumPy](https://qiita.com/xaatw0/items/011168383246d4e95145). We finally get into the neural network itself, and it starts to feel more like deep learning.

# Activation function

First, a common helper method that applies a given function to every element of a two-dimensional array.

#### `ArrayUtil.java`

``````
public double[][] apply(double[][] x, DoubleUnaryOperator op){

    validate(x);
    double[][] result = new double[x.length][x[0].length];
    for (int i = 0; i < result.length; i++){
        for (int j = 0; j < result[0].length; j++){
            result[i][j] = op.applyAsDouble(x[i][j]);
        }
    }

    return result;
}
``````
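For reference, a quick hypothetical usage of `apply` with an arbitrary lambda (the variable names here are just for illustration):

``````
// Illustrative only: square every element of a 2x2 array.
ArrayUtil util = new ArrayUtil();
double[][] m = {{1.0, 2.0}, {3.0, 4.0}};
double[][] squared = util.apply(m, v -> v * v);
// squared is {{1.0, 4.0}, {9.0, 16.0}}
``````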

This is p. 48 of the book, "3.2.4 Implementation of the Sigmoid Function". Java already has Math.exp, which made this easy.

#### `ArrayUtil.java`

``````
private DoubleUnaryOperator sigmoid = p -> 1 / (1 + Math.exp(-p));

public double[][] sigmoid(double[][] x){
    return apply(x, sigmoid);
}
``````

Next, p. 52, "3.2.7 The ReLU Function".

#### `ArrayUtil.java`

``````
private DoubleUnaryOperator relu = p -> p < 0 ? 0 : p;

public double[][] relu(double[][] x){
    return apply(x, relu);
}
``````
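As a rough sanity check (not from the book, just illustrative values), the two wrappers behave like this on a small input, assuming an `ArrayUtil` instance named `target` as in the tests below:

``````
// Illustrative check: sigmoid(0) = 0.5, and ReLU clips negatives to 0.
double[][] input = {{-1.0, 0.0, 2.0}};

double[][] s = target.sigmoid(input);
// s[0] is approximately {0.26894, 0.5, 0.88080}

double[][] r = target.relu(input);
// r[0] is {0.0, 0.0, 2.0}
``````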

# Neural network

P65 "3.4.3 Implementation Summary"

#### `ArrayUtilTest.java`

``````
@Test
public void neuralNetwork(){
    // init_network()
    double[][] W1 = {{0.1, 0.3, 0.5},{0.2, 0.4, 0.6}};
    double[]   b1 = {0.1, 0.2, 0.3};
    double[][] W2 = {{0.1,0.4},{0.2,0.5},{0.3,0.6}};
    double[]   b2 = {0.1, 0.2};
    double[][] W3 = {{0.1,0.3},{0.2,0.4}};
    double[]   b3 = {0.1, 0.2};

    // x = np.array([1.0, 0.5])
    double[][] x = {{1.0, 0.5}};

    // a1 = np.dot(x, W1) + b1
    double[][] a1 = target.plus(target.multi(x, W1), b1);

    // z1 = sigmoid(a1)
    double[][] z1 = target.sigmoid(a1);

    assertThat(z1[0][0], is(closeTo(0.57444252, 0.00001)));
    assertThat(z1[0][1], is(closeTo(0.66818777, 0.00001)));
    assertThat(z1[0][2], is(closeTo(0.75026011, 0.00001)));

    // a2 = np.dot(z1, W2) + b2
    double[][] a2 = target.plus(target.multi(z1, W2), b2);

    // z2 = sigmoid(a2)
    double[][] z2 = target.sigmoid(a2);
    assertThat(z2[0][0], is(closeTo(0.62624937, 0.00001)));
    assertThat(z2[0][1], is(closeTo(0.7710107, 0.00001)));

    // a3 = np.dot(z2, W3) + b3 (identity output layer, so y = a3)
    double[][] a3 = target.plus(target.multi(z2, W3), b3);

    // print(y) # [0.31682708, 0.69627909]
    assertThat(a3[0][0], is(closeTo(0.31682708, 0.00001)));
    assertThat(a3[0][1], is(closeTo(0.69627909, 0.00001)));
}
``````
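The test relies on `multi` (matrix product) and `plus` (adding a bias vector to every row, like NumPy broadcasting) from the previous article. As a reminder, here is a minimal sketch of the assumed behavior; the actual implementations may differ:

``````
// Sketch of the assumed helpers, not necessarily the original code.

// multi: matrix product of an (n x m) and an (m x k) array.
public double[][] multi(double[][] a, double[][] b){
    double[][] result = new double[a.length][b[0].length];
    for (int i = 0; i < a.length; i++){
        for (int k = 0; k < b.length; k++){
            for (int j = 0; j < b[0].length; j++){
                result[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    return result;
}

// plus: adds the bias vector b to every row of x.
public double[][] plus(double[][] x, double[] b){
    double[][] result = new double[x.length][b.length];
    for (int i = 0; i < x.length; i++){
        for (int j = 0; j < b.length; j++){
            result[i][j] = x[i][j] + b[j];
        }
    }
    return result;
}
``````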

# Output layer design

I implemented the softmax function from p. 69, "3.5.2 Precautions for Implementing the Softmax Function". Subtracting the maximum value before exponentiating avoids overflow, and the result is unchanged because softmax is invariant under adding a constant to every input.

#### `ArrayUtil.java`

``````
public double[] softmax(double[] x){
    // Subtract the max value to keep Math.exp from overflowing.
    double maxValue = Arrays.stream(x).max().getAsDouble();
    double[] value = Arrays.stream(x).map(y -> Math.exp(y - maxValue)).toArray();
    double total = Arrays.stream(value).sum();
    return Arrays.stream(value).map(p -> p / total).toArray();
}

public double[][] softmax(double[][] x){
    // Apply softmax row by row.
    double[][] result = new double[x.length][];
    for (int i = 0; i < result.length; i++){
        result[i] = softmax(x[i]);
    }
    return result;
}
``````
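Why subtract the maximum? For inputs like the 1010 in the test below, a naive `Math.exp` overflows a `double`, while the shifted version is mathematically identical. A small illustration:

``````
// Naive exponentiation overflows: exp(1010) is Infinity,
// and Infinity / Infinity would give NaN instead of a probability.
double naive = Math.exp(1010);          // Infinity

// After subtracting the max (1010), the exponents stay in range:
// exp(0) = 1, exp(-10), exp(-20).
double shifted = Math.exp(1010 - 1010); // 1.0
``````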

#### `ArrayUtilTest.java`

``````
ArrayUtil target = new ArrayUtil();

@Test
public void softmax(){
    double[] x = {1010, 1000, 990};
    double[] expected = {9.99954600e-01, 4.53978686e-05, 2.06106005e-09};

    double[] result = target.softmax(x);
    assertThat(result[0], is(closeTo(expected[0], 0.00001)));
    assertThat(result[1], is(closeTo(expected[1], 0.00001)));
    assertThat(result[2], is(closeTo(expected[2], 0.00001)));

    assertThat(Arrays.stream(result).sum(), is(closeTo(1, 0.00001)));
}
``````

# In conclusion

For now, I was able to implement the neural network and the softmax function. Each individual piece is easy enough to follow, but I don't yet have a clear picture of how everything fits together as a whole. The outputs match the book, so the implementation should be correct.