I studied machine learning (Scikit-learn, TensorFlow) through MOOCs (Udacity, Coursera).
[Neural Networks for Machine Learning | Coursera](https://www.coursera.org/learn/neural-networks/home/welcome)
Andrew Ng's course is famous, but Google's Deep Learning course on Udacity is a nice way to get hands-on with Scikit-learn and TensorFlow.
Since setting up the environment by hand was troublesome, I used Docker.
TensorFlow v1.0 had just been released, so I updated the image.
I want to update the Docker image to the latest in a batch - Qiita
The first task was along the lines of "train a Logistic Regression classifier using Scikit-learn".
tensorflow/1_notmnist.ipynb at master · tensorflow/tensorflow
Scikit-learn
scikit-learn/scikit-learn: scikit-learn: machine learning in Python
sklearn.linear_model.LogisticRegression — scikit-learn 0.18.1 documentation
L1 Penalty and Sparsity in Logistic Regression — scikit-learn 0.18.1 documentation
A small Python aside: the elements two lists have in common can be picked out with a list comprehension.

```python
v = [1, 2, 3]
w = [2, 3, 4]
[x for x in v if x in w]  # -> [2, 3]
```
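For reference, a minimal sketch of what that Scikit-learn step could look like. This is my reconstruction, assuming the notMNIST data has already been loaded as NumPy arrays `train_dataset`, `train_labels`, `test_dataset`, `test_labels` as in the course's preprocessing notebook:

```python
from sklearn.linear_model import LogisticRegression

# train_dataset / test_dataset are assumed to be (n, 28, 28) notMNIST arrays
# from the course notebook; flatten them into 784-dimensional vectors.
image_size = 28
X_train = train_dataset.reshape(-1, image_size * image_size)
X_test = test_dataset.reshape(-1, image_size * image_size)

# Plain logistic regression; an L1 penalty (penalty='l1') is also worth trying,
# as in the scikit-learn example linked above.
clf = LogisticRegression()
clf.fit(X_train, train_labels)
print("Test accuracy:", clf.score(X_test, test_labels))
```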
The next assignment was to perform the same task in TensorFlow.
tensorflow/2_fullyconnected.ipynb at master · tensorflow/tensorflow
Assignment 2: 2 hidden layers and NaN loss - Courses / Deep Learning - Udacity Discussion Forum
I was satisfied: I built a 3-layer NN like this and reached 94.2% accuracy. An excerpt of the code looks like this.
```python
import tensorflow as tf

def weight_variable(shape):
    # Small initial weights (stddev ≈ sqrt(3) / (fan_in + fan_out)) help avoid
    # the NaN loss discussed in the forum thread above.
    initial = tf.truncated_normal(shape, stddev=1.732 / sum(shape))
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

batch_size = 128
hidden_layer_size = 1024
input_layer_size = 28 * 28   # image_size * image_size
output_layer_size = 10       # number of notMNIST classes

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32, shape=(batch_size, input_layer_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, output_layer_size))
    tf_valid_dataset = tf.constant(valid_dataset)  # from the notMNIST preprocessing
    tf_test_dataset = tf.constant(test_dataset)

    # Variables (input -> hidden)
    weight1 = weight_variable((input_layer_size, hidden_layer_size))
    bias1 = bias_variable([hidden_layer_size])

    # Hidden layer
    hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset, weight1) + bias1)

    # Variables (hidden -> output)
    weight2 = weight_variable((hidden_layer_size, output_layer_size))
    bias2 = bias_variable([output_layer_size])

    logits = tf.matmul(hidden_layer, weight2) + bias2
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=tf_train_labels))

    # Optimizer
    optimizer = tf.train.AdamOptimizer(0.001).minimize(loss)
    # optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weight1) + bias1), weight2) + bias2)
    test_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weight1) + bias1), weight2) + bias2)
```
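The excerpt above only builds the graph; the training loop itself is not shown. A minimal sketch of how it might be driven, assuming the `train_dataset`, `train_labels`, `valid_labels`, and `test_labels` arrays from the notMNIST notebook (the `accuracy` helper below is the usual one from the course notebooks, not part of my excerpt):

```python
import numpy as np

def accuracy(predictions, labels):
    # Percentage of samples whose arg-max prediction matches the one-hot label.
    return 100.0 * np.mean(np.argmax(predictions, 1) == np.argmax(labels, 1))

num_steps = 3001

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    for step in range(num_steps):
        # Cycle minibatches through the (shuffled) training data.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_data = train_dataset[offset:offset + batch_size, :]
        batch_labels = train_labels[offset:offset + batch_size, :]
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run(
            [optimizer, loss, train_prediction], feed_dict=feed_dict)
        if step % 500 == 0:
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
```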