TensorFlow is a machine learning library made by Google, announced in November 2015. The name is read "tensor flow", and Google actually uses it in its own services.
Various people have written about it, but the official documentation is the best place to start, so let's install it right away: https://www.tensorflow.org/versions/r0.8/get_started/os_setup.html
#Preparations
First, set up pip and virtualenv. Note that the third command installs the TensorFlow wheel system-wide; since the virtualenv is created with --system-site-packages, it can see that install as well.
```bash
sudo easy_install pip
sudo easy_install --upgrade six
sudo pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0-py2-none-any.whl
sudo pip install --upgrade virtualenv
sudo virtualenv --system-site-packages ~/tensorflow
source ~/tensorflow/bin/activate
```
#Install TensorFlow
```bash
pip install --upgrade https://storage.googleapis.com/tensorflow/mac/tensorflow-0.8.0-py2-none-any.whl
source ~/tensorflow/bin/activate
python
```
First, run the ready-made smoke test in the Python interpreter:
```python
>>> import tensorflow as tf
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello
<tf.Tensor 'Const:0' shape=() dtype=string>
>>> sess = tf.Session()
>>> print(sess.run(hello))
Hello, TensorFlow!
>>> a = tf.constant(10)
>>> b = tf.constant(32)
>>> print(sess.run(a + b))
42
```
It worked. For the next step, again, the official tutorial is the best reference: https://www.tensorflow.org/versions/master/tutorials/mnist/beginners/index.html
Machine learning is done here with MNIST, a dataset of handwritten digit images. This step is positioned as a basic operation check, the "Hello World" of machine learning.
First we need the data, and it turns out it can be downloaded automatically just by typing the following. (one_hot=True means each label is a 10-dimensional vector with a 1 in the position of the correct digit.)
```python
>>> from tensorflow.examples.tutorials.mnist import input_data
>>> mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
```
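If you want a quick look at what was loaded, something like this works (a quick check of my own, not from the tutorial; the shapes are the ones the official tutorial documents):
```python
# 55000 of the 60000 training images go to mnist.train; 5000 go to validation.
print(mnist.train.images.shape)  # (55000, 784): each image flattened to 784 pixels
print(mnist.train.labels.shape)  # (55000, 10): one-hot labels
print(mnist.train.labels[0])     # a vector with a single 1 marking the correct digit
```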
What does the data actually look like? For example, the beginning of train-images-idx3-ubyte.gz (training set images, 9912422 bytes) is:
```
0000 0803 0000 ea60 0000 001c 0000 001c
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0000 0000 0000 0000
0000 0000 0000 0000 0312 1212 7e88 af1a
```
The first 16 bytes are a header of four big-endian 32-bit integers: the magic number 0x00000803 (2051), the image count 0x0000ea60 (60000), and the row and column counts 0x0000001c (28 each). Everything after that is one unsigned byte per pixel.
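As a sanity check, that header can be read directly (a small sketch of my own, not part of the tutorial; it assumes the file sits in MNIST_data/ as downloaded above):
```python
import gzip
import struct

# Read the 16-byte IDX header: magic, image count, rows, columns.
with gzip.open("MNIST_data/train-images-idx3-ubyte.gz", "rb") as f:
    magic, num_images, rows, cols = struct.unpack(">IIII", f.read(16))
    print(magic, num_images, rows, cols)  # -> 2051 60000 28 28
```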
Now let's actually try it. I'll skip the explanation of the computation happening inside, since it gets heavy.
```python
>>> import tensorflow as tf
>>> x = tf.placeholder(tf.float32, [None, 784])
>>> W = tf.Variable(tf.zeros([784, 10]))
>>> b = tf.Variable(tf.zeros([10]))
>>> y = tf.nn.softmax(tf.matmul(x, W) + b)
```
First, we are making the boxes to put things in. An image is 28x28 = 784 pixels, so x holds one 784-dimensional vector per image; and since the labels are the digits 0 through 9, the output is a 10-dimensional box, one entry per class. The formula is y = softmax(Wx + b).
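Written out in plain NumPy for a single image, the model is just this (my own illustration; the real computation happens inside TensorFlow):
```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))  # subtract the max for numerical stability
    return e / e.sum()

x = np.random.rand(784)   # one flattened 28x28 image
W = np.zeros((784, 10))   # weights, zero-initialized as above
b = np.zeros(10)          # biases
y = softmax(x.dot(W) + b)
print(y)  # with zero weights, every digit gets probability 0.1
```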
From here, we build the training part.
```python
>>> y_ = tf.placeholder(tf.float32, [None, 10])
>>> cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y),reduction_indices=[1]))
>>> train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```
For learning, we have to define what counts as good and what counts as bad. Here we train with the cost function called "cross entropy"; the box for the correct labels, y_, is also created here. We compute the cross entropy, and then use the gradient descent algorithm with a learning rate of 0.5 to have TensorFlow minimize cross_entropy.
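To make the cross entropy concrete, here is a tiny hand computation (a NumPy illustration of mine, not the post's code):
```python
import numpy as np

# Cross entropy for one example: H(y_, y) = -sum(y_ * log(y)).
y_true = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])  # one-hot label for the digit 3
y_pred = np.full(10, 0.1)                           # an untrained, uniform prediction
print(-np.sum(y_true * np.log(y_pred)))             # ~2.303, i.e. log(10)
```
Finally, initialize the variables and launch the session: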
```python
>>> init = tf.initialize_all_variables()
>>> sess = tf.Session()
>>> sess.run(init)
```
Now that everything is ready, let's train! ... 1000 steps:
```python
for i in range(1000):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
```
Get a "batch" of 100 random data points from the MNIST training set and perform the steps
Now evaluate the model. First, set up the calculation:
```python
>>> correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
>>> accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
```
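In plain NumPy terms, this accuracy calculation amounts to the following (an illustration of mine, not the tutorial's code):
```python
import numpy as np

# Compare predicted and true classes via argmax, then average the matches.
y_pred = np.array([[0.1, 0.8, 0.1],
                   [0.6, 0.3, 0.1]])  # two made-up 3-class predictions
y_true = np.array([[0, 1, 0],
                   [0, 0, 1]])        # the matching one-hot labels
correct = np.argmax(y_pred, axis=1) == np.argmax(y_true, axis=1)
print(correct.astype(np.float32).mean())  # 0.5: one of the two is right
```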
Run:
```python
>>> print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
0.9188
```
So the accuracy is about 92%. For this task that is actually not high; with fine tuning it can be raised to 99.7%... magic, lol.
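For reference, here is everything above gathered into one script (the same code as the walkthrough, in one place; it uses the TensorFlow 0.8-era API):
```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# Load the data (downloads on first run)
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

# Model: y = softmax(Wx + b)
x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
y = tf.nn.softmax(tf.matmul(x, W) + b)

# Cross-entropy loss and a gradient-descent step with learning rate 0.5
y_ = tf.placeholder(tf.float32, [None, 10])
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

sess = tf.Session()
sess.run(tf.initialize_all_variables())

# Train: 1000 steps, each on a random batch of 100 examples
for i in range(1000):
  batch_xs, batch_ys = mnist.train.next_batch(100)
  sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

# Evaluate on the test set
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(sess.run(accuracy, feed_dict={x: mnist.test.images, y_: mnist.test.labels}))
```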
There seems to be a detailed explanation here as well https://drive.google.com/file/d/0B04ol8GVySUubjVsUDdXc0hla00/view
From here I will try various things to improve the accuracy. Continued in Part 2: http://qiita.com/northriver/items/4f4690053e1770311335