First of all, let me introduce myself. I'm a 4th-year student at a science university in Tokyo. I'll be going on to graduate school, so the next two years are a moratorium for me. I'm looking for an internship at a machine learning company or the like lol, so if you have an HR department, please contact me lol. I often hear the words machine learning / AI out in the world and in my laboratory, and while I think I understand a little of the theory, "CNN? RNN? Convolutional? Boltzmann machine? Normalization?" I've never actually done any of it. So I figured I'd try to make an AI based on my favorites, Nogizaka46 / Keyakizaka46. My experience is almost zero, so please read this as a beginner's scribbles. I'd also like to write in depth about idols at the end lol
If I don't set a goal for what to make, nothing gets started, so let's set a goal. In my own selfish interpretation, machine learning is a function that returns "some probabilistic labeling" from a "set of data". So when I thought about creating an AI based on Nogizaka / Keyakizaka, I came up with "an AI that automatically classifies and saves the members in member photos" lol. I also thought it would be nice to have an AI that determines who is singing which part when you feed it a song, but since I don't understand speech recognition yet, I'd like to try that after the image identification is done.
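To make that selfish interpretation concrete: a classifier is just a function from an input to a probability per label. Here is a minimal sketch of the shape I'm aiming for (the `classify` function and its dummy output are purely my own hypothetical illustration, not any real API):

```python
# Hypothetical sketch: a classifier as "data in, probabilistic labels out".
MEMBERS = ['Manatsu Akimoto', 'Miria Watanabe', 'Shiho Kato']

def classify(image_path):
    """Return a probability for each member label.

    A dummy stand-in: a trained model would replace this uniform guess.
    """
    p = 1.0 / len(MEMBERS)
    return {name: p for name in MEMBERS}

print(classify('some_photo.jpg'))  # every member gets probability 1/3
```

The AI I want to build is whatever replaces that uniform guess with probabilities actually learned from data.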
Now that we have set the goal of creating an AI that automatically classifies and saves the members in member photos, I'd like to choose the means of achieving it. There are many frameworks you often hear about in the machine learning field, such as TensorFlow, Chainer, and Caffe, but this time I'd like to build similar AIs with TensorFlow and Chainer and compare them. I heard that installing Caffe is hellishly difficult, so I avoided Caffe lol
So I'd like to install Python, TensorFlow, and Chainer on my PC. Many people have already posted articles about this, so I'll omit the details this time. If you use these as references, you should be able to install all three on your PC:

- Install TensorFlow from Python environment construction
- [Building an environment for learning "machine learning" using Python on Mac](http://qiita.com/yoshizaki_kkgk/items/4663148a2b3ca078ddbc)
- TensorFlow (1.0.0) development environment construction method (Mac)

On my Mac, I installed pyenv using Homebrew and Python using pyenv, then installed TensorFlow, Chainer, and Anaconda (which bundles various modules such as numpy in one batch). If you are starting from a completely clean state, throw the words above at Google; there should be more detailed articles, so please refer to those.
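For reference, the rough flow on my Mac looked something like the following. Treat this as a sketch: the exact pyenv distribution name and the TensorFlow version pin are assumptions that depend on when you install.

```bash
# Install pyenv via Homebrew, then a Python distribution via pyenv
brew install pyenv
pyenv install anaconda3-4.3.1   # Anaconda bundles numpy and other modules
pyenv global anaconda3-4.3.1

# Install the two frameworks compared in this article
pip install tensorflow==1.0.0
pip install chainer
```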
And with that, I was able to get everything installed.
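If you want to double-check the installs, importing each package and printing its version is enough (the version numbers you see will of course depend on your environment):

```python
import tensorflow as tf
import chainer

# If both imports succeed without errors, the installation is fine
print(tf.__version__)       # e.g. 1.0.0
print(chainer.__version__)
```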
Now that TensorFlow and Chainer are installed, I'll demonstrate them, and confirm they actually work properly, using the MNIST numeric data. MNIST is a dataset of handwritten digits, and it is almost always the first thing that appears in the machine learning field. I'll throw it into TensorFlow and Chainer and see whether they can learn it and identify the digits. As tasks go it's about as hard as Hello World in other languages, so let's do it (understanding the theory is the hard part). The actual code looks like this:
tf_mnist.py
```python
import argparse
import sys

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

FLAGS = None


def main(_):
    # Read the data
    mnist = input_data.read_data_sets(FLAGS.data_dir, one_hot=True)

    # Declare inputs, weights, biases, and outputs.
    # 784 = 28*28 pixels; each image is flattened into a 784-dimensional input
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.matmul(x, W) + b

    # Declare the variable y_ that holds the correct answers
    # for the cross-entropy calculation
    y_ = tf.placeholder(tf.float32, [None, 10])

    # Compute the cross entropy
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
    train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

    sess = tf.InteractiveSession()
    tf.global_variables_initializer().run()

    # Training
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})

    # Compute the accuracy of the learned model against the actual answers
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
    print(sess.run(accuracy, feed_dict={x: mnist.test.images,
                                        y_: mnist.test.labels}))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--data_dir', type=str,
                        default='/tmp/tensorflow/mnist/input_data',
                        help='Directory for storing input data')
    FLAGS, unparsed = parser.parse_known_args()
    tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
```
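Running it is just `python tf_mnist.py`; the first run downloads MNIST into `/tmp/tensorflow/mnist/input_data`, and the script prints a single accuracy figure at the end. For this plain softmax-regression model, the TensorFlow tutorial says to expect accuracy around 92%.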
ch_mnist.py
```python
from __future__ import print_function

import argparse

import chainer
import chainer.functions as F
import chainer.links as L
from chainer import training
from chainer.training import extensions


# Define the neural network
class MLP(chainer.Chain):

    def __init__(self, n_units, n_out):
        super(MLP, self).__init__()
        with self.init_scope():
            # the size of the inputs to each layer will be inferred
            self.l1 = L.Linear(None, n_units)  # n_in -> n_units
            self.l2 = L.Linear(None, n_units)  # n_units -> n_units
            self.l3 = L.Linear(None, n_out)  # n_units -> n_out

    def __call__(self, x):
        h1 = F.relu(self.l1(x))
        h2 = F.relu(self.l2(h1))
        return self.l3(h2)


def main():
    # Define the settings on the command line
    parser = argparse.ArgumentParser(description='Chainer example: MNIST')
    parser.add_argument('--batchsize', '-b', type=int, default=100,
                        help='Number of images in each mini-batch')
    parser.add_argument('--epoch', '-e', type=int, default=20,
                        help='Number of sweeps over the dataset to train')
    parser.add_argument('--frequency', '-f', type=int, default=-1,
                        help='Frequency of taking a snapshot')
    parser.add_argument('--gpu', '-g', type=int, default=-1,
                        help='GPU ID (negative value indicates CPU)')
    parser.add_argument('--out', '-o', default='result',
                        help='Directory to output the result')
    parser.add_argument('--resume', '-r', default='',
                        help='Resume the training from snapshot')
    parser.add_argument('--unit', '-u', type=int, default=1000,
                        help='Number of units')
    args = parser.parse_args()

    print('GPU: {}'.format(args.gpu))
    print('# unit: {}'.format(args.unit))
    print('# Minibatch-size: {}'.format(args.batchsize))
    print('# epoch: {}'.format(args.epoch))
    print('')

    # Set up a neural network to train
    # Classifier reports softmax cross entropy loss and accuracy at every
    # iteration, which will be used by the PrintReport extension below.
    model = L.Classifier(MLP(args.unit, 10))
    if args.gpu >= 0:
        # Make a specified GPU current
        chainer.cuda.get_device_from_id(args.gpu).use()
        model.to_gpu()  # Copy the model to the GPU

    # Setup an optimizer
    optimizer = chainer.optimizers.Adam()
    optimizer.setup(model)

    # Load the MNIST dataset; train and test splits are prepared for us
    train, test = chainer.datasets.get_mnist()

    # Feeding the data through iterators is what drives the training
    train_iter = chainer.iterators.SerialIterator(train, args.batchsize)
    test_iter = chainer.iterators.SerialIterator(test, args.batchsize,
                                                 repeat=False, shuffle=False)

    # Set up a trainer
    updater = training.StandardUpdater(train_iter, optimizer, device=args.gpu)
    trainer = training.Trainer(updater, (args.epoch, 'epoch'), out=args.out)

    # Evaluate the model with the test dataset for each epoch
    trainer.extend(extensions.Evaluator(test_iter, model, device=args.gpu))

    # Dump a computational graph from 'loss' variable at the first iteration
    # The "main" refers to the target link of the "main" optimizer.
    trainer.extend(extensions.dump_graph('main/loss'))

    # Take a snapshot for each specified epoch
    frequency = args.epoch if args.frequency == -1 else max(1, args.frequency)
    trainer.extend(extensions.snapshot(), trigger=(frequency, 'epoch'))

    # Write a log of evaluation statistics for each epoch
    trainer.extend(extensions.LogReport())

    # Save two plot images to the result dir
    if extensions.PlotReport.available():
        trainer.extend(
            extensions.PlotReport(['main/loss', 'validation/main/loss'],
                                  'epoch', file_name='loss.png'))
        trainer.extend(
            extensions.PlotReport(
                ['main/accuracy', 'validation/main/accuracy'],
                'epoch', file_name='accuracy.png'))

    # Print selected entries of the log to stdout
    # Here "main" refers to the target link of the "main" optimizer again, and
    # "validation" refers to the default name of the Evaluator extension.
    # Entries other than 'epoch' are reported by the Classifier link, called by
    # either the updater or the evaluator.
    trainer.extend(extensions.PrintReport(
        ['epoch', 'main/loss', 'validation/main/loss',
         'main/accuracy', 'validation/main/accuracy', 'elapsed_time']))

    # Print a progress bar to stdout
    trainer.extend(extensions.ProgressBar())

    if args.resume:
        # Resume from a snapshot
        chainer.serializers.load_npz(args.resume, trainer)

    # Run the training
    trainer.run()


if __name__ == '__main__':
    main()
```
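This one runs with `python ch_mnist.py` (add `-g 0` to use a GPU); it prints a per-epoch report to stdout and writes the log, `loss.png`, `accuracy.png`, and snapshots into the `result/` directory. For this three-layer MLP, the validation accuracy typically ends up around 98%.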
These scripts are the ones from the TensorFlow and Chainer tutorials; ask Google-sensei and you'll find them right away lol. The results came out as follows, and I confirmed that both TensorFlow and Chainer could solidly learn the handwritten digits.
Next time, I'd like to post about collecting a dataset for the actual image recognition. I hope you'll give it a read.
It's no exaggeration to say I posted this article because I wanted to write this part lol. I'm an idol otaku chasing Nogizaka46 and Keyakizaka46; my oshi members are Manatsu Akimoto, Miria Watanabe, and __Shiho Kato__ (my number-one oshi). If there's ever a chance to meet up at a live, please contact me lol ← I've been to quite a lot of lives lol, but I'll write more about that next time, so I'll leave it out here. I haven't been to any handshake events at all w. Is that about enough of a self-introduction on the Sakamichi side? . . . The upcoming events I plan to attend are the national tour in Niigata and Tokyo Dome (planned). Lately the 3rd-gen members have so much momentum that I can't win tickets at all. I lost the lottery for both the 3rd-gen-only shows at AiiA and the Principal stage play, so management, please do something. (Even so, I'm still firmly applying for the 3rd-gen individual handshake tickets lol.) I'm planning to stack Shiho Kato handshake tickets for Keyaki's 5th single, so please support Toshi-chan too. Qiita's moderation is strict, but if this article gets deleted, I'll just post it again lol