tl;dr: I completed Udacity's Deep Learning Nanodegree. TensorFlow is hard, and hyperparameter tuning is difficult, but I learned a lot.
Udacity is a so-called MOOC (Massive Open Online Course) site where you can take programming and design courses online. Udacity itself is free, but it also offers paid mini-degrees (Nanodegrees) that bundle several courses, and some courses are available only through them.
Unlike other online course sites such as Lynda.com and Code School, you are required to submit assignments in addition to watching the videos, and you do not pass unless they meet the required standards. Compared with Coursera, which is the same type of service, Coursera feels more like a university lecture, while Udacity feels more like corporate training.
The Deep Learning Nanodegree is a new program that started in 2017.
It specializes in deep learning, and if you finish it successfully, you get guaranteed admission to the Self-Driving Car and Artificial Intelligence Nanodegrees. According to a TechCrunch article, only 16% and 4.5% of applicants are admitted to those, respectively, so that alone may make it worthwhile. That said, when I applied to the self-driving course claiming I could write Python despite being a mechanical engineering graduate, I was admitted normally, so if you have an engineering background you may be able to skip straight to the course you actually want.
Note that the Deep Learning Nanodegree does not cover non-deep machine learning such as SVMs and decision trees, which do appear in the self-driving course. Conversely, this is the only one that covers RNNs (at least as far as I confirmed through Term 1 of the self-driving course). Perhaps the Machine Learning Engineer Nanodegree includes both?
By the way, when you open the Udacity page for this course, the YouTuber Siraj is featured prominently as the instructor.
However, when I actually took it, the main instructor turned out to be Mat, shown next to him. The explanatory sections that precede each project assignment are also taught by Mat. Siraj's lessons and Mat's lessons alternate, but since the Siraj parts are essentially standalone reposts of his YouTube videos, my impression was that his role is an accessible introduction to lower the barrier to entry.
In the GAN section, Ian Goodfellow, the author of the original GAN paper, explains GANs himself. That was amazing.
Now, the actual lectures combine explanation videos and text on the web with hands-on work in Jupyter Notebooks. All the code is [published on GitHub](https://github.com/udacity/deep-learning), and each assignment is submitted as a filled-in Jupyter Notebook.
Due to the nature of deep learning, a GPU is required, so free credits for AWS and FloydHub are provided. However, if you run Jupyter Notebook as a server there for convenience, it is reachable by anyone without IP restrictions, so it may be safer to have a GPU machine at hand.
Since the framework is TensorFlow, you are often required to implement things at a lower layer than in Chainer or Keras. That builds correspondingly more skill, but it also demands a serious time commitment. There are Slack channels and forums, but even if you stay on schedule you are reportedly in the top 20% by the end of Project 4, so the dropout rate is probably high.
Below is what you actually learn in each part.
Neural Networks: How to use Anaconda and Jupyter Notebook, and how to build a neural network. The project is to implement a sigmoid-based neural network using numpy.
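As a rough illustration of what that first project involves — this is my own minimal sketch, not the course's solution code, and all sizes and names are made up — here is a tiny sigmoid network with one forward pass and one gradient-descent step in numpy:

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid activation
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_prime(x):
    # Derivative of the sigmoid, used in backpropagation
    s = sigmoid(x)
    return s * (1.0 - s)

rng = np.random.default_rng(0)

# Tiny 2-3-1 network; weights drawn from a normal distribution
w_ih = rng.normal(0.0, 0.5, size=(2, 3))  # input -> hidden
w_ho = rng.normal(0.0, 0.5, size=(3, 1))  # hidden -> output

x = np.array([[0.5, -0.2]])   # one training example
y = np.array([[0.4]])         # its target
lr = 0.5                      # learning rate

# Forward pass
h_in = x @ w_ih
h_out = sigmoid(h_in)
o_in = h_out @ w_ho
o_out = sigmoid(o_in)

# Backward pass (squared-error loss)
o_err = (y - o_out) * sigmoid_prime(o_in)
h_err = (o_err @ w_ho.T) * sigmoid_prime(h_in)

# Gradient-descent weight update
w_ho += lr * h_out.T @ o_err
w_ih += lr * x.T @ h_err
```

The project itself is larger (a real dataset, minibatches, validation), but this is the core loop you write by hand.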
Convolutional Networks: Sentiment analysis and TFLearn, implementation of backpropagation, and an explanation of CNNs, including a section on dropout. The project is image classification.
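Dropout itself fits in a few lines. A hedged sketch of the "inverted dropout" variant in numpy (scaling at train time so the expected activation is unchanged; the function name and shapes here are my own, not from the course):

```python
import numpy as np

def dropout(activations, keep_prob, rng):
    # Inverted dropout: zero each unit with probability (1 - keep_prob),
    # then divide survivors by keep_prob so the expected value is preserved.
    mask = rng.random(activations.shape) < keep_prob
    return activations * mask / keep_prob

rng = np.random.default_rng(42)
h = np.ones((4, 10))                          # a batch of hidden activations
h_drop = dropout(h, keep_prob=0.8, rng=rng)   # train-time pass
```

At test time you simply skip the call; no rescaling is needed, which is the point of the inverted form.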
Recurrent Neural Networks: RNNs, word2vec, hyperparameters, TensorBoard, weight initialization, transfer learning, seq2seq, and a brief touch on reinforcement learning. The projects are generating Simpsons-style dialogue and machine translation.
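The recurrence at the heart of the RNN lessons is short to write down. A minimal numpy sketch (toy sizes of my own choosing, not course material) of a vanilla RNN unrolled over a short sequence:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # One step of a vanilla RNN: new hidden state from the current
    # input and the previous hidden state.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(1)
input_size, hidden_size = 4, 8

W_xh = rng.normal(0, 0.1, (input_size, hidden_size))   # input weights
W_hh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # recurrent weights
b_h = np.zeros(hidden_size)

h = np.zeros(hidden_size)                  # initial hidden state
sequence = rng.normal(0, 1, (5, input_size))  # 5 time steps of input
for x_t in sequence:
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)  # state carries across steps
```

The LSTM cells used in the actual projects replace this single tanh with a gated update, but the unrolling over time is the same idea.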
Personally, the most interesting part was weight initialization. Comparing the final accuracy you get from different ranges ([0, 1] vs. [-1, 1]) and different distributions (uniform vs. truncated normal) was eye-opening.
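To see why the range matters: weights drawn from [0, 1] all share a positive mean, biasing every unit the same way, while [-1, 1] and a truncated normal are centered at zero. A quick numpy sketch (the `truncated_normal` helper is my own approximation of what TensorFlow's truncated normal does — resample anything beyond two standard deviations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# The three schemes compared in the lesson
uniform_01 = rng.uniform(0.0, 1.0, n)    # range [0, 1]: mean ~0.5, all positive
uniform_sym = rng.uniform(-1.0, 1.0, n)  # range [-1, 1]: mean ~0, symmetric

def truncated_normal(rng, n, std=1.0, bound=2.0):
    # Hypothetical helper: resample values beyond +/- bound*std,
    # mimicking a truncated normal distribution.
    samples = rng.normal(0.0, std, n)
    bad = np.abs(samples) > bound * std
    while np.any(bad):
        samples[bad] = rng.normal(0.0, std, bad.sum())
        bad = np.abs(samples) > bound * std
    return samples

trunc = truncated_normal(rng, n)

# Zero-centered schemes keep their mean near 0; [0, 1] does not
print(uniform_01.mean(), uniform_sym.mean(), trunc.mean())
```

With the [0, 1] scheme every weight pushes activations in the same direction, which is one reason the lesson's accuracy comparisons come out the way they do.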
During the course, both TensorFlow 1.0 and Keras 2.0 were announced. The sample code in the latter half of the course had already been updated to TensorFlow 1.0; the response was very quick.
Since the field of machine learning evolves rapidly, information in printed books tends to go out of date. Continuous updating is the real strength of an online course, and I felt it was a particularly good fit for this field.