In this article, I introduce the 19 themes of "what kind of AI-related research is important now" from the 2020 review article on the Google AI Blog.
I also pick out four studies that personally shocked me.
The article is based on the following Google blog post:
"Google Research: Looking Back at 2020, and Forward to 2021" https://ai.googleblog.com/2021/01/google-research-looking-back-at-2020.html (published January 12, 2021)
The headlines of the article above give the important themes of AI research at Google.
For each theme, the reference article introduces a digest of papers published in 2020. Please see the original English blog post for any theme that interests you.
From the blog post above, I pick out the research that shocked me personally.
Of course, all of the work is important and impactful, but these four in particular struck me.
I will briefly introduce each one.
AutoML-Zero
AutoML-Zero: Evolving Code that Learns (July 2020)
This is a new approach to NAS (Neural Architecture Search) in the field of AutoML.
In previous NAS, the candidate network shapes to be searched were defined manually to some extent with rules (for example, restricting the kernel size to [A, B, C], etc.).
In short, AutoML so far has claimed to "automatically create the optimal deep learning model", but it has been close to an extension of hyperparameter tuning; because the search space is so large, reinforcement learning and other optimization methods are used to explore it.
AutoML-Zero, by contrast, does not use such hand-crafted deep learning rules (knowledge) at all; it is given only simple mathematical operations.
From there, the model is evolved with a genetic algorithm, which automatically rediscovers gradient descent and ReLU.
This suggests that the fruits of human research (such as the discovery of ReLU) can be rediscovered automatically by genetic algorithms, and it hints at the possibility that computers could automatically discover better methods and deep learning frameworks than human research has. A toy sketch of this kind of evolutionary search is shown after the figure below.
In the figure below, the horizontal axis shows time (the progress of the genetic algorithm), the vertical axis shows accuracy, and the annotations mark the points at which AutoML-Zero rediscovered techniques that humans had previously discovered.
(The figure is quoted from the reference article)
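As an aside, the following is a minimal toy sketch of the evolutionary-search idea behind AutoML-Zero, not the actual system: a "program" is a short list of primitive operations, and random mutation plus selection searches for a program that fits a hidden ReLU-shaped target. All operation names and parameters here are made up for illustration.

```python
# Toy sketch of an AutoML-Zero-style evolutionary search (illustration only, not the real system).
# A "program" is a short list of primitive ops applied to an input x; evolution mutates
# programs and keeps whichever predicts a hidden target function best.
import random
import numpy as np

OPS = {
    "add_w": lambda x, w: x + w,               # add a constant
    "mul_w": lambda x, w: x * w,               # scale
    "relu":  lambda x, w: np.maximum(x, 0.0),  # ReLU-like nonlinearity
    "sin":   lambda x, w: np.sin(x),
}

def random_program(length=4):
    return [(random.choice(list(OPS)), random.uniform(-1, 1)) for _ in range(length)]

def run_program(program, x):
    for op, w in program:
        x = OPS[op](x, w)
    return x

def fitness(program, xs, ys):
    preds = run_program(program, xs)
    return -np.mean((preds - ys) ** 2)  # higher (less negative) is better

def mutate(program):
    child = list(program)
    i = random.randrange(len(child))
    child[i] = (random.choice(list(OPS)), random.uniform(-1, 1))
    return child

# Hidden target the search should rediscover: a ReLU of a scaled input.
xs = np.linspace(-2, 2, 100)
ys = np.maximum(0.7 * xs, 0.0)

population = [random_program() for _ in range(50)]
for generation in range(200):
    population.sort(key=lambda p: fitness(p, xs, ys), reverse=True)
    survivors = population[:10]                                   # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

population.sort(key=lambda p: fitness(p, xs, ys), reverse=True)
best = population[0]
print("best fitness:", fitness(best, xs, ys))
print("best program:", [op for op, _ in best])
```

With such a simple target, the search typically settles on programs built from the mul_w and relu operations, loosely mirroring (on a tiny scale) the kind of rediscovery described above.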
Lookout: On-device Supermarket Product Recognition
On-device Supermarket Product Recognition (Lookout) (July 2020)
It detects various objects and overlays text and other information on them as AR (Augmented Reality).
It is already publicly available as an Android app:
Lookout by Google https://play.google.com/store/apps/details?id=com.google.android.apps.accessibility.reveal
(The figure is quoted from the reference article)
Unlike much research, it is great that this work has been properly turned into an application that is easy to use.
The blog post (On-device Supermarket Product Recognition (Lookout)) also introduces Lookout's architecture and the techniques behind it.
(The figure is quoted from the reference article)
Agile and Intelligent Locomotion via Deep Reinforcement Learning (May 2020)
This is a proposal for a new method of deep reinforcement learning.
The method greatly exceeds the performance of Soft Actor-Critic (SAC), which had been Google's own leading method until then.
Quadrupedal walking, which required about one hour of real-robot operation data with SAC, can now be learned in less than five minutes.
(The figure is quoted from the reference article)
The techniques are based on hierarchical reinforcement learning and meta-learning.
The algorithm seems somewhat difficult to implement, and it is not yet included in major reinforcement learning libraries (such as Ray RLlib). A conceptual sketch of the hierarchical structure is shown below.
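To make the hierarchical idea more concrete, here is a conceptual sketch (my own illustration, not the paper's implementation): a high-level policy emits a latent command at a low frequency, and a low-level policy converts the observation plus that command into joint actions at a higher frequency. All network sizes and names here are assumptions.

```python
# Conceptual sketch of a hierarchical locomotion policy (illustrative only).
import torch
import torch.nn as nn

class HighLevelPolicy(nn.Module):
    """Runs at a low frequency and outputs a latent command (e.g. desired gait/velocity)."""
    def __init__(self, obs_dim=32, latent_dim=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

    def forward(self, obs):
        return self.net(obs)

class LowLevelPolicy(nn.Module):
    """Runs at a high frequency and maps (observation, command) to joint actions."""
    def __init__(self, obs_dim=32, latent_dim=8, action_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim), nn.Tanh(),  # joint targets scaled to [-1, 1]
        )

    def forward(self, obs, command):
        return self.net(torch.cat([obs, command], dim=-1))

high, low = HighLevelPolicy(), LowLevelPolicy()
obs = torch.randn(1, 32)
command = high(obs)            # updated only every k low-level steps
for _ in range(10):            # low-level control loop
    action = low(obs, command)
    obs = torch.randn(1, 32)   # placeholder for the next robot observation
```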
Chip Design with Deep Reinforcement Learning (April 2020)
I think many people wonder, "Can deep reinforcement learning actually be used in real business?"
I think that is still some way off in general, but in this research, the placement of elements on a chip such as a CPU is optimized with deep reinforcement learning and a graph neural network.
The elements are arranged on the chip so that the total wirelength is as short as possible; a minimal sketch of this objective follows the figure below.
(The figure is quoted from the reference article)
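As a rough illustration of the objective (not Google's actual method), the sketch below computes the half-perimeter wirelength (HPWL) of a placement, a standard proxy for total wirelength; an RL agent would place elements so as to minimize this kind of cost. The element names and netlist are made up for illustration.

```python
# Minimal sketch of the placement objective: total half-perimeter wirelength (HPWL).
import numpy as np

placements = {          # element -> (x, y) position on the chip canvas (toy values)
    "alu":   (0.1, 0.2),
    "cache": (0.8, 0.3),
    "io":    (0.4, 0.9),
}
netlist = [("alu", "cache"), ("alu", "io"), ("cache", "io")]  # connected element pairs

def total_hpwl(placements, netlist):
    """Sum the bounding-box half-perimeter of every net: a cheap proxy for wirelength."""
    total = 0.0
    for net in netlist:
        xs = [placements[e][0] for e in net]
        ys = [placements[e][1] for e in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

print("wirelength cost:", total_hpwl(placements, netlist))
# An RL policy would place one element per step and receive a reward based on -cost.
```

In the actual research, a graph neural network encodes the chip netlist and the agent places elements one at a time, with the reward derived from proxy metrics like this.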
In the article, Google itself states:
"The system is able to generate placements that usually outperform those of human chip design experts, and we have been using this system (running on TPUs) to do placement and layout for major portions of future generations of TPUs."
That is, the technology is being used for the element placement of next-generation TPUs, meaning it is already being applied in Google's own business.
I still cannot clearly articulate in which cases deep reinforcement learning is better suited than genetic algorithms for generation tasks that are essentially optimization problems.
I feel that business applications of deep reinforcement learning are likely to increase more and more in 2021.
That concludes this introduction to the important themes of AI research (Google edition).
I plan to keep following Google's AI research throughout 2021 and post about it on Twitter.
[Reference] ●Google Research: Looking Back at 2020, and Forward to 2021 https://ai.googleblog.com/2021/01/google-research-looking-back-at-2020.html
●AutoML-Zero: Evolving Code that Learns https://ai.googleblog.com/2020/07/automl-zero-evolving-code-that-learns.html
●On-device Supermarket Product Recognition (Lookout) https://ai.googleblog.com/2020/07/on-device-supermarket-product.html
●Agile and Intelligent Locomotion via Deep Reinforcement Learning https://ai.googleblog.com/2020/05/agile-and-intelligent-locomotion-via.html
●Soft Actor-Critic: Deep Reinforcement Learning for Robotics https://ai.googleblog.com/2019/01/soft-actor-critic-deep-reinforcement.html
●Chip Design with Deep Reinforcement Learning https://ai.googleblog.com/2020/04/chip-design-with-deep-reinforcement.html
**[Article author]** Yutaro Ogawa, Information Services International-Dentsu (ISID), AI Transformation Center, Product Development Group. Main book: "Learn while making! Deep learning by PyTorch" (details in self-introduction)
[Information sharing] **Twitter account**: Yutaro Ogawa @ISID_AI_team
On Twitter, we share articles and sites we find interesting about IT/AI and business/management.
[Recruitment information] ・New graduate hiring (2022): Data Scientist ・Mid-career hiring: AI Engineer / Consultant, AI Architect (list of mid-career openings)
[Disclaimer] The content of this article is the author's own opinion and communication, not the official position of the company the author belongs to.