Deep Learning Specialization (Coursera) Self-study record (C4W2)

Introduction

These are my notes for Course 4, Week 2 (C4W2) of the Deep Learning Specialization.

(C4W2L01) Why look at case studies?

Contents

- This week's outline
  - Classic Networks
    - LeNet-5
    - AlexNet
    - VGG
  - ResNet (152 layers)
  - Inception

(C4W2L02) Classic Networks

Contents

LeNet-5 (1998)

  1. Input ; 32 x 32 x 1
  2. CONV (5x5, s=1) ; 28 x 28 x 6
  3. Avg POOL (f=2, s=2) ; 14 x 14 x 6
  4. CONV (5x5, s=1) ; 10 x 10 x 16
  5. Avg POOL (f=2, s=2) ; 5 x 5 x 16
  6. FC; 120 units
  7. FC; 84 units
  8. $\hat{y}$

- Number of parameters: 60k
- $n_H$, $n_W$ become smaller while $n_C$ becomes larger
- A typical layer pattern: CONV, POOL, CONV, POOL, FC, FC
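A minimal tf.keras sketch of the LeNet-5 sequence above (my own reconstruction, not code from the course; the 1998 original used sigmoid/tanh-style activations, so tanh and a 10-way softmax output are assumptions here):

```python
# Sketch of the LeNet-5 layer sequence listed above (tf.keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),                     # 32 x 32 x 1
    layers.Conv2D(6, 5, strides=1, activation="tanh"),   # 28 x 28 x 6
    layers.AveragePooling2D(pool_size=2, strides=2),     # 14 x 14 x 6
    layers.Conv2D(16, 5, strides=1, activation="tanh"),  # 10 x 10 x 16
    layers.AveragePooling2D(pool_size=2, strides=2),     # 5 x 5 x 16
    layers.Flatten(),
    layers.Dense(120, activation="tanh"),                # FC, 120 units
    layers.Dense(84, activation="tanh"),                 # FC, 84 units
    layers.Dense(10, activation="softmax"),              # \hat{y}
])
model.summary()  # total parameter count is on the order of 60k
```

AlexNet and VGG below translate into code the same way, layer by layer.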

AlexNet (2012)

  1. Input ; 227 x 227 x 3
  2. CONV (11x11, s=4) ; 55 x 55 x 96
  3. Max POOL (3x3, s=2) ; 27 x 27 x 96
  4. CONV (5x5, same) ; 27 x 27 x 256
  5. Max POOL (3x3, s=2) ; 13 x 13 x 256
  6. CONV (3x3, same) ; 13 x 13 x 384
  7. CONV (3x3, same) ; 13 x 13 x 384
  8. CONV (3x3, same) ; 13 x 13 x 256
  9. Max POOL (3x3, s=2) ; 6 x 6 x 256
  10. FC; 4096 units
  11. FC; 4096 units
  12. Softmax; 1000 outputs

VGG-16

  1. Input ; 224 x 224 x 3
  2. CONV64 x 2 ; 224 x 224 x 64
  3. POOL ; 112 x 112 x 64
  4. CONV128 x 2 ; 112 x 112 x 128
  5. POOL ; 56 x 56 x 128
  6. CONV256 x 3; 56 x 56 x 256
  7. POOL ; 28 x 28 x 256
  8. CONV512 x 3; 28 x 28 x 512
  9. POOL ; 14 x 14 x 512
  10. CONV512 x 3 ; 14 x 14 x 512
  11. POOL ; 7 x 7 x 512
  12. FC; 4096 units
  13. FC; 4096 units
  14. Softmax; 1000 outputs

- Number of parameters: $\sim$138M
- Relatively uniform structure
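The uniformity shows up clearly when the network is written as a loop; a minimal sketch, assuming tf.keras (my own, not course code):

```python
# VGG-16's uniform structure: blocks of 3x3 "same" convolutions followed by
# 2x2 max pooling, with the channel count doubling from block to block.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([layers.Input(shape=(224, 224, 3))])
for n_convs, n_filters in [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]:
    for _ in range(n_convs):
        model.add(layers.Conv2D(n_filters, 3, padding="same", activation="relu"))
    model.add(layers.MaxPooling2D(pool_size=2, strides=2))
model.add(layers.Flatten())
model.add(layers.Dense(4096, activation="relu"))
model.add(layers.Dense(4096, activation="relu"))
model.add(layers.Dense(1000, activation="softmax"))
model.summary()  # roughly 138M parameters in total
```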

Impressions

- A sign of how fast this field moves: a network from 2015 is already described as "classic"

(C4W2L03) Residual Networks (ResNet)

Contents

- In a plain network, training error grows as the network gets deeper (beyond a certain depth)
- With ResNet, training error keeps decreasing even past 100 layers, so it is effective for training deep networks

(C4W2L04) Why ResNets work

Contents

- With a skip connection, $a^{[l+2]} = g(z^{[l+2]} + a^{[l]})$
- If $W^{[l+2]} = 0$ and $b^{[l+2]} = 0$, then $z^{[l+2]} = 0$, so $a^{[l+2]} = g(a^{[l]}) = a^{[l]}$ (with ReLU, since $a^{[l]} \ge 0$)
- A residual block therefore easily learns the identity function, so adding it does not hurt performance
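A minimal sketch of such a block, assuming tf.keras and 3x3 "same" convolutions (details like batch norm are omitted and vary in the actual ResNet):

```python
# Residual block sketch: the shortcut adds a^{[l]} to z^{[l+2]} before the
# second ReLU. If W^{[l+2]} = 0 and b^{[l+2]} = 0, the block outputs
# g(a^{[l]}) = a^{[l]}, i.e. the identity.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(a_l, n_filters):
    # assumes a_l already has n_filters channels so the shapes match
    z1 = layers.Conv2D(n_filters, 3, padding="same")(a_l)
    a1 = layers.Activation("relu")(z1)                     # a^{[l+1]}
    z2 = layers.Conv2D(n_filters, 3, padding="same")(a1)   # z^{[l+2]}
    z2 = layers.Add()([z2, a_l])                           # z^{[l+2]} + a^{[l]}
    return layers.Activation("relu")(z2)                   # a^{[l+2]}
```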

(C4W2L05) Network in network and 1x1 convolutions

Contents
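- A 1x1 convolution acts across channels, so it can shrink (or grow) $n_C$ while leaving $n_H \times n_W$ unchanged

A minimal sketch of this, assuming tf.keras (not course code):

```python
# A 1x1 convolution: 32 filters of size 1x1x192 shrink a 28x28x192 volume
# to 28x28x32 without touching height or width.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 28, 28, 192))          # one 28x28x192 volume
y = layers.Conv2D(32, kernel_size=1, activation="relu")(x)
print(y.shape)                                  # (1, 28, 28, 32)
```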

(C4W2L06) Inception network motivation

Contents

- Apply the following to the input (28x28x192) in parallel and concatenate the outputs along the channel axis:
  - 1x1 → Output ; 28x28x64
  - 3x3 → Output ; 28x28x128
  - 5x5 → Output ; 28x28x32
  - Max POOL → Output ; 28x28x32
- Total: 28x28x256
- Apply all the different filter sizes and pooling, and let the network learn which to use
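A sketch of these parallel branches, assuming tf.keras (in the full module the conv branches also get 1x1 bottlenecks, and the pooling branch needs a 1x1 convolution to bring its 192 channels down to 32):

```python
# Parallel Inception branches concatenated on the channel axis:
# 64 + 128 + 32 + 32 = 256 output channels.
import tensorflow as tf
from tensorflow.keras import layers

x = tf.random.normal((1, 28, 28, 192))                            # input
b1 = layers.Conv2D(64, 1, padding="same", activation="relu")(x)   # 28x28x64
b2 = layers.Conv2D(128, 3, padding="same", activation="relu")(x)  # 28x28x128
b3 = layers.Conv2D(32, 5, padding="same", activation="relu")(x)   # 28x28x32
b4 = layers.MaxPooling2D(3, strides=1, padding="same")(x)         # 28x28x192
b4 = layers.Conv2D(32, 1, activation="relu")(b4)                  # 28x28x32
y = layers.Concatenate()([b1, b2, b3, b4])
print(y.shape)                                                    # (1, 28, 28, 256)
```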

(C4W2L07) Inception network

Contents

- Description of the Inception network, built from Inception modules with 1x1 "bottleneck" layers to cut computation
- The full architecture is called GoogLeNet
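The reason for the bottleneck is computational cost. Using the example numbers from the lectures (a 28x28x192 input, a 5x5 convolution to 32 channels, and a 1x1 bottleneck down to 16 channels), the multiplication counts can be checked directly:

```python
# Multiplication counts for the lectures' bottleneck example.
direct = 28 * 28 * 32 * (5 * 5 * 192)          # 5x5 conv applied directly
bottleneck = (28 * 28 * 16 * (1 * 1 * 192)     # 1x1 conv: 28x28x192 -> 28x28x16
              + 28 * 28 * 32 * (5 * 5 * 16))   # 5x5 conv: 28x28x16 -> 28x28x32
print(f"{direct:,} vs {bottleneck:,}")         # 120,422,400 vs 12,443,648
```

The bottleneck cuts the cost by roughly a factor of ten.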

(C4W2L08) Using open-source implementation

Contents

- Description of downloading source code from GitHub (`git clone`)

(C4W2L09) Transfer Learning

Contents

- $x$ → layer → layer → $\cdots$ → layer → softmax → $\hat{y}$
- With little data: train only the softmax layer (all other parameters frozen)
- With a larger dataset: train, for example, the later layers and freeze the earlier ones
- With very large data: train the entire network
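A minimal sketch of the small-data case, assuming tf.keras with an ImageNet-pretrained ResNet50 as the base (the base network and the 5-class head are illustrative choices, not from the course):

```python
# Transfer learning: freeze a pretrained base and train only a newly added
# softmax layer, per the small-data case above.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # fix the pretrained parameters

model = models.Sequential([
    base,
    layers.Dense(5, activation="softmax"),   # new head (5 classes: an example)
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# With more data, unfreeze the later layers of `base` and fine-tune them too;
# with very large data, set base.trainable = True and retrain everything.
```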

(C4W2L10) Data augmentation

Contents
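- Common augmentation methods: mirroring, random cropping, color shifting

A minimal sketch of these, assuming tf.image (not course code):

```python
# Mirroring, random cropping, and a simple color shift with tf.image.
# Assumes the input image is a float32 tensor larger than 224x224x3.
import tensorflow as tf

def augment(image):
    image = tf.image.random_flip_left_right(image)            # mirroring
    image = tf.image.random_crop(image, size=(224, 224, 3))   # random cropping
    image = tf.image.random_hue(image, max_delta=0.1)         # color shifting
    return image
```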

(C4W2L11) The state of computer vision

Contents

- In terms of the amount of data available today (roughly): speech recognition $\gt$ image recognition $\gt$ object detection (locating where objects are)
- With lots of data, a simple algorithm with less hand-engineering is enough
- With little data, more hand-engineering ("hacks") is needed

Exercise

- This week's exercise is an implementation of ResNet

Reference

- Deep Learning Specialization (Coursera) Self-study record (table of contents)
