These are my notes on Course 1, Week 4 (C1W4) of the Deep Learning Specialization.
(C1W4L01) Deep L-layer Neural Network
--Notation for deep neural networks: L is the number of layers, n^{[l]} the number of units in layer l, and a^{[l]} the activations of layer l (with a^{[0]} = x and a^{[L]} = \hat{y})
(C1W4L02) Forward Propagation in a Deep Network
z^{[l]} = W^{[l]} a^{[l-1]} + b^{[l]} \\
a^{[l]} = g^{[l]}\left(z^{[l]}\right)
--Vectorized
Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]} \\
A^{[l]} = g^{[l]}\left(Z^{[l]}\right)
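As a sketch, the vectorized forward step maps directly to numpy. The function name, toy layer sizes, and activation choices below are illustrative, not from the course:

```python
import numpy as np

def forward_one_layer(A_prev, W, b, g):
    """One vectorized forward step: Z = W A_prev + b, A = g(Z)."""
    Z = W @ A_prev + b        # shape (n_l, m); b broadcasts over the m columns
    A = g(Z)                  # activation applied elementwise
    return A, Z

# Toy 2-layer network: 3 inputs, 4 hidden ReLU units, 1 sigmoid output, m = 5 examples
rng = np.random.default_rng(0)
relu = lambda Z: np.maximum(0, Z)
sigmoid = lambda Z: 1 / (1 + np.exp(-Z))

A0 = rng.standard_normal((3, 5))                       # A^{[0]} = X
W1, b1 = rng.standard_normal((4, 3)) * 0.01, np.zeros((4, 1))
W2, b2 = rng.standard_normal((1, 4)) * 0.01, np.zeros((1, 1))

A1, Z1 = forward_one_layer(A0, W1, b1, relu)
A2, Z2 = forward_one_layer(A1, W2, b2, sigmoid)        # A^{[2]} = y_hat
```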
(C1W4L03) Getting your matrix dimensions right
--To debug, check that the matrix dimensions are consistent: W^{[l]} is (n^{[l]}, n^{[l-1]}), b^{[l]} is (n^{[l]}, 1), and Z^{[l]}, A^{[l]} are (n^{[l]}, m)
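A minimal helper one might use to assert these shapes while debugging (the function and its name are my own, not from the course):

```python
def check_layer_dims(W, b, A_prev, m):
    """Assert the standard shapes for one layer of an L-layer network."""
    n_l, n_prev = W.shape                 # W^{[l]}: (n^{[l]}, n^{[l-1]})
    assert b.shape == (n_l, 1)            # b^{[l]}: (n^{[l]}, 1)
    assert A_prev.shape == (n_prev, m)    # A^{[l-1]}: (n^{[l-1]}, m)
```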
(C1W4L04) Why deep representation?
--A deep neural network can represent complex functions with far fewer units than a shallow one; e.g., computing the XOR of n inputs takes O(log n) layers of small size, while a single hidden layer needs exponentially many units
(C1W4L05) Building Blocks of a Deep Neural Network
--Explains forward propagation and backpropagation with block diagrams; each layer caches Z^{[l]} during the forward pass for reuse in the backward pass
(C1W4L06) Forward and backward propagation
--Expands the block diagrams into the full picture of forward and backward propagation
Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]} \\
A^{[l]} = g^{[l]}\left(Z^{[l]}\right)
dZ^{[l]} = dA^{[l]} \ast g^{[l]\prime}\left(Z^{[l]}\right) \\
dW^{[l]} = \frac{1}{m}dZ^{[l]}A^{[l-1]T} \\
db^{[l]} = \frac{1}{m}\textrm{np.sum}\left(dZ^{[l]}\textrm{, axis=1, keepdims=True}\right) \\
dA^{[l-1]} = W^{[l]T}dZ^{[l]}
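These backward formulas transcribe directly into numpy. The function name and signature below are my own, and g_prime stands for the derivative g^{[l]'} of the layer's activation:

```python
import numpy as np

def backward_one_layer(dA, Z, A_prev, W, g_prime):
    """One backward step, following the formulas above."""
    m = A_prev.shape[1]
    dZ = dA * g_prime(Z)                         # elementwise product
    dW = (dZ @ A_prev.T) / m
    db = np.sum(dZ, axis=1, keepdims=True) / m
    dA_prev = W.T @ dZ                           # dA^{[l-1]}, passed to layer l-1
    return dA_prev, dW, db
```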
(C1W4L07) Parameters vs Hyperparameters
--Parameters (W^{[l]}, b^{[l]}) are learned by gradient descent; hyperparameters (learning rate \alpha, number of iterations, number of layers L, hidden units n^{[l]}, choice of activation) are set by the practitioner and determine how the parameters are learned
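For illustration, the gradient-descent update driven by the learning-rate hyperparameter might look like this sketch (function name and the example value are my own):

```python
def update_parameters(W, b, dW, db, learning_rate):
    """Gradient descent updates the parameters; the step size is a hyperparameter."""
    W = W - learning_rate * dW
    b = b - learning_rate * db
    return W, b

# e.g. W1, b1 = update_parameters(W1, b1, dW1, db1, learning_rate=0.01)
```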
(C1W4L08) What does this have to do with the brain?
--On the relationship between deep learning and the brain: a single neuron is loosely analogous to logistic regression, but the brain analogy is rarely used in practice
Reference: Deep Learning Specialization (Coursera) self-study record (table of contents)