Quick Introduction of Batch Normalization
Hung-yi Lee 李宏毅
Changing Landscape

Consider a single neuron $y = w_1 x_1 + w_2 x_2 + b$, with error $e$ between the output $y$ and the target $\hat{y}$, and loss $L = e$.

When $x_1$ takes small values (1, 2, ...), a change $+\Delta w_1$ produces a small $+\Delta y$, hence a small $+\Delta e$ and a small $+\Delta L$: the loss is smooth along the $w_1$ direction.

When $x_2$ takes large values (100, 200, ...), the same $+\Delta w_2$ produces a large $+\Delta y$, a large $+\Delta e$, and a large $+\Delta L$: the loss is steep along the $w_2$ direction.

Because the input dimensions have different ranges, the error surface is smooth in one direction and steep in another. Giving different dimensions the same range of values could create a better error surface.
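To make the scale effect concrete, here is a minimal numerical sketch (my illustration, not from the slides; squared error is assumed so the gradient has a closed form, and the input values and target are arbitrary):

```python
# Gradient of L = (y - y_hat)^2 for y = w1*x1 + w2*x2 + b.
# dL/dw_i = 2 * (y - y_hat) * x_i, so it scales with the input range.
x1, x2 = 1.0, 100.0       # x1 is small-range, x2 is large-range
w1, w2, b = 0.5, 0.5, 0.0
y_hat = 60.0              # hypothetical target, for illustration only

y = w1 * x1 + w2 * x2 + b
e = y - y_hat
print("dL/dw1 =", 2 * e * x1)  # small gradient: smooth along w1
print("dL/dw2 =", 2 * e * x2)  # 100x larger gradient: steep along w2
```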
Feature Normalization

Given feature vectors $x^1, x^2, x^3, \dots, x^R$, for each dimension $i$ compute the mean $m_i$ and the standard deviation $\sigma_i$ over all examples, then normalize

$x_i^r \leftarrow \dfrac{x_i^r - m_i}{\sigma_i}$

so that the means of all dimensions are 0 and the variances are all 1. In general, feature normalization makes gradient descent converge faster.
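A minimal sketch of this normalization in NumPy (the small `eps` term is my addition to avoid division by zero; it is not on the slide):

```python
import numpy as np

# X has shape (R, D): rows are the examples x^1..x^R, columns are dimensions.
def feature_normalize(X, eps=1e-8):
    m = X.mean(axis=0)               # m_i: mean of each dimension i
    sigma = X.std(axis=0)            # sigma_i: std of each dimension i
    return (X - m) / (sigma + eps)   # x_i^r <- (x_i^r - m_i) / sigma_i

X = np.array([[1.0, 100.0],
              [2.0, 200.0],
              [3.0, 300.0]])
Xn = feature_normalize(X)
print(Xn.mean(axis=0))  # ~0 for every dimension
print(Xn.std(axis=0))   # ~1 for every dimension
```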
Considering Deep Learning

Feature normalization handles the network inputs $x^1, x^2, x^3$, but after passing through $W^1$ the outputs $z^1, z^2, z^3$ (and, after the sigmoid, $a^1, a^2, a^3$) become the inputs to $W^2$. Their dimensions again have different ranges, so they are also difficult to optimize and also need normalization.
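A minimal sketch of normalizing the hidden-layer outputs over a batch, before the sigmoid (my illustration of the idea, not the lecture's code; the weights and batch are randomly generated, and whether to normalize before or after the activation is a separate design choice):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 2)) * np.array([1.0, 100.0])  # batch of 4 inputs
W1 = rng.normal(size=(2, 3))
W2 = rng.normal(size=(3, 1))

Z = X @ W1                           # z^1..z^4, one per row
mu = Z.mean(axis=0)                  # per-dimension mean over the batch
sigma = Z.std(axis=0)                # per-dimension std over the batch
Z_tilde = (Z - mu) / (sigma + 1e-8)  # normalized hidden activations
A = sigmoid(Z_tilde)                 # inputs to W2 now have similar ranges
out = A @ W2
```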