[D] Why do effective activation functions have a bounded derivative?
Is there a reason why almost every modern activation function in deep learning has a bounded derivative? ReLU, Swish, tanh, sigmoid, and most other commonly used activation functions all have bounded derivatives.
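To make the claim concrete, here is a minimal NumPy sketch (my addition, not from the post) that evaluates the analytical derivatives of the activations named above on a dense grid and prints the largest magnitude observed. The grid range and resolution are arbitrary choices.

```python
# Check numerically that the named activations have bounded derivatives.
import numpy as np

x = np.linspace(-20, 20, 200_001)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

derivatives = {
    # ReLU'(x) is 0 for x < 0 and 1 for x > 0 (undefined at 0; use 0 here)
    "relu":    np.where(x > 0, 1.0, 0.0),
    # sigmoid'(x) = s(x) * (1 - s(x)), maximum 0.25 at x = 0
    "sigmoid": sigmoid(x) * (1 - sigmoid(x)),
    # tanh'(x) = 1 - tanh(x)^2, maximum 1 at x = 0
    "tanh":    1 - np.tanh(x) ** 2,
    # swish(x) = x * s(x); swish'(x) = s(x) + x * s(x) * (1 - s(x)), max ~1.1
    "swish":   sigmoid(x) + x * sigmoid(x) * (1 - sigmoid(x)),
}

for name, d in derivatives.items():
    print(f"{name:8s} max |f'(x)| on grid: {np.max(np.abs(d)):.4f}")
```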
My intuition is that it's because we train these networks with backprop: a bounded derivative limits how much each layer's activation can scale the gradient during the backward pass, which should help prevent exploding gradients. What do you guys think?
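One way to picture that intuition: in a deep chain, the gradient is a product of per-layer factors, and each factor contains the activation's derivative, so a bound on that derivative caps each factor's contribution. Below is a toy scalar sketch (my own illustration, not a real network) for the hypothetical chain h_{k+1} = f(w * h_k); the weight, depth, and initial value are arbitrary.

```python
# Toy illustration: each backprop factor through the chain is w * f'(pre),
# so a bound on |f'| bounds what each layer multiplies into the gradient.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def chain_gradient(f, df, w=1.0, depth=50, h0=0.5):
    """Return d(h_depth)/d(h_0) for the scalar chain h_{k+1} = f(w * h_k)."""
    h, grad = h0, 1.0
    for _ in range(depth):
        pre = w * h
        grad *= w * df(pre)   # one factor of w * f'(pre-activation) per layer
        h = f(pre)
    return grad

# |sigmoid'| <= 0.25, so the product shrinks rapidly (vanishing gradient).
dsigmoid = lambda z: sigmoid(z) * (1 - sigmoid(z))
print("sigmoid chain grad:", chain_gradient(sigmoid, dsigmoid))

# |relu'| <= 1, so each factor is at most |w| and the product never blows up
# from the activation itself (here it stays exactly 1.0).
relu = lambda z: np.maximum(z, 0.0)
drelu = lambda z: (z > 0).astype(float)
print("relu chain grad:   ", chain_gradient(relu, drelu))
```

Of course, the overall gradient also depends on the weights, so a bounded activation derivative alone doesn't rule out exploding gradients; it only keeps the activation's own contribution per layer bounded.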