It’s something I’ve heard here and there, but I’ve never really gotten an explanation for it. Searching online, I found this and this.
Optimizers that build upon Adagrad aim to fix the vanishing learning rate problem, so why would they do worse?
Perhaps the minima are really unstable and would benefit from smaller learning rates. Could this issue then be alleviated by increasing the window of past gradients? (A rough sketch of what I mean by "window" is below.)
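To make the "window of past gradients" idea concrete, here is a minimal sketch (my own illustration, not from the linked posts) contrasting Adagrad's ever-growing accumulator with an exponentially decayed one in the RMSProp style. The decay rate `beta` sets the effective window, roughly 1 / (1 - beta) recent steps; all names and values are illustrative.

```python
import numpy as np

def adagrad_step(param, grad, accum, lr=0.01, eps=1e-8):
    # Accumulator only grows, so the effective learning rate shrinks forever.
    accum = accum + grad ** 2
    param = param - lr * grad / (np.sqrt(accum) + eps)
    return param, accum

def rmsprop_step(param, grad, accum, lr=0.01, beta=0.9, eps=1e-8):
    # Exponential moving average forgets old gradients; a larger beta means a
    # longer window of past gradients and more stable steps near a minimum.
    accum = beta * accum + (1 - beta) * grad ** 2
    param = param - lr * grad / (np.sqrt(accum) + eps)
    return param, accum

# Toy usage on f(x) = x^2, whose gradient is 2x.
x_ada, s_ada = 5.0, 0.0
x_rms, s_rms = 5.0, 0.0
for _ in range(200):
    x_ada, s_ada = adagrad_step(x_ada, 2 * x_ada, s_ada)
    x_rms, s_rms = rmsprop_step(x_rms, 2 * x_rms, s_rms)
print(x_ada, x_rms)  # Adagrad stalls as its step size decays; RMSProp keeps moving.
```

My question is whether pushing `beta` higher (a longer window, so behavior closer to Adagrad's full history) would give the smaller, more conservative steps that an unstable minimum might need.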
submitted by /u/Research2Vec