I’m having trouble wrapping my head around how optimisers like Adam and momentum differ from second-order optimization methods.
The latter involve calculating or approximating the Hessian, whereas momentum-based optimisers adjust their gradients using past steps (which seems quite similar in spirit to how higher-order derivatives work).
I know that mathematically and implementation-wise these two approaches are different, but can anyone offer some intuition as to how they differ in practice, perhaps with an example of where you would expect wildly different results from the two types of optimisers?
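For concreteness, here is a minimal sketch of how I currently picture the two kinds of update rules, on a toy 1-D quadratic loss; the function names and constants are just illustrative, not taken from any library.

```python
# Toy 1-D quadratic loss f(x) = 0.5 * a * x**2, so grad(x) = a * x and the
# Hessian is the constant a. Purely illustrative, not any library's API.

def momentum_step(x, velocity, grad, lr=0.1, beta=0.9):
    """First-order: blend the current gradient with an average of past gradients."""
    velocity = beta * velocity + grad        # running (exponential) sum of past gradients
    return x - lr * velocity, velocity

def newton_step(x, grad, hessian):
    """Second-order: rescale the gradient by the inverse curvature."""
    return x - grad / hessian                # exact minimiser in one step for a quadratic

a = 4.0                                      # curvature of the toy loss
x_m, v = 5.0, 0.0                            # momentum iterate and its velocity
x_n = 5.0                                    # Newton iterate
for _ in range(10):
    x_m, v = momentum_step(x_m, v, grad=a * x_m)
x_n = newton_step(x_n, grad=a * x_n, hessian=a)
print(x_m, x_n)  # momentum is still oscillating towards 0; Newton lands at 0 in one step
```

My (possibly naive) reading of this is that momentum only reuses past first-order information, while the second-order method actually uses curvature, but I’d like to hear how that plays out on real problems.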
Thanks 🙂
submitted by /u/mellow54