I’m not sure whether there’s a consensus on this. Of course, SGD with momentum, Adam, RMSProp, Adagrad, Adadelta, and probably others are all in widespread use, but is there an optimizer that is considered SOTA for DNNs “most of the time”? Or is it generally accepted that there is a collection of “good” optimizers whose efficacy varies depending on the task and architecture?
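To make the “varies by task” view concrete, here is a minimal sketch, assuming PyTorch, a toy linear-regression task, and hyperparameters chosen purely for illustration, of how one might run the optimizers mentioned above head to head on the same model and data. It is not a real benchmark, just a template for the kind of comparison the question implies:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: learn y = 3x + 1 with a little noise.
X = torch.randn(512, 1)
y = 3 * X + 1 + 0.1 * torch.randn(512, 1)

# One factory per optimizer so each run gets a fresh optimizer state.
# Learning rates here are illustrative defaults, not tuned values.
optimizers = {
    "SGD+momentum": lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "Adam":         lambda p: torch.optim.Adam(p, lr=0.001),
    "RMSprop":      lambda p: torch.optim.RMSprop(p, lr=0.001),
    "Adagrad":      lambda p: torch.optim.Adagrad(p, lr=0.01),
    "Adadelta":     lambda p: torch.optim.Adadelta(p, lr=1.0),
}

loss_fn = nn.MSELoss()
for name, make_opt in optimizers.items():
    model = nn.Linear(1, 1)              # fresh model per optimizer
    opt = make_opt(model.parameters())
    for step in range(200):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    print(f"{name:14s} final MSE: {loss.item():.4f}")
```

On a trivial problem like this most of them converge fine; the interesting (and task-dependent) differences tend to show up only on real architectures, which is exactly why rankings from one setup rarely transfer to another.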
submitted by /u/doctorjuice