
[D] State of the art optimizers

I'm not sure whether there's a consensus on this. SGD with momentum, Adam, RMSProp, Adagrad, Adadelta, and probably others are all in widespread use, but is there an optimizer that is considered SOTA for DNNs "most of the time"? Or is it basically accepted that there is a collection of "good" optimizers whose efficacy varies with the task and architecture?
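For concreteness, here is a minimal sketch, assuming PyTorch, of how the optimizers named above are instantiated interchangeably through torch.optim. The toy model, data, and hyperparameters are illustrative assumptions, not recommendations; in practice each optimizer's hyperparameters would be tuned per task.

```python
# Minimal sketch (assumes PyTorch): the optimizers from the post, applied to a
# toy model. Model, data, and hyperparameters are placeholders for illustration.
import torch
import torch.nn as nn

model = nn.Linear(10, 1)                     # toy stand-in for a DNN
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss_fn = nn.MSELoss()

# Typical textbook-style hyperparameters, not tuned values.
optimizers = {
    "SGD+momentum": torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9),
    "Adam":         torch.optim.Adam(model.parameters(), lr=1e-3),
    "RMSprop":      torch.optim.RMSprop(model.parameters(), lr=1e-2),
    "Adagrad":      torch.optim.Adagrad(model.parameters(), lr=1e-2),
    "Adadelta":     torch.optim.Adadelta(model.parameters(), lr=1.0),
}

# One gradient step with each optimizer on the shared toy model.
for name, opt in optimizers.items():
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    print(f"{name}: loss = {loss.item():.4f}")
```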

submitted by /u/doctorjuice