I was wondering: if the cost functions of neural networks are non-convex, why are we still using stochastic gradient methods such as SGD or Adam?
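To make the question concrete, here is a minimal sketch (a toy example of my own, with an assumed starting point and learning rate) of SGD descending a one-dimensional non-convex function with two local minima. Gradient methods only find a local minimum, and which one you get depends on initialization:

```python
# Toy sketch: SGD on the non-convex function f(x) = x^4 - 3x^2 + x,
# which has two local minima. The optimizer follows the gradient and
# settles into whichever basin the starting point falls in.
import torch

def f(x):
    return x**4 - 3 * x**2 + x

x = torch.tensor(2.0, requires_grad=True)   # starting point (assumed)
opt = torch.optim.SGD([x], lr=0.01)         # swap in torch.optim.Adam to compare

for _ in range(500):
    opt.zero_grad()
    loss = f(x)
    loss.backward()   # compute df/dx
    opt.step()        # gradient step: x <- x - lr * df/dx

print(x.item(), f(x).item())  # a local minimum, not necessarily the global one
```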
Do global optimization solvers work for this? I would like to try BARON (see the sketch at the end of this post).
Does anyone have experience with them, or know of any reason why global optimizers are not being used?
Maybe the hardware is not good enough? But with all the Google TPUs and quantum computers being developed, I doubt that is the case.
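For completeness, here is a hedged sketch (my own illustration; it assumes Pyomo plus a locally installed and licensed copy of BARON) of handing the same toy function to a global solver. On a problem this small, BARON can certify the global minimum; a neural network's loss, by contrast, has millions of variables, which is the obvious scalability concern:

```python
# Sketch: posing the same non-convex function to BARON through Pyomo.
# Assumes the BARON executable is installed and licensed locally.
import pyomo.environ as pyo

m = pyo.ConcreteModel()
m.x = pyo.Var(bounds=(-3, 3))                 # global solvers need finite bounds
m.obj = pyo.Objective(expr=m.x**4 - 3 * m.x**2 + m.x, sense=pyo.minimize)

solver = pyo.SolverFactory('baron')           # commercial solver, license required
solver.solve(m)
print(pyo.value(m.x), pyo.value(m.obj))       # certified global minimum
```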