Learn About Our Meetup

4500+ Members

[R]: Painless Stochastic Gradient: Interpolation, Line-Search, and Convergence Rates

The authors apply a classic Armijo line-search inside SGD to automatically set the step size when training neural networks. They are also able to prove convergence rates for minimizing convex and non-convex objective functions satisfying certain growth conditions. An aside, but as an optimization-head myself, it's nice to see some of the traditional optimization ideas make their way into an ML context.
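To make the idea concrete, here is a minimal sketch (not the paper's exact algorithm) of SGD where each step backtracks on the current mini-batch until the Armijo sufficient-decrease condition holds; the function names, constants, and reset-to-`eta_max` rule are my own assumptions for illustration:

```python
import numpy as np

def sgd_armijo(loss_fn, grad_fn, w, batches, n_epochs=10,
               eta_max=1.0, c=0.5, beta=0.7):
    """SGD with a backtracking Armijo line search on each mini-batch.

    Hypothetical sketch: step size resets to eta_max every iteration,
    then shrinks by beta until
        f(w - eta*g) <= f(w) - c * eta * ||g||^2
    holds on the same batch used to compute the gradient g.
    """
    for _ in range(n_epochs):
        for batch in batches:
            g = grad_fn(w, batch)
            f0 = loss_fn(w, batch)
            eta = eta_max
            # Backtrack until the Armijo condition is satisfied
            # (the eta floor guards against an infinite loop).
            while loss_fn(w - eta * g, batch) > f0 - c * eta * (g @ g) and eta > 1e-10:
                eta *= beta
            w = w - eta * g
    return w

# Toy least-squares problem in the interpolation regime (zero label noise),
# which is the setting where the paper's guarantees apply.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
loss = lambda w, b: 0.5 * np.mean((b[0] @ w - b[1]) ** 2)
grad = lambda w, b: b[0].T @ (b[0] @ w - b[1]) / len(b[1])
batches = [(X[i:i + 10], y[i:i + 10]) for i in range(0, 50, 10)]
w = sgd_armijo(loss, grad, np.zeros(3), batches, n_epochs=50)
```

Because the data are interpolated exactly, the line search can keep the step size large and `w` converges to `w_true` without any manual learning-rate tuning.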

submitted by /u/sinsecticide

Plug yourself into AI and don't miss a beat


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.