We experiment with improvements to the existing hypergradient-based optimizers proposed in the paper Online Learning Rate Adaptation with Hypergradient Descent.
We expect that the hypergradient-based learning rate update can be made more accurate, and we aim to exploit its gains further by boosting the learning-rate updates with momentum and adaptive gradients, experimenting with these variants
alongside the model optimizers SGD, SGD with Nesterov momentum (SGDN), and Adam.
The new optimizers are compared against their respective hypergradient-descent baselines and offer advantages such as better generalization and faster convergence of the loss. The code and results of our experiments are available at https://github.com/harshalmittal4/Hypergradient_variants.
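To make the idea concrete, here is a minimal sketch (not the repository's exact implementation) of SGD whose learning rate is adapted by hypergradient descent, with an Adam-style momentum/adaptive update applied to the hypergradient itself before the learning-rate step. The function name, hyperparameter values, and the particular smoothing rule are illustrative assumptions; the baseline hypergradient update (alpha increased in proportion to the dot product of consecutive gradients) follows the cited paper.

```python
import numpy as np

def sgd_hd_adaptive(grad_fn, theta, alpha=0.01, beta=1e-3,
                    mu=0.9, nu=0.999, eps=1e-8, steps=200):
    """Sketch: SGD with a hypergradient-descent learning rate, where the
    hypergradient is smoothed with Adam-style moment estimates.
    Constants and update details are illustrative, not the repo's exact rules."""
    prev_grad = np.zeros_like(theta)
    m = v = 0.0  # first and second moment estimates of the hypergradient
    for t in range(1, steps + 1):
        grad = grad_fn(theta)
        # Hypergradient of the loss w.r.t. the learning rate:
        # the dot product of consecutive gradients (Baydin et al.).
        h = float(np.dot(grad, prev_grad))
        # Adam-style smoothing of the hypergradient before the alpha update.
        m = mu * m + (1.0 - mu) * h
        v = nu * v + (1.0 - nu) * h * h
        m_hat = m / (1.0 - mu ** t)
        v_hat = v / (1.0 - nu ** t)
        alpha = alpha + beta * m_hat / (np.sqrt(v_hat) + eps)
        # Ordinary SGD step with the adapted learning rate.
        theta = theta - alpha * grad
        prev_grad = grad
    return theta, alpha

# Usage: minimize a simple quadratic f(x) = 0.5 * ||x||^2, whose gradient is x.
theta0 = np.array([5.0, -3.0])
theta_opt, final_alpha = sgd_hd_adaptive(lambda x: x, theta0)
print(theta_opt, final_alpha)
```

The same pattern extends to the other model optimizers mentioned above: the parameter step follows SGDN or Adam, while the learning rate is updated online from the hypergradient signal.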
submitted by /u/harshalmittal4