
[Project] Implementing Improvements to Hypergradient optimizers

We explore improvements to the existing hypergradient-based optimizers proposed in the paper Online Learning Rate Adaptation with Hypergradient Descent.
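For reference, the SGD-HD variant in that paper updates the learning rate α by gradient descent on the loss with respect to α itself, which gives α_t = α_{t−1} + β ∇f(θ_{t−1})·∇f(θ_{t−2}), followed by the usual parameter step θ_t = θ_{t−1} − α_t ∇f(θ_{t−1}), where β is the hypergradient learning rate.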

We expect that the hypergradient-based learning rate update can be made more accurate, and we aim to exploit its gains further by boosting the learning rate updates with momentum and adaptive gradients. We experiment with

  1. Hypergradient descent with momentum, and
  2. Adam with Hypergradient,

applied alongside the model optimizers SGD, SGD with Nesterov momentum (SGDN), and Adam; a rough sketch of the first variant is given below.
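To make the idea concrete, here is a minimal NumPy sketch of one possible reading of "hypergradient descent with momentum": the model parameters are updated by plain SGD, while the learning rate itself is updated with a momentum-smoothed hypergradient step. The function name, hyperparameters, and toy objective are illustrative assumptions, not code from the repository.

```python
# Illustrative sketch only (not the repository's implementation).
# The learning rate alpha is adapted by SGD-with-momentum on the
# hypergradient h_t = grad_t . grad_{t-1}; the model uses plain SGD.
import numpy as np

def sgd_hd_momentum(grad_fn, theta0, alpha0=0.01, beta=1e-4, mu=0.9, n_steps=100):
    """SGD whose learning rate is adapted by a momentum-smoothed hypergradient."""
    theta = np.asarray(theta0, dtype=float)
    alpha = alpha0
    prev_grad = np.zeros_like(theta)   # gradient from the previous step
    velocity = 0.0                     # momentum buffer for the learning-rate update
    for _ in range(n_steps):
        grad = grad_fn(theta)
        # Hypergradient of the loss w.r.t. alpha: dot product of consecutive gradients.
        h = float(np.dot(grad, prev_grad))
        # Apply momentum to the learning-rate update instead of a raw hypergradient step.
        velocity = mu * velocity + h
        alpha = alpha + beta * velocity
        # Ordinary SGD step on the model parameters with the adapted learning rate.
        theta = theta - alpha * grad
        prev_grad = grad
    return theta, alpha

# Toy usage: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta_opt, alpha_final = sgd_hd_momentum(lambda th: th, theta0=np.array([1.0, -2.0]))
```

The "Adam with Hypergradient" variant follows the same pattern, except the learning-rate update uses Adam-style first and second moment estimates of the hypergradient rather than a single momentum buffer.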

The new optimizers are compared against their respective hypergradient-descent baselines and show advantages such as better generalization and faster convergence of the loss. The code and results of our experiments are available at https://github.com/harshalmittal4/Hypergradient_variants.

submitted by /u/harshalmittal4