[P] higher. A PyTorch library to do gradient-based hyperparameter optimization and meta-learning without changing models/optimizers

I wanted to share this new project I just stumbled upon from Facebook AI Research.

Implementing gradient-based hyperparameter optimization and meta-learning has always been hard because of non-differentiable optimizers and stateful, non-functional models. This library aims to make things easier by automatically replacing existing stateful models with stateless ones at run-time. It also implements differentiable versions of most of the torch.optim optimizers (although you can't use third-party ones out of the box).

This means that we can finally differentiate through the usual training loop code with very few changes!
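For context, here is a minimal, untested sketch of what that could look like, loosely based on the usage shown in the repo's README: higher.innerloop_ctx, the functional model fmodel, the differentiable optimizer diffopt, and the copy_initial_weights flag come from there, while the model, optimizers, and dummy task data below are just placeholders I made up for illustration.

    # Sketch of a MAML-style inner/outer loop with higher (untested).
    import torch
    import torch.nn as nn
    import higher

    model = nn.Linear(10, 2)                         # placeholder model
    inner_opt = torch.optim.SGD(model.parameters(), lr=0.1)
    meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Dummy support/query data for one task.
    x_support, y_support = torch.randn(16, 10), torch.randint(0, 2, (16,))
    x_query, y_query = torch.randn(16, 10), torch.randint(0, 2, (16,))

    meta_opt.zero_grad()
    # innerloop_ctx swaps in a stateless copy of the model (fmodel) and a
    # differentiable version of the optimizer (diffopt) for the inner loop.
    # copy_initial_weights=False lets gradients reach the original parameters.
    with higher.innerloop_ctx(model, inner_opt, copy_initial_weights=False) as (fmodel, diffopt):
        for _ in range(5):                           # inner-loop adaptation steps
            inner_loss = nn.functional.cross_entropy(fmodel(x_support), y_support)
            diffopt.step(inner_loss)                 # takes the loss directly, stays differentiable
        # Outer loss uses the adapted parameters; backward() propagates through
        # the whole inner loop back to the original model parameters.
        outer_loss = nn.functional.cross_entropy(fmodel(x_query), y_query)
        outer_loss.backward()
    meta_opt.step()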

I didn’t try the library myself, but it seems really easy to use, and judging from the stars it looks really promising. Let me know what you think.

repo: https://github.com/facebookresearch/higher

submitted by /u/rikkajounin