[P] higher. A PyTorch library to do gradient-based hyperparameter optimization and meta-learning without changing models/optimizers
I wanted to share this brand-new project I just stumbled upon from Facebook AI Research.
Implementing gradient-based hyperparameter optimization and meta-learning has always been hard because of non-differentiable optimizers and stateful, non-functional models. This library makes things easier by automatically replacing existing stateful models with stateless (functional) ones at run time. It also implements differentiable versions of most of the torch.optim optimizers (although you can't use third-party optimizers out of the box).
This means we can finally differentiate through the usual training-loop code with very few changes!
I haven't tried the library myself, but it seems really easy to use, and judging by the stars it looks promising. Let me know what you think.