[P] learn2learn: A PyTorch Meta-Learning Library
We are pleased to share with you our meta-learning library, which started as a project at the PyTorch Hackathon.
learn2learn is a PyTorch library for all things meta-learning. Our goal is to support as many meta-learning algorithms as possible (be it few-shot learning, meta-descent, or meta-RL) and to enable researchers to develop better methods and easily compare against the existing literature.
Our current features include:
- Modular API: implement your own training loops with our low-level utilities.
- Meta-learning algorithms: MAML, FOMAML, Meta-SGD, ProtoNets, DiCE, and more.
- Task generator with a unified API, compatible with torchvision, torchtext, torchaudio, and cherry.
- Standardized meta-learning tasks for vision (Omniglot, mini-ImageNet), reinforcement learning (Particles, MuJoCo), and even text (news classification).
- 100% compatible with PyTorch — use your own modules, datasets, or libraries!
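To give a flavor of what these algorithms do, here is a minimal, dependency-free sketch of the first-order MAML (FOMAML) inner/outer loop on toy linear-regression tasks. Note that this is *not* learn2learn's API — all names here are made up for illustration; see the examples folder for real usage with PyTorch modules.

```python
import random

# Conceptual sketch of first-order MAML (FOMAML) on toy tasks y = a * x,
# where each task has a different slope a. The meta-learner finds a shared
# initialization w from which one gradient step adapts well to any task.
# Pure Python for clarity; learn2learn does this with arbitrary nn.Modules.

random.seed(0)

def loss_grad(w, xs, a):
    """Gradient of the MSE loss: d/dw mean((w*x - a*x)^2)."""
    return sum(2 * (w - a) * x * x for x in xs) / len(xs)

def sample_task():
    """A task is a slope a plus a few support points."""
    a = random.uniform(0.5, 1.5)
    xs = [random.uniform(-1, 1) for _ in range(10)]
    return a, xs

alpha, beta = 0.1, 0.05  # inner (adaptation) and outer (meta) step sizes
w = 0.0                  # meta-parameter: the shared initialization

for step in range(2000):
    a, xs = sample_task()
    w_adapted = w - alpha * loss_grad(w, xs, a)  # inner loop: adapt to the task
    w = w - beta * loss_grad(w_adapted, xs, a)   # FOMAML outer update on w

# With symmetric tasks, the meta-init converges toward the mean slope (1.0),
# so a single adaptation step lands close to any new task.
print(round(w, 2))
```

Full (second-order) MAML would backpropagate through the inner update itself; the first-order variant shown here simply applies the adapted-parameter gradient to the meta-parameters, which is cheaper and often works nearly as well.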
If this is of interest to you, have a look at the following links:
- GitHub: learnables/learn2learn
- Examples: learn2learn/examples
- Documentation: learn2learn.net
- Slack: slack.learn2learn.net
Let us know what you think and how we can help you in your research!
PS: learn2learn was also accepted as a poster at the PyTorch Developer Conference, so you'll be able to learn all about it there!