[R] Understanding the Neural Tangent Kernel
I’ve put up a new blog post, Understanding the Neural Tangent Kernel, that aims to distill the ideas behind the neural tangent kernel (NTK), which has been making waves in recent theoretical deep learning research. A large portion of the talks at the recent Workshop on Theory of Deep Learning at the Institute for Advanced Study were based on ideas related to the NTK. The post is slightly long, as it involves a fair bit of math (you can skip some of the proofs, though). Some linear algebra background is needed to fully grasp what is going on, but I hope my visualizations can help with that.
Code for the experiments and animations: https://github.com/rajatvd/NTK
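For anyone who wants to poke at the core object before reading: the empirical NTK is just the Gram matrix of per-example parameter gradients, K(x, x') = ⟨∂f/∂θ(x), ∂f/∂θ(x')⟩. Here's a minimal numpy sketch for a tiny one-hidden-layer network; the architecture, sizes, and the finite-difference gradients are illustrative assumptions, not code from the repo above.

```python
import numpy as np

# Empirical NTK sketch: K(x, x') = <df/dtheta(x), df/dtheta(x')>.
# Tiny one-hidden-layer net with scalar output (illustrative choice).

rng = np.random.default_rng(0)
d, h = 3, 16                      # input dim, hidden width

# Flatten all parameters into one vector so perturbation is easy.
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
w2 = rng.normal(size=h) / np.sqrt(h)
theta = np.concatenate([W1.ravel(), w2])

def f(theta, x):
    """Scalar network output for input x."""
    W1 = theta[:h * d].reshape(h, d)
    w2 = theta[h * d:]
    return w2 @ np.tanh(W1 @ x)

def grad_theta(x, eps=1e-5):
    """Central finite-difference gradient of f w.r.t. theta."""
    g = np.empty_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e, x) - f(theta - e, x)) / (2 * eps)
    return g

X = rng.normal(size=(5, d))       # a few sample inputs
J = np.stack([grad_theta(x) for x in X])   # per-example gradients
K = J @ J.T                       # empirical NTK Gram matrix

print(np.allclose(K, K.T))                     # symmetric
print(np.all(np.linalg.eigvalsh(K) > -1e-8))   # positive semidefinite
```

Since K is a Gram matrix it is symmetric and positive semidefinite by construction, which is what makes the kernel-methods machinery in the post apply.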
Feedback and suggestions are welcome!