[R] Meta-Learning Deep Energy-Based Memory Models
Interesting research from DeepMind:
“Our new work on memory uses a neural network’s weights as fast and compressive associative storage. Reading from the memory is performed by approximate minimization of the energy modeled by the network.”
“Unlike classical associative memory models such as Hopfield networks, we are not limited in the expressivity of our energy model, and make use of deep architectures with fully-connected, convolutional and recurrent layers.”
“For this to work, stored patterns must be local minima of the energy. We use recent advances in gradient-based meta-learning to write into the memory such that this requirement approximately holds.”
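The core read operation described above — treating stored patterns as local minima of an energy and retrieving by gradient descent from a query — can be illustrated with a toy example. The sketch below is not the paper's method: it uses a simple softmin-of-distances energy (closer to a modern Hopfield network than a deep meta-learned energy), and patterns are "written" by direct storage rather than meta-learned weight updates. It only demonstrates the reading-as-energy-minimization idea.

```python
import numpy as np

def energy(x, patterns, beta=4.0):
    # Toy energy whose local minima sit near the stored patterns
    # (a stand-in for the paper's deep network energy).
    d = 0.5 * np.sum((patterns - x) ** 2, axis=1)
    return -np.log(np.sum(np.exp(-beta * d))) / beta

def read(query, patterns, beta=4.0, lr=0.5, steps=100):
    # Reading = approximate minimization of the energy,
    # starting gradient descent from the (possibly corrupted) query.
    x = query.copy()
    for _ in range(steps):
        d = 0.5 * np.sum((patterns - x) ** 2, axis=1)
        logits = -beta * d
        w = np.exp(logits - logits.max())  # numerically stable softmax
        w /= w.sum()
        grad = x - w @ patterns            # gradient of the toy energy
        x -= lr * grad
    return x

rng = np.random.default_rng(0)
patterns = rng.standard_normal((5, 16))    # "written" memory contents
noisy = patterns[2] + 0.1 * rng.standard_normal(16)
retrieved = read(noisy, patterns)          # descends to the nearby minimum
```

Running this, `retrieved` lands essentially on `patterns[2]`: the corrupted query sits in that pattern's basin of attraction, so energy descent completes it. The paper's contribution is making this work when the energy is a deep network and the write step must shape its minima via gradient-based meta-learning, rather than the hand-designed energy used here.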
submitted by /u/Quantum_Network