I am currently reading the paper “Memory Aware Synapses: Learning what (not) to forget” (https://arxiv.org/abs/1711.09601) and am trying to figure out how to compute the additional loss term. The weight importance matrix Ω for the current parameters θ is (as I understand it) just the accumulated gradient magnitudes of the individual weights. But how is the penalty Σ_ij Ω_ij (θ_ij − θ*_ij)² computed, specifically θ*_ij? I tried looking through the official git repository but was somehow not able to find the answer. Does anyone have experience with this approach?
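For what it's worth, here is a rough numpy sketch of how I currently read the paper, using a toy linear model y = W x so the gradient is analytic (this is my own illustration, not the authors' code). MAS estimates the importance Ω_ij as the average magnitude of the gradient of the squared output norm g(x) = ||W x||² with respect to each parameter, and θ* would be the parameter snapshot taken after the previous task:

```python
import numpy as np

# Toy MAS sketch (my own illustration, assumptions noted in comments).
# Model: linear map y = W x. For g(x) = ||W x||^2, the gradient with
# respect to W is analytic: d g / d W = 2 (W x) x^T.

rng = np.random.default_rng(0)
W_star = rng.standard_normal((3, 4))   # theta*: parameters after the old task
X = rng.standard_normal((100, 4))      # unlabeled samples used to estimate Omega

# Accumulate |d g / d W| over the data to estimate the importance Omega.
omega = np.zeros_like(W_star)
for x in X:
    grad = 2.0 * np.outer(W_star @ x, x)   # gradient of ||W x||^2 w.r.t. W
    omega += np.abs(grad)
omega /= len(X)

# Penalty added to the new-task loss:
#   lambda * sum_ij Omega_ij * (theta_ij - theta*_ij)^2
lam = 1.0
W_new = W_star + 0.1 * rng.standard_normal(W_star.shape)  # current parameters
penalty = lam * np.sum(omega * (W_new - W_star) ** 2)
print(penalty)
```

Note the penalty is zero when W_new equals W_star, which matches the intuition that you only pay for moving important weights away from their old values.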
submitted by /u/Sp4rk4s