
[D] Memory Aware Synapses (MAS): how to compute additional loss?

I am currently reading the paper “Memory Aware Synapses: Learning what (not) to forget” (https://arxiv.org/abs/1711.09601) and am trying to figure out how to compute the additional loss term. The weight importance matrix Ω for the current parameters θ is (as I understand it) just accumulated from the gradients of the individual weights. But how is the penalty term Ω_ij(θ_ij − θ*_ij)² computed, specifically θ*_ij? I tried looking through the official GitHub repository, but was not able to find the answer. Does anyone have experience with this approach?
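For what it's worth, here is a minimal sketch of how the MAS quantities could be computed, not the paper's official code. It assumes a simple linear model f(x) = Wx so the gradient of the squared L2 norm of the output can be written in closed form (∂‖Wx‖²/∂W = 2(Wx)xᵀ); in the paper, Ω_ij is the average of the absolute value of this gradient over the data, and θ* are the parameters frozen at the end of the previous task. The function and variable names (`mas_importance`, `mas_penalty`, `W_star`, `lam`) are my own, purely illustrative:

```python
import numpy as np

def mas_importance(W, X):
    """Ω_ij: mean over samples of |∂ ||W x||^2 / ∂ W_ij| = mean |2 (W x) x^T|."""
    omega = np.zeros_like(W)
    for x in X:
        y = W @ x                              # model output f(x) = W x
        omega += np.abs(2.0 * np.outer(y, x))  # |gradient of squared L2 norm|
    return omega / len(X)

def mas_penalty(omega, W, W_star, lam):
    """Additional loss: lam * sum_ij Ω_ij * (θ_ij - θ*_ij)^2."""
    return lam * np.sum(omega * (W - W_star) ** 2)
```

Under this reading, θ* (`W_star`) is simply a snapshot of the weights taken when training on the previous task finished, and the total loss for the new task would be `task_loss + mas_penalty(omega, W, W_star, lam)`; if the current weights equal the snapshot, the penalty is zero.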

submitted by /u/Sp4rk4s