
[R] Understanding and Controlling Memory in Recurrent Neural Networks (ICML’19 oral)

This paper shows that RNNs can form long-term memories despite being trained only on short-term tasks with a limited number of timesteps, but that not all memories are created equal. The authors find that each memory is correlated with a dynamical object in the hidden-state phase space, and that the object's properties quantitatively predict its long-term effectiveness. By regularizing the dynamical object, the long-term functionality of the RNN is significantly improved without adding to the computational complexity of training.
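For intuition, here is a minimal, hypothetical sketch of regularizing an RNN's hidden-state dynamics so that the state evolves slowly (and thus retains information) late in a sequence. The class name RNNWithSlowPointReg, the speed-based penalty, and the weight 1e-3 are illustrative assumptions, not the specific regularizer proposed in the paper:

```python
import torch
import torch.nn as nn

# Hypothetical sketch: penalize the hidden-state "speed" ||h_{t+1} - h_t||^2
# so the trajectory settles near a slow region of phase space. This is an
# illustration of the general idea, not the authors' exact method.
class RNNWithSlowPointReg(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        states, _ = self.rnn(x)                      # (batch, time, hidden)
        logits = self.readout(states[:, -1])         # read out from last step
        # squared step-to-step change of the hidden state
        speed = (states[:, 1:] - states[:, :-1]).pow(2).sum(-1)
        reg = speed.mean()                           # encourage slow dynamics
        return logits, reg

# Toy usage: total loss = task loss + lambda * regularizer (lambda assumed)
model = RNNWithSlowPointReg(input_size=3, hidden_size=64, output_size=2)
x = torch.randn(8, 50, 3)                            # 8 sequences of 50 steps
y = torch.randint(0, 2, (8,))
logits, reg = model(x)
loss = nn.functional.cross_entropy(logits, y) + 1e-3 * reg
loss.backward()
```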

Link to PDF: http://proceedings.mlr.press/v97/haviv19a/haviv19a.pdf

Oral: Tue Jun 11th 03:10 PM @ Room 201

Poster: Tue Jun 11th 06:30 PM @ Pacific Ballroom #258

submitted by /u/DoronHaviv12
