[R] Understanding and Controlling Memory in Recurrent Neural Networks (ICML’19 oral)

This paper shows that RNNs are able to form long-term memories despite being trained only on short-term dependencies over a limited number of timesteps, but that not all memories are created equal. The authors find that each memory is correlated with a dynamical object in the hidden-state phase space, and that the object's properties quantitatively predict long-term effectiveness. By regularizing the dynamical object, the long-term functionality of the RNN is significantly improved without adding to the computational complexity of training.

Link to PDF: http://proceedings.mlr.press/v97/haviv19a/haviv19a.pdf
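
For intuition, here is a minimal sketch (not the authors' code) of the general recipe the summary describes: train an RNN on a short-horizon memory task while adding a term that encourages the memory-holding state to sit near a slow/fixed point of the hidden-state dynamics. The PyTorch model, the toy delayed-recall task, and the specific penalty (the squared distance the state moves under one zero-input step) are all illustrative assumptions, not necessarily the paper's exact regularizer.

```python
import torch
import torch.nn as nn

class MemoryRNN(nn.Module):
    def __init__(self, input_size=2, hidden_size=64, output_size=2):
        super().__init__()
        self.rnn = nn.RNN(input_size, hidden_size, batch_first=True)
        self.readout = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        states, h_last = self.rnn(x)                # states: (B, T, H)
        return self.readout(states[:, -1]), h_last  # classify from the final state

def make_batch(batch=32, seq_len=20, n_classes=2):
    """Toy delayed-recall task (assumed): a one-hot cue at the first timestep,
    zeros afterwards; the target is the cue class remembered across the delay."""
    y = torch.randint(n_classes, (batch,))
    x = torch.zeros(batch, seq_len, n_classes)
    x[torch.arange(batch), 0, y] = 1.0
    return x, y

def slowness_penalty(model, h_last):
    """Run one autonomous step (zero input) from the final hidden state and
    penalize the squared distance moved, i.e. the local speed of the dynamics
    at the state that is holding the memory."""
    batch = h_last.shape[1]
    zero_in = torch.zeros(batch, 1, model.rnn.input_size)
    _, h_next = model.rnn(zero_in, h_last)
    return ((h_next - h_last) ** 2).sum(dim=-1).mean()

model = MemoryRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
lam = 0.1  # regularization strength (assumed)

for step in range(500):
    x, y = make_batch()
    logits, h_last = model(x)
    loss = criterion(logits, y) + lam * slowness_penalty(model, h_last)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The idea behind this kind of penalty is that a hidden state which barely moves under the autonomous (zero-input) dynamics sits near a slow or fixed point, so the information stored there should persist well beyond the short training horizon.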

Oral: Tue Jun 11th 03:10 PM @ Room 201

Poster: Tue Jun 11th 06:30 PM @ Pacific Ballroom #258

submitted by /u/DoronHaviv12