
[R] You can find a lot of interesting things in the loss landscape of your neural network

Just sharing a small (and somewhat fun) project I was recently working on: finding different patterns in the loss surface of neural networks. Usually, the landscape around a minimum looks like a pit surrounded by random hills and mountains, but more meaningful ones exist, like in the picture below (check the paper for more results). We discovered that you can find a minimum with (almost) any landscape you like. Interestingly, the found landscape pattern remains valid even on the test set, i.e. it is (most likely) a property that holds for the whole data distribution.
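For anyone unfamiliar with how these landscape pictures are made: the standard trick is to pick two direction vectors in parameter space and evaluate the loss on a 2D grid of offsets around the minimum. Below is a minimal, hedged sketch of that visualization step (not the authors' code; the toy quadratic `loss` stands in for a real network's loss on a dataset, and `theta_star` stands in for the trained weights):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
theta_star = np.zeros(dim)                # stand-in for a trained minimum
H = np.diag(rng.uniform(0.5, 2.0, dim))   # toy positive-definite curvature

def loss(theta):
    # Quadratic bowl: a stand-in for the network's loss on a dataset.
    return 0.5 * theta @ H @ theta

# Two random, orthonormalized directions spanning the 2D slice.
d1 = rng.standard_normal(dim)
d1 /= np.linalg.norm(d1)
d2 = rng.standard_normal(dim)
d2 -= (d2 @ d1) * d1
d2 /= np.linalg.norm(d2)

# Evaluate L(theta* + a*d1 + b*d2) on a grid of (a, b) coordinates.
alphas = np.linspace(-1.0, 1.0, 51)
betas = np.linspace(-1.0, 1.0, 51)
surface = np.array([[loss(theta_star + a * d1 + b * d2)
                     for b in betas] for a in alphas])
# surface can now be fed to e.g. plt.contourf(alphas, betas, surface.T);
# the minimum sits at the center of the grid (a = b = 0).
```

What the paper adds on top of this standard visualization is the optimization objective: the directions and the minimum are trained jointly so that the resulting `surface` matches a target pattern, rather than being random slices.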

Image: https://preview.redd.it/t885u6vosow31.png?width=1810&format=png&auto=webp&s=793644af78a5430368e7a1c05d7b38c6b02ec637

Paper: https://arxiv.org/abs/1910.03867
Code: https://github.com/universome/loss-patterns

submitted by /u/universome