[R] You can find a lot of interesting things in the loss landscape of your neural network

Just sharing a small (and somewhat fun) project I have been working on recently, about finding different patterns in the loss surface of neural networks. Usually the landscape around a minimum looks like a pit surrounded by random hills and mountains, but more meaningful ones exist, like the one in the picture below (check the paper for more results). We discovered that you can find a minimum with (almost) any landscape you like. Interestingly, the found landscape pattern remains valid even on a test set, i.e. it is a property that (most likely) holds for the whole data distribution.

Picture: https://preview.redd.it/t885u6vosow31.png?width=1810&format=png&auto=webp&s=793644af78a5430368e7a1c05d7b38c6b02ec637

Paper: https://arxiv.org/abs/1910.03867
Code: https://github.com/universome/loss-patterns
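
The repository above contains the authors' own code. As a rough illustration of the general idea of probing a loss surface around a trained model, here is a minimal, generic sketch (not the paper's method) that evaluates the loss on a 2D plane spanned by two random directions in weight space; the model, data, and grid below are placeholder assumptions.

```python
# Minimal sketch: loss on a 2D slice of weight space around the current weights.
# This is a generic visualization technique, not the procedure from the paper.
import torch
import torch.nn as nn

def loss_surface_slice(model, loss_fn, data, targets, alphas, betas, seed=0):
    """Evaluate loss_fn on a grid spanned by two random directions in weight space."""
    torch.manual_seed(seed)
    base = [p.detach().clone() for p in model.parameters()]
    # Two random directions, rescaled per parameter tensor to match its norm
    d1 = [torch.randn_like(p) for p in base]
    d2 = [torch.randn_like(p) for p in base]
    d1 = [d * (p.norm() / (d.norm() + 1e-10)) for d, p in zip(d1, base)]
    d2 = [d * (p.norm() / (d.norm() + 1e-10)) for d, p in zip(d2, base)]

    surface = torch.zeros(len(alphas), len(betas))
    with torch.no_grad():
        for i, a in enumerate(alphas):
            for j, b in enumerate(betas):
                # Shift the weights along the plane and record the loss
                for p, w, u, v in zip(model.parameters(), base, d1, d2):
                    p.copy_(w + a * u + b * v)
                surface[i, j] = loss_fn(model(data), targets).item()
        # Restore the original weights
        for p, w in zip(model.parameters(), base):
            p.copy_(w)
    return surface

# Usage with a toy model and random data (placeholders):
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
data, targets = torch.randn(64, 10), torch.randint(0, 2, (64,))
grid = torch.linspace(-1.0, 1.0, steps=21)
surface = loss_surface_slice(model, nn.CrossEntropyLoss(), data, targets, grid, grid)
print(surface.shape)  # torch.Size([21, 21]); plot with e.g. matplotlib contourf
```

Plotting such a grid as a contour or height map gives pictures like the one linked above; the paper goes further by optimizing for minima whose surrounding slice matches a chosen pattern.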

submitted by /u/universome

Toronto AI is a social and collaborative hub that unites AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics, and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.