[R] Neural Networks with non-smooth loss?

I’m a student researcher looking for literature on neural network parameter optimization where the objective loss is non-smooth, meaning that the typical gradient-based methods are ruled out and something like proximal gradient methods is employed instead. Preferably in the context of regression. This condition seems to be commonly ignored in practice.
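
For concreteness, here is a minimal NumPy sketch of the kind of proximal gradient method I have in mind, applied to L1-regularized least squares (ISTA). The data and hyperparameters are toy placeholders, just to show the gradient-step-then-prox-step structure:

    import numpy as np

    def soft_threshold(z, t):
        # Proximal operator of t * ||.||_1 (soft-thresholding).
        return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    def ista(X, y, lam, step, n_iters=500):
        # min_w 0.5 * ||Xw - y||^2 + lam * ||w||_1:
        # gradient step on the smooth part, prox step on the non-smooth part.
        w = np.zeros(X.shape[1])
        for _ in range(n_iters):
            grad = X.T @ (X @ w - y)  # gradient of the smooth term only
            w = soft_threshold(w - step * grad, step * lam)
        return w

    # Toy sparse-regression data, purely illustrative.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 20))
    w_true = np.zeros(20)
    w_true[:3] = [2.0, -1.0, 0.5]
    y = X @ w_true + 0.01 * rng.normal(size=100)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz constant of the gradient
    print(ista(X, y, lam=0.1, step=step)[:5])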

  1. Are non-differentiable losses simply avoided in NNs?
  2. Is there a need for this kind of work from a non-theoretical point of view? That is, do gradient methods still find empirical success even when the smoothness conditions are violated? (A toy example of what I mean is sketched below.)
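
To illustrate question 2: a toy PyTorch sketch where a non-smooth objective (the L1/MAE loss, plus ReLU activations, which are themselves non-smooth) is handed to a standard autograd/SGD loop anyway. Autograd returns a subgradient at the kinks and training proceeds regardless; the architecture and hyperparameters here are arbitrary:

    import torch

    torch.manual_seed(0)
    # Toy regression data; dimensions are arbitrary.
    X = torch.randn(256, 10)
    y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

    model = torch.nn.Sequential(
        torch.nn.Linear(10, 32),
        torch.nn.ReLU(),  # non-smooth at 0
        torch.nn.Linear(32, 1),
    )
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = torch.nn.L1Loss()  # non-smooth (MAE) objective

    for _ in range(1000):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()  # autograd hands back a subgradient at the kinks
        opt.step()
    print(loss.item())  # the loss still decreases despite the non-smoothness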

I have many more questions, but really any direction or reading would be helpful! Thanks!

submitted by /u/groovyJesus