I’m a student researcher looking for literature on neural network parameter optimization where the objective loss is non-smooth, meaning that typical gradient-based methods are ruled out and something like proximal gradient methods must be employed instead. Preferably in the context of regression. This condition seems to be commonly ignored in practice.
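For concreteness, here is a minimal sketch of the kind of proximal gradient method I mean, on the canonical composite setting (smooth squared-error loss plus a non-smooth L1 term, i.e., ISTA for the lasso). Everything here is an illustrative assumption on my part (the names `X`, `y`, `lam`, the step size choice), not from any particular paper:

```python
import numpy as np

# Illustrative sketch: proximal gradient (ISTA) for the composite objective
#   f(w) = 0.5 * ||X w - y||^2  +  lam * ||w||_1,
# where the L1 term is non-smooth, so plain gradient descent does not apply.
# All names and data below are hypothetical, for demonstration only.

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(X, y, lam, n_iters=500):
    # Step size 1/L, where L = ||X||_2^2 is the Lipschitz constant
    # of the gradient of the smooth (squared-error) part.
    L = np.linalg.norm(X, ord=2) ** 2
    step = 1.0 / L
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (X @ w - y)                          # gradient of smooth term
        w = soft_threshold(w - step * grad, step * lam)   # prox step on L1 term
    return w

# Toy usage with synthetic regression data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.05 * rng.standard_normal(100)
print(np.round(ista(X, y, lam=0.1)[:5], 3))
```

My question is about the harder case where the loss itself (not just a regularizer) is non-smooth, and about how this carries over to neural network training.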
I have many more questions, but really any direction or references would be helpful! Thanks!
submitted by /u/groovyJesus