Learn About Our Meetup

5000+ Members

MEETUPS

LEARN, CONNECT, SHARE

Join our meetup, learn, connect, share, and get to know your Toronto AI community. 

JOB POSTINGS

INDEED POSTINGS

Browse the latest deep learning, AI, and machine learning postings from Indeed for the GTA.

CONTACT

CONNECT WITH US

Looking to sponsor space, be a speaker, or volunteer? Feel free to give us a shout.

[D] Alternatives to Backpropagation

Since it is now widely accepted that backpropagation is not a biologically plausible approach, I would like to start a discussion around alternatives to the method.
In my mind, a cool idea would be to evaluate the outputs of each layer individually, i.e., ask what we should expect to see as the output of hidden layer number L. This would remove the need for backward sweeps (because a layer's 'accuracy' would depend only on itself) and make transfer learning a lot easier: if learning happens layer by layer, we can put pieces together for a similar task, with minor adjustments if necessary, e.g. the first layer of a CNN that identifies cats might be useful for identifying other felines.
However, I can't think of how we could achieve that, because, as I see it, it would require labels (or at least some target representations to compare our outputs against) for every layer, and I don't think many labels are needed when we humans learn.
Anyway, I’d love to hear ideas from other minds, as I think this is the best way for us to come up with newer ideas.
Cheers guys, have a good one 🙂
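One existing direction that matches this layer-local idea is greedy layer-wise training with auxiliary local losses: each hidden layer gets its own small readout, and its weights are updated only from that local loss, so no gradient ever sweeps backward across layers, and a trained layer can be frozen and reused. Below is a minimal NumPy sketch of that scheme; the data, layer widths, and learning rate are all hypothetical choices for illustration, not a reference implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_layer(x, y_onehot, width, lr=0.5, steps=200):
    """Train one ReLU layer plus a LOCAL softmax readout on (x, y).

    The gradient of the local cross-entropy loss updates only this
    layer's weights W and its readout R -- it never crosses layers.
    """
    n_in, n_cls = x.shape[1], y_onehot.shape[1]
    W = rng.normal(0, 0.1, (n_in, width))    # this layer's weights
    R = rng.normal(0, 0.1, (width, n_cls))   # auxiliary local readout
    for _ in range(steps):
        h = np.maximum(x @ W, 0.0)           # forward through this layer only
        p = softmax(h @ R)
        g = (p - y_onehot) / len(x)          # local cross-entropy gradient
        gh = (g @ R.T) * (h > 0)             # gradient stops at this layer
        R -= lr * h.T @ g
        W -= lr * x.T @ gh
    return W

# Toy two-class problem (hypothetical data, for illustration only).
x = rng.normal(size=(200, 10))
y = (x[:, 0] + x[:, 1] > 0).astype(int)
y1h = np.eye(2)[y]

# Greedy stack: layer 2 is trained on layer 1's frozen outputs.
W1 = train_layer(x, y1h, width=16)
h1 = np.maximum(x @ W1, 0.0)
W2 = train_layer(h1, y1h, width=16)
```

Because each layer is trained and frozen independently, a layer like W1 could in principle be lifted out and reused as the first stage of a network for a related task, which is the transfer-by-recombination idea described above.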

submitted by /u/Berdas_


Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.