Learn About Our Meetup

5000+ Members

Join our meetup, learn, connect, share, and get to know your Toronto AI community. 

Browse through the latest deep learning, AI, and machine learning postings from Indeed for the GTA.

If you are looking to sponsor space, be a speaker, or volunteer, feel free to give us a shout.

[Research] Question Regarding “Deep Convolutional Spiking Neural Networks for Image Classification”

The paper can be found here:

I am currently investigating research into Spiking Neural Networks (SNNs) that use spike-timing-dependent plasticity (STDP) learning, initially for image processing.

I have now read several papers that discuss "Spiking Convolutional Neural Networks", and I cannot understand how these networks can be convolutional in nature, at least in the same way that backprop-trained CNNs are.

The kernels in a standard CNN are trained by backprop against every possible patch in the preceding layer, so a single kernel can detect its feature at any position in that layer.
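The weight sharing described above can be sketched in a few lines of NumPy (a minimal illustration, not taken from the paper): one kernel, one set of weights, applied at every patch location.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over every patch (stride 1, no padding)."""
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # same weights at every location
    return out

# A vertical-edge kernel responds wherever the feature appears in the input.
image = np.zeros((5, 5))
image[:, 2] = 1.0                               # vertical line at column 2
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)       # one shared 3x3 kernel
response = conv2d_valid(image, kernel)
```

Because every output position reuses the same `kernel` array, the feature detector is position-independent, which is the property in question for the spiking case.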

While you can certainly use "kernels" with these spiking networks, they would only apply to a fixed patch of the image, right? If you wanted a kernel that ran over all the patches, your "kernel neuron" would end up with a link from every neuron in the previous layer, which would be a mess. Alternatively, you would need to duplicate this kernel neuron some number of times, depending on the size of the input, and find some way to keep those duplicates' input weights identical.

What am I misunderstanding here? Do you simply end up with a whole pile of duplicated kernels, one per patch? That would certainly work, but is it optimal?
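The duplicate-kernel scheme asked about above can be sketched with weight tying: every "duplicated kernel neuron" reads from one shared weight array, and any plasticity update is written back into that same array, so the copies never drift apart. This is a minimal sketch under assumed, heavily simplified integrate-and-fire and STDP dynamics; the function names and constants are mine, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# One shared kernel: all patch locations read their weights from this array.
shared_kernel = rng.uniform(0.2, 0.8, size=(3, 3))

def spiking_conv_step(spikes, kernel, potentials, threshold=2.0):
    """Integrate one frame of binary input spikes at every patch location."""
    kh, kw = kernel.shape
    out_spikes = np.zeros_like(potentials)
    for i in range(potentials.shape[0]):
        for j in range(potentials.shape[1]):
            patch = spikes[i:i + kh, j:j + kw]
            potentials[i, j] += np.sum(patch * kernel)  # shared weights
            if potentials[i, j] >= threshold:
                out_spikes[i, j] = 1.0
                potentials[i, j] = 0.0  # reset after firing
    return out_spikes

def stdp_update(kernel, pre_patch, a_plus=0.05, a_minus=0.02):
    """Simplified STDP rule (illustrative): potentiate weights whose input
    spiked before the output spike, depress the rest. The update is applied
    to the one shared array, so every location sees the new weights."""
    kernel += np.where(pre_patch > 0, a_plus * (1 - kernel), -a_minus * kernel)
    np.clip(kernel, 0.0, 1.0, out=kernel)

spikes = (rng.random((6, 6)) < 0.5).astype(float)   # one frame of input spikes
potentials = np.zeros((4, 4))                        # one neuron per patch
out = spiking_conv_step(spikes, shared_kernel, potentials)

# When a neuron at (i, j) fires, update the shared kernel with its input
# patch; the feature is then detected better at every location at once.
fired = np.argwhere(out > 0)
if len(fired):
    i, j = fired[0]
    stdp_update(shared_kernel, spikes[i:i + 3, j:j + 3])
```

Under this tying scheme, "a whole pile of duplicate kernels" and "one kernel applied everywhere" are the same thing: the duplication is only in connectivity, not in stored weights.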

submitted by /u/bitcoin_analysis_app

Toronto AI is a social and collaborative hub to unite AI innovators of Toronto and surrounding areas. We explore AI technologies in digital art and music, healthcare, marketing, fintech, VR, robotics and more. Toronto AI was founded by Dave MacDonald and Patrick O'Mara.