
[Research] Question Regarding “Deep Convolutional Spiking Neural Networks for Image Classification”

Paper can be found here: https://arxiv.org/pdf/1903.12272.pdf

I am currently investigating research on Spiking Neural Networks trained with spike-timing-dependent plasticity (STDP), initially for image processing.

I have now read several papers that discuss “Spiking Convolutional Neural Networks”, and I cannot understand how these networks can be convolutional in nature, at least in the same way that backprop-trained CNNs are.

The kernels in a standard CNN are trained by backprop against every possible patch of the preceding layer, with the same weights applied at every location, so a single kernel can detect its feature at any point in that layer.
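
To make concrete what I mean by that weight sharing, here is a minimal NumPy sketch (my own illustration, not taken from the paper) of one kernel being applied to every patch of an input:

```python
# Minimal sketch of weight sharing in a standard conv layer:
# one kernel, applied identically to every patch of the input.
import numpy as np

def conv2d_valid(image, kernel):
    """Slide a single shared kernel over every patch of the image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(patch * kernel)  # same weights at every location
    return out

image = np.random.rand(8, 8)
edge_kernel = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])
feature_map = conv2d_valid(image, edge_kernel)  # responds to the feature anywhere
```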

While you can definitely use “kernels” with these spiking networks, wouldn’t each one only apply to a fixed patch of the image? If you wanted something that ran over all the patches, your “kernel neuron” would end up with a connection from every neuron in the previous layer, which would be a mess. Alternatively, you would need to duplicate this kernel neuron some number of times depending on the size of the input, and find some way to keep all of these duplicates sharing the same input weights.
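
Here is a rough Python sketch of the kind of arrangement I am imagining for that second option: one integrate-and-fire neuron per spatial location, all of them reading from a single shared kernel, with a simplified STDP update accumulated over the locations that fired and applied once to that shared kernel. The class, thresholds, and the STDP rule here are my own assumptions for illustration, not something lifted from the paper:

```python
# Rough sketch (my assumption, not from the paper) of a spiking conv layer
# with tied weights: every spatial location has its own integrate-and-fire
# neuron, but they all read ONE shared kernel, and the STDP update is
# accumulated across locations so the copies never drift apart.
import numpy as np

class SpikingConvLayer:
    def __init__(self, kernel_size, threshold=4.0, a_plus=0.01, a_minus=0.003):
        self.k = kernel_size
        self.w = np.random.rand(kernel_size, kernel_size)  # one shared kernel
        self.threshold = threshold
        self.a_plus, self.a_minus = a_plus, a_minus

    def forward(self, spikes):
        """spikes: binary map of input spikes for the current time step."""
        k = self.k
        out_h, out_w = spikes.shape[0] - k + 1, spikes.shape[1] - k + 1
        potentials = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = spikes[i:i + k, j:j + k]
                potentials[i, j] = np.sum(patch * self.w)  # shared weights
        return potentials >= self.threshold  # output spike map

    def stdp_update(self, pre_spikes, post_spikes):
        """Accumulate a simplified STDP update over every location that fired,
        then apply it once to the shared kernel."""
        k = self.k
        dw = np.zeros_like(self.w)
        for i, j in zip(*np.nonzero(post_spikes)):
            patch = pre_spikes[i:i + k, j:j + k]
            # potentiate weights whose input spiked, depress the rest
            dw += np.where(patch > 0, self.a_plus, -self.a_minus)
        self.w = np.clip(self.w + dw, 0.0, 1.0)
```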

What am I misunderstanding here? Do you simply end up with a whole pile of duplicate kernels, one per patch? That would certainly work, but is it optimal?

submitted by /u/bitcoin_analysis_app