[D] Convergent evolution in next-generation neural networks?

In the research quest for next-generation neural networks, it seems like two "schools" of thinking are converging on something similar despite approaching the problem from different angles. I'm referring to the HTM/sparse coding/bio-inspired group and the "classical" deep learning group.

Historically, the principles espoused by the former group (e.g. Numenta, Ogma Neo, Sparsey) have not gotten much traction in the ML community due to a lack of experimental results.

However, more recently, approaches from within the deep learning community have begun to resemble the former group's. For example, OpenAI is now intensely interested in extremely sparse networks (https://supercomputersfordl2017.github.io/Presentations/SmallWorldNetworkArchitectures.pdf), and Geoff Hinton is working on capsules, which are at least partially inspired by cortical columns (https://numenta.com/blog/2017/12/18/comparing-capsules-with-htm/) and have some similarities to them.
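For anyone unsure what "extremely sparse" means in practice, here is a minimal, purely illustrative sketch of a k-winners-take-all activation, the kind of mechanism used in HTM-style systems and in sparse deep networks. The 2% sparsity level and the function itself are my own assumptions for illustration, not taken from either group's code:

```python
import numpy as np

def k_winners_take_all(x, sparsity=0.02):
    """Keep only the top-k activations in x; zero out the rest.

    sparsity is the fraction of units allowed to remain active,
    e.g. 0.02 means ~2% of units fire -- an illustrative value,
    not one taken from any particular paper.
    """
    k = max(1, int(round(sparsity * x.size)))
    # Threshold at the k-th largest activation.
    threshold = np.partition(x.ravel(), -k)[-k]
    return np.where(x >= threshold, x, 0.0)

# Example: a layer output with 1000 units, of which roughly 20 survive.
activations = np.random.randn(1000)
sparse_activations = k_winners_take_all(activations, sparsity=0.02)
print(np.count_nonzero(sparse_activations))  # roughly 20
```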

Just as interestingly, it appears that some non-backpropagation, local learning algorithms, such as Hebbian learning (https://arxiv.org/abs/1908.08993), can actually scale to CIFAR and a small version of ImageNet. Of course, these results are very preliminary, but they are at least somewhat interesting.
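As a rough illustration of what a local, non-backpropagation update looks like, here is the textbook Hebbian update with Oja's normalization, applied independently to each output neuron. This is a generic sketch, not the specific rule from the linked paper, and the learning rate and shapes are arbitrary choices for the example:

```python
import numpy as np

def oja_update(W, x, lr=1e-3):
    """One Hebbian-style update with Oja's normalization term.

    W : (n_out, n_in) weight matrix
    x : (n_in,) input vector
    The update uses only locally available quantities (pre- and
    post-synaptic activity); no error signal is backpropagated.
    """
    y = W @ x                                   # post-synaptic activity
    # Oja's rule per neuron: dW_ij = lr * y_i * (x_j - y_i * W_ij),
    # which keeps the weight norms bounded as learning proceeds.
    dW = lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W + dW

# Toy usage: update the weights on a stream of random inputs.
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 64))
for _ in range(1000):
    W = oja_update(W, rng.normal(size=64))
```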

So I'd like to hear people's thoughts on this. Perhaps we are seeing an interesting convergent-evolution phenomenon, where the approaches to next-generation NNs end up being somewhat similar.

submitted by /u/darkconfidantislife