
[R] Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training

As there have been some interesting discussions on the alternatives to backpropagation lately (e.g. this reddit thread), I am sharing our latest work just made available on arXiv:

Learning without feedback: Direct random target projection as a feedback-alignment algorithm with layerwise feedforward training

arXiv link | PyTorch code

Summary: Building on feedback-alignment algorithms, we show how to train multi-layer neural networks using random projections of the target vector, which enables layerwise weight updates using only local, feedforward information. The proposed algorithm is called direct random target projection (DRTP). While backpropagation (BP) requires symmetric forward and backward weights (the weight transport problem) and suffers from update locking (no weight can be updated until both the forward and backward passes have completed), DRTP solves both problems, moving toward higher biological plausibility and low-cost hardware implementation. Indeed, estimating the layerwise loss gradients only requires a label-dependent random vector selection, which makes DRTP well suited to adaptive smart sensors and edge computing, where power and compute resources are limited. Despite its simplicity, we demonstrate on the MNIST and CIFAR-10 datasets that DRTP performs close to BP, feedback alignment (FA), and direct feedback alignment (DFA).
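To make the core idea concrete, here is a minimal NumPy sketch of DRTP-style training on a hypothetical toy problem (two Gaussian blobs). It is an illustration of the mechanism, not the paper's implementation: the hidden layer's error signal is a fixed random projection of the one-hot target, available at forward time, while the output layer sees the true error directly. All names, dimensions, and hyperparameters here are assumptions for the sketch; see the linked PyTorch code for the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two Gaussian blobs, one-hot labels
n, d, h, c = 200, 2, 16, 2
X = np.vstack([rng.normal(-1.0, 0.5, (n // 2, d)),
               rng.normal(+1.0, 0.5, (n // 2, d))])
Y = np.zeros((n, c))
Y[: n // 2, 0] = 1.0
Y[n // 2 :, 1] = 1.0

W1 = rng.normal(0, 0.5, (d, h))   # hidden-layer weights
W2 = rng.normal(0, 0.5, (h, c))   # output-layer weights
B1 = rng.normal(0, 0.5, (c, h))   # fixed random projection: target -> hidden error

lr = 0.05
for _ in range(200):
    # Forward pass
    a1 = np.tanh(X @ W1)
    z2 = a1 @ W2
    z2 -= z2.max(axis=1, keepdims=True)           # numerically stable softmax
    yhat = np.exp(z2) / np.exp(z2).sum(axis=1, keepdims=True)

    # DRTP-style hidden update: the error signal is a random projection of the
    # *target* (no backward pass, no weight transport), modulated by the local
    # tanh derivative
    e1 = (Y @ B1) * (1.0 - a1 ** 2)
    W1 -= lr * X.T @ e1 / n

    # Output layer has direct access to the true error (as in the paper)
    e2 = yhat - Y
    W2 -= lr * a1.T @ e2 / n

acc = float((yhat.argmax(axis=1) == Y.argmax(axis=1)).mean())
print(f"train accuracy: {acc:.2f}")
```

Note that each layer's update uses only quantities available during the forward pass (its input, its activation, and the label), which is what removes update locking.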

The PyTorch code (link above) also includes implementations of FA and DFA.

Feedback is welcome!

submitted by /u/Neurom0rph