[D] Is Neuroscience background useful for ML research?

Is there anybody with a neuroscience background who moved to ML research? Any ML researchers who deliberately decided to learn some neuroscience to get new ideas?

Was learning neuroscience worth it for you, or would it have been better to focus purely on ML?

I am finishing my CS undergrad and deciding between two options for grad school:

1) PhD in Optimization/ML theory

2) MS in Neuroinformatics (and then eventually going for PhD in ML theory)

I am generally interested in learning neuroscience and understanding how the brain works. However, it seems the theory is not quite there yet, and I do not want to work on the experimental biological side. I ultimately want to work on ML theory, as I think ML has the most impact here and now, while it will likely take decades until neuroscience is sufficiently developed. That is why I am considering learning some core neuroscience concepts and then trying to apply them to find novel ideas for ML.

The Neuroinformatics MS program is quite flexible and will allow me to focus primarily on ML and open-ended research, while 1/3 of my courses will be in neuroscience. I will work on bio-plausible backprop (some references are here; a minimal sketch of one such approach is below) and maybe spiking neural networks. Somewhat unrelatedly, being there may also give me some insight into brain-computer interface research, an area of growing interest.
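
To make "bio-plausible backprop" concrete, here is a minimal NumPy sketch of feedback alignment (Lillicrap et al., 2016), one well-known variant: the backward pass routes the error through a fixed random matrix instead of the transpose of the forward weights, sidestepping the weight-transport problem. The two-layer network, the XOR-style toy task, and all hyperparameters are my own illustrative assumptions, not anything specific to the program or references above.

    import numpy as np

    # Sketch of feedback alignment: the error is routed backwards through a
    # fixed random matrix B instead of W2.T (the weight-transport step that
    # is considered biologically implausible). Network size, task, and
    # learning rate are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_in, n_hid, n_out, lr = 4, 16, 1, 0.5

    W1 = rng.normal(0.0, 0.5, (n_hid, n_in))   # forward weights, layer 1
    W2 = rng.normal(0.0, 0.5, (n_out, n_hid))  # forward weights, layer 2
    B = rng.normal(0.0, 0.5, (n_hid, n_out))   # fixed random feedback weights

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Toy task (assumed): predict the XOR of the first two input bits.
    X = rng.integers(0, 2, (64, n_in)).astype(float)
    y = (X[:, :1] != X[:, 1:2]).astype(float)

    for _ in range(5000):
        h = sigmoid(X @ W1.T)    # hidden activations
        out = sigmoid(h @ W2.T)  # network output
        e = out - y              # output error (sigmoid + cross-entropy)

        # The bio-plausible step: B replaces W2.T in the backward pass.
        dh = (e @ B.T) * h * (1.0 - h)

        W2 -= lr * (e.T @ h) / len(X)
        W1 -= lr * (dh.T @ X) / len(X)

    print("final MSE:", float(np.mean((out - y) ** 2)))

Empirically this works because the forward weights gradually align with B during training; swapping B for W2 in the backward pass recovers standard backprop.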

I think doing that MS would give me a more diverse background and ideas for further research in ML theory, and open more doors. However, I am somewhat concerned about whether it is worth it: wouldn't doing pure ML leave me in a better position?

Also, I am a bit sceptical about bio-plausible ML research. While it is really interesting, it seems to be a bit of a “toy” problem. We don’t even know if something like backprop happens in the brain, so trying to make it more “bio-plausible” for its own sake is somewhat of an artificial problem.

There was a related discussion: [D] Computational Neuroscience and Machine Learning

submitted by /u/Slayer10101