
Machine learning for MEG during speech tasks

Vector Faculty Member Frank Rudzicz calls language a “lens into one’s cognition,” because how and what we say can reveal a lot about how we feel and how we think. In recently published research, Rudzicz, his PhD student at Vector, Demetres Kostas, and collaborator Elizabeth Pang consider how one’s brain signals can provide a lens into how we speak.

Specifically, the team investigated whether a deep neural network trained on recorded brain signals could predict the ages of 92 children (aged 4-18) performing speech tasks such as verb generation and specified babble. The recordings were made with magnetoencephalography (MEG), a large device that non-invasively reads brain signals through 151 sensors positioned across the scalp while a person speaks. The model achieved 95% accuracy on a binary task identifying a child’s age group from brain signals alone, and the results suggest that the deep neural network makes its predictions based on differences in speech development: how healthy kids learn to speak, as revealed by their brain signals. The team also highlighted the differences between traditional machine learning and modern deep learning methods on tasks such as this.
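To make the task concrete, here is a toy sketch of binary classification over MEG-shaped recordings (151 sensors, each with a window of time samples per trial). Everything here is an illustrative assumption: the synthetic data, the mean-power feature, and the simple threshold classifier are stand-ins, not the deep neural network or real recordings used in the study.

```python
import random

N_SENSORS = 151   # MEG channels, per the blog post
N_SAMPLES = 200   # time samples per trial (hypothetical window length)

def make_trial(older, rng):
    """Synthetic stand-in for one MEG recording: here, older children get
    slightly higher signal power (a toy assumption, not the real data)."""
    scale = 1.2 if older else 1.0
    return [[rng.gauss(0.0, scale) for _ in range(N_SAMPLES)]
            for _ in range(N_SENSORS)]

def power_feature(trial):
    """Mean squared amplitude across all sensors and time points."""
    total = sum(x * x for channel in trial for x in channel)
    return total / (N_SENSORS * N_SAMPLES)

rng = random.Random(0)
trials = [(make_trial(older, rng), older) for older in [False, True] * 50]

# Fit the simplest possible classifier: a threshold halfway between
# the mean power of each class.
feats = {c: [power_feature(t) for t, lbl in trials if lbl == c]
         for c in (False, True)}
threshold = (sum(feats[False]) / len(feats[False]) +
             sum(feats[True]) / len(feats[True])) / 2

correct = sum((power_feature(t) > threshold) == lbl for t, lbl in trials)
accuracy = correct / len(trials)
```

On this synthetic data a single power feature already separates the two groups; the study’s point is that real developmental differences in MEG signals are far subtler, which is where deep networks outperform hand-engineered features like this one.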

This work is a first step toward understanding how speech originates in the brain. While the next steps are still fairly theoretical, Demetres and the team are interested in applying methods from ‘explainable AI’ to make the interpretation of brain signals more understandable to clinicians and researchers. Being able to map healthy speech production would have a variety of uses, including helping people who have difficulty speaking communicate through computer interfaces.

 

Check out the full paper here: Machine learning for MEG during speech tasks