[D] Would something like this be feasible?
What if someone wore a portable brain scanner (something like a wearable fMRI) along with a Google Glass-style device with a camera, microphone, etc., and then just went about their life for a significant amount of time? You would end up with a large labeled dataset pairing their brain imaging with the major sensory inputs they were experiencing at each moment, so you could train a model that reconstructs their real-time brain activity into a video that others could watch. Theoretically, you could then do things like translate the brain activity in someone's dreams into a video, or otherwise read people's minds.
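In ML terms, the idea boils down to supervised decoding: align each brain-imaging sample with the video frame the wearer was seeing at that moment, then fit a model that maps brain activity to frames. Below is a minimal sketch of that setup, assuming PyTorch; the class names (`FmriFramePairs`, `FrameDecoder`), data shapes, and the plain MSE objective are all placeholders for illustration, not how published brain-decoding work actually does it (real systems lean on generative models and perceptual losses).

```python
# Hypothetical sketch: "brain decoding" framed as paired regression.
# Assumes you already have time-aligned fMRI volumes and video frames.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

class FmriFramePairs(Dataset):
    """Pairs each fMRI sample (flattened voxels) with the frame seen at that moment."""
    def __init__(self, fmri, frames):
        self.fmri = fmri        # shape: (N, n_voxels)
        self.frames = frames    # shape: (N, 3, 64, 64), values in [0, 1]

    def __len__(self):
        return len(self.fmri)

    def __getitem__(self, i):
        return self.fmri[i], self.frames[i]

class FrameDecoder(nn.Module):
    """Maps a voxel vector to a low-resolution RGB frame."""
    def __init__(self, n_voxels, out_hw=64):
        super().__init__()
        self.out_hw = out_hw
        self.net = nn.Sequential(
            nn.Linear(n_voxels, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * out_hw * out_hw), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1, 3, self.out_hw, self.out_hw)

if __name__ == "__main__":
    # Synthetic stand-in data: 256 timepoints, 5000 voxels, 64x64 frames.
    fmri = torch.randn(256, 5000)
    frames = torch.rand(256, 3, 64, 64)
    loader = DataLoader(FmriFramePairs(fmri, frames), batch_size=32, shuffle=True)

    model = FrameDecoder(n_voxels=5000)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()  # real work typically uses perceptual/generative losses

    for epoch in range(5):
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")
```

The hard parts the sketch glosses over are exactly the ones the post raises: getting wearable imaging with useful spatial/temporal resolution, aligning it to the camera feed, and generalizing from waking perception to dreams, where there is no ground-truth video to supervise against.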