Scientists at Carnegie Mellon University have created a machine learning technique that uses brain activation patterns to identify complex thoughts and sentences, in effect an ability to “mind read.”
The study assessed the neural activity of seven participants as they read 239 different sentences, such as “The witness shouted during the trial.” Each sentence was broken down into neurally plausible semantic features, such as person, setting, social interaction, size and physical interaction. Each of these features activated different areas of the brain, and the model learned those feature-to-activation mappings. The model was then able to identify the features of an additional sentence, despite never having seen that specific activation pattern before. After repeated cross-validation, it reached an 87 percent accuracy rate. The model could also work in the other direction, producing the activation pattern of a previously unseen sentence.
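The decoding scheme described above can be illustrated with a toy simulation. The sketch below is purely hypothetical and not the researchers' actual pipeline: synthetic “activation patterns” are generated as noisy mixtures of binary semantic features, a ridge regression learns the feature-to-activation mapping, and each held-out sentence is identified by matching its observed pattern against the model's predictions. All numbers and variable names are illustrative assumptions.

```python
# Hypothetical sketch of feature-based sentence identification from
# activation patterns. Synthetic data only; not the study's real pipeline.
import numpy as np

rng = np.random.default_rng(0)
n_sentences, n_features, n_voxels = 239, 42, 200

# Each sentence is a binary vector of semantic features
# (person, setting, social interaction, size, ...).
features = rng.integers(0, 2, size=(n_sentences, n_features)).astype(float)

# Assume each feature contributes a characteristic activation signature,
# and observed patterns are noisy sums of those signatures.
signatures = rng.normal(size=(n_features, n_voxels))
activations = features @ signatures + 0.5 * rng.normal(size=(n_sentences, n_voxels))

def ridge_fit(X, Y, lam=1.0):
    """Closed-form ridge regression: W = (X'X + lam*I)^-1 X'Y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Leave-one-out cross-validation: train on all sentences but one, then
# check whether the held-out pattern is closest to its own prediction.
correct = 0
for i in range(n_sentences):
    mask = np.arange(n_sentences) != i
    W = ridge_fit(features[mask], activations[mask])
    predicted = features @ W  # predicted patterns for all sentences
    # Identify the held-out sentence by cosine similarity to predictions.
    sims = predicted @ activations[i] / (
        np.linalg.norm(predicted, axis=1) * np.linalg.norm(activations[i]))
    correct += int(np.argmax(sims) == i)

accuracy = correct / n_sentences
print(f"identification accuracy: {accuracy:.2f}")
```

Because the mapping is learned over features rather than whole sentences, the model can score a sentence it never saw during training, which is the key property the study exploits.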
“One of the big advances of the human brain was the ability to combine individual concepts into complex thoughts, to think not just of ‘bananas,’ but ‘I like to eat bananas in the evening with my friends,’” said Marcel Just, the lead researcher for the project at CMU. “We have finally developed a way to see thoughts of that complexity in the fMRI signal. The discovery of this correspondence between thoughts and brain activation patterns tells us what the thoughts are built of.”
Just went on to explain what the advance might lead to in the future:
Our method overcomes the unfortunate property of fMRI to smear together the signals emanating from brain events that occur close together in time, like the reading of two successive words in a sentence… This advance makes it possible for the first time to decode thoughts containing several concepts. That’s what most human thoughts are composed of… A next step might be to decode the general type of topic a person is thinking about, such as geology or skateboarding. We are on the way to making a map of all the types of knowledge in the brain.
Some of Just’s previous work was key to creating this new technology. He and his team found that thinking of a familiar object activates not only visual areas of the brain but also areas encoding how you interact with it: think of a banana, for example, and you also represent how you hold, bite and peel it. Moreover, the results of this study not only demonstrate the ability to predict people’s thoughts from their brain scans, but also provide further evidence that concept representations are, at the neural level, universal across people, languages and cultures.
Jack Hadfield is a student at the University of Warwick and a regular contributor to Breitbart Tech. You can like his page on Facebook and follow him on Twitter @ToryBastard_ or on Gab @JH.