There’s a cool new paper on brain-to-brain interfaces, and I’m sure it will be misinterpreted in the media, just like every other paper of this genre. 🙂
There have been a couple of demonstrations of brain-to-brain interfaces before, and I’ve written about one of the previous papers in Finnish. The idea of a BBI is that information about one participant’s brain state is read (typically using EEG), and the brain state of another participant is then manipulated, typically using TMS. At least in the press, these demonstrations are often termed “telepathy” or “mind-reading”, and illustrated with images from Star Trek, X-Men or just people in tinfoil hats. Often the implications of these studies are posed as questions: will we soon be able to communicate with each other directly, without language? Will all our brains be connected together to form a giant common consciousness?
The answer to both questions is no. (At the moment, at least.) While these papers are very clever, they are also a bit gimmicky. They are relevant, cutting-edge scientific research, and the results, along with results from related research on brain-computer interfaces (BCIs), can be useful for example in helping people who can’t express themselves in language or movement to communicate. But they are not about telepathy, and not even very impressive in terms of mind-reading. We all read more from other people’s minds in a second, just by looking at their faces, than these interfaces do in a day. Not only is the data transfer speed very slow, but above all the content of what is being read is, as of yet, rather uninteresting.
The main thing to understand here (apart from the fact that these studies are awesome and cool) is that what is “read” from one brain and then “written” to another is just an artificial representation of the information available to the participants. In other words, instead of rich concepts such as “dog” or “shark”, what is transmitted is binary information (yes-no), using brain processes that have nothing to do with those concepts.
Let me unpack this. In this latest study, the “respondent” participant was given an animal to “think about”, e.g. “dog” or “shark”. Crucially, the BBI does not depend on figuring out how that person thinks about a dog, or where and how information about dogs is stored in the brain. Instead, the “inquirer”, the other participant, asked a yes-no question about the animal, e.g. whether it is a pet or not, and the respondent answered by looking at one of two flashing lights, labelled yes and no, respectively.
So yes, the respondent was thinking about either a dog or a shark, and selected which light to stare at based on which animal they were thinking about, but the information that was extracted from their brain had nothing to do with either animal, and everything to do with the frequency at which the two lights were flashing: the yes light flashed 12 times per second, the no light one hertz faster, at 13 times per second. The researchers then monitored the activity of the respondent’s visual cortex with EEG, a relatively straightforward thing to do, and as the electrical activity in the visual cortex is coupled with the visual stimulus, they could figure out which light the respondent was looking at.
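The frequency-following trick above can be sketched in a few lines: compare the power of the EEG signal at the two flicker frequencies and pick the larger. This is only an illustrative toy, not the paper’s actual pipeline; the sampling rate, signal length and noise level are all my assumptions.

```python
import numpy as np

FS = 256                      # assumed EEG sampling rate in Hz (illustrative)
YES_HZ, NO_HZ = 12.0, 13.0    # flicker frequencies of the yes and no lights

def classify_answer(eeg, fs=FS):
    """Guess which light the respondent stared at by comparing
    spectral power at the two flicker frequencies (the SSVEP idea)."""
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(n)))   # windowed FFT magnitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    power_at = lambda f: spectrum[np.argmin(np.abs(freqs - f))]
    return "yes" if power_at(YES_HZ) > power_at(NO_HZ) else "no"

# Simulate two seconds of noisy occipital EEG driven by the 12 Hz (yes) light
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * YES_HZ * t) + 0.5 * rng.standard_normal(len(t))
print(classify_answer(eeg))   # prints "yes": the 12 Hz peak dominates
```

Two seconds of data give 0.5 Hz frequency resolution, so the 12 and 13 Hz bins are cleanly separable even with this crude peak-picking; a real system would use more robust spectral estimation and multiple electrodes.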
So, nothing about dogs, but still a cool signal-processing feat to differentiate between whether the participant looked at a 12 Hz or a 13 Hz light. Now, this yes-no information was passed to a computer, which controlled a transcranial magnetic stimulation (TMS) device. This is a machine with a coil that is placed on top of the patient’s or participant’s head, and which delivers magnetic pulses focused on a specific part of the participant’s cortex. Depending on the stimulation frequency, it either increases or decreases the activity there. This allows neuroscientists to study causal links between parts of the cortex and cognitive abilities: by stimulating Broca’s area in the inferior frontal gyrus, one can for example disrupt the process of naming an object that the participant sees, thereby showing that this part of the cortex is a vital link in producing speech.
If one stimulates the visual cortex, the participant reports seeing a bright light, with the position of the stimulation correlating with the position in the field of view where the light appears. These TMS-induced “hallucinations” are called phosphenes, and they were used in this study to “write”, or deliver, the yes-no information from the respondent back to the inquirer. Each participant has a threshold for how intense the TMS pulse must be before phosphenes appear. Stimulate the same location with less intense pulses, and phosphenes won’t appear. Here’s the yes-no: again, nothing to do with dogs or other animals, just a binary signal that the participant is able to detect and use to figure out whether the answer to their question was yes or no.
So, to summarise, in this cool demonstration one participant was given an animal to think about, and they had to answer three yes-no questions by looking at one of two flashing lights. EEG over the visual cortex was used to differentiate which light the person was looking at. This answer was delivered back to the inquirer by TMS, stimulating either above or below their phosphene threshold, and the inquirer could then interpret the answer as a yes or a no, based on whether they experienced the shinies or not. So, ones and zeros instead of animals, and flashing lights instead of telepathy. Still very cool.
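The whole round trip described above really is just a one-bit channel, which a toy sketch makes obvious. Everything here is illustrative: the threshold value and intensity margins are made up, and the real decoding of course happens in EEG hardware and in the inquirer’s visual cortex, not in three Python functions.

```python
# Toy model of the one-bit brain-to-brain channel described in the post.
PHOSPHENE_THRESHOLD = 1.0   # hypothetical, in arbitrary stimulator units

def read_bit(answer):
    # Respondent stares at the matching light; the EEG step decodes
    # the flicker frequency (12 Hz = yes, 13 Hz = no) into one bit.
    return 1 if answer == "yes" else 0

def write_bit(bit):
    # "Write" the bit with TMS: stimulate above the inquirer's phosphene
    # threshold for yes, below it for no (margins are made up here).
    intensity = (1.2 if bit else 0.8) * PHOSPHENE_THRESHOLD
    return intensity > PHOSPHENE_THRESHOLD   # True iff a phosphene is seen

def inquirer_decodes(saw_phosphene):
    return "yes" if saw_phosphene else "no"

# One full round trip: light flicker -> EEG bit -> TMS pulse -> phosphene.
assert inquirer_decodes(write_bit(read_bit("yes"))) == "yes"
assert inquirer_decodes(write_bit(read_bit("no"))) == "no"
```

Seen this way, the “telepathy” is a channel with a capacity of one bit per question, which is why the content transmitted is so much less exciting than the headlines suggest.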
Stocco, A., Prat, C., Losey, D., Cronin, J., Wu, J., Abernethy, J., & Rao, R. (2015). Playing 20 Questions with the Mind: Collaborative Problem Solving by Humans Using a Brain-to-Brain Interface. PLOS ONE, 10(9). DOI: 10.1371/journal.pone.0137303