01 October 2005

Sharing Minds

The latest issue of Nature had a story about new technology for assisting stroke victims and paraplegics. Because thinking about moving your body activates the same neurons that are involved in the movement itself, it is possible to measure the relevant neuronal activity, decode the intent, and control devices accordingly (G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain-computer communication," Proceedings of the IEEE, vol. 89, pp. 1123-1134, 2001). The technique has been used to enable paralyzed patients to control robots and to ‘key’ messages into computers. Until now, though, it has involved the fairly invasive process of implanting sensors directly into the brain.

At a conference earlier this week on virtual reality and ‘telepresence’, held at University College London, a group from Graz, Austria, reported on experiments in which several people, each wearing what looks to be a modified bathing cap covered with sensors, were able to direct a simulated walk through a virtual environment (R. Leeb et al., "Walking from thoughts: Not the muscles are crucial, but brain waves!," Presence 2005; the link is to the entire 15 MB meeting proceedings, so click cautiously). The sensor cap does not measure the neuronal pattern directly, but rather the EEG, from which the computer derives a control signal. It is not an easy device to use, requiring a training process that can be difficult. One of the authors said that it took him about five hours to learn the fairly simple control of moving versus standing still.
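The core of that derivation is less mysterious than it may sound. Motor imagery is known to suppress the mu rhythm (roughly 8–12 Hz) over motor cortex, so one crude control signal is simply mu-band power against a threshold. Here is a toy sketch of the idea — every number, name, and threshold is invented for illustration, and real systems use trained classifiers over many channels:

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Naive DFT power in the band [f_lo, f_hi] Hz (illustrative, not fast)."""
    n = len(samples)
    power = 0.0
    for k in range(n // 2):
        if f_lo <= k * fs / n <= f_hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def control_signal(samples, fs, threshold):
    """Motor imagery suppresses mu-band power (event-related
    desynchronization): low power -> 'walk', high power -> 'stand'."""
    return "walk" if band_power(samples, fs, 8.0, 12.0) < threshold else "stand"

# Synthetic demo: a strong 10 Hz rhythm (resting) vs. a suppressed one (imagery).
fs = 128
rest = [math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
imagery = [0.05 * math.sin(2 * math.pi * 10 * t / fs) for t in range(fs)]
print(control_signal(rest, fs, threshold=1.0))     # -> "stand"
print(control_signal(imagery, fs, threshold=1.0))  # -> "walk"
```

The five hours of training the author mentioned presumably go into making the user's brain produce separable patterns at all, which no amount of thresholding can shortcut.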

This is, of course, a good thing with tremendous potential for enhancing the lives of the physically disabled. And yet, it has me wondering.

Imagine how this technology might develop. At present, it only measures integrated signals. Given our understanding of how to map out neuronal activity with fMRI and PET and other tomographic methods, one can anticipate using devices similar to the Graz brain interface cap to infer the full map of brain activity associated with motor actions. Of course, it would require different kinds of sensors, which would themselves require significant miniaturization of existing devices. But none of this development is forbidden by the laws of physics.

Now, suppose we have two people wearing these “full capability brain interface” caps. Connect both to the computer. Call one the agent and the other the replicant (for reasons that will become obvious). We don’t require that the agent imagine an activity; she can actually perform it. In this way, we get the neuronal elements of both the intentional and performance aspects of the activity, including all the musculoskeletal feedbacks. And we don’t require that the computer actually map out the agent’s neuronal patterns. We just have the computer make some comparison between the measurements on the agent and those on the replicant. (It may be that this is most accurately done by mapping both brains and comparing the maps, but it isn’t clear that that is essential.) This gives us a difference signal.
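The simplest possible version of that difference signal treats each cap's output as a vector of channel measurements and compares element-wise. The channel count and values below are made up; real signals would be vastly richer:

```python
def difference_signal(agent, replicant):
    """Per-channel difference between two equal-length measurement vectors."""
    if len(agent) != len(replicant):
        raise ValueError("caps must report the same channels")
    return [a - r for a, r in zip(agent, replicant)]

agent_map     = [1.0, 0.25, 0.5, 0.75]   # agent actually performing the action
replicant_map = [0.5, 0.25, 0.5, 0.25]   # replicant attempting to match it
diff = difference_signal(agent_map, replicant_map)
print(diff)  # [0.5, 0.0, 0.0, 0.5]
```

Whether the comparison is done channel-by-channel like this, or on full reconstructed maps as suggested above, the output is the same kind of object: a vector of discrepancies to feed back.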

That signal could be quite complex, carrying information about a number of characteristics distinguishing the two neuronal maps. Such complex difference signals could be constructed to be fairly vivid, e.g. by coding them as music, using multiple pitches, timbres, etc. The result, then, is something that could be used as a feedback signal, which we give to the replicant.
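As a toy illustration of that musical coding, assign each channel of the difference signal its own pitch and let the channel's magnitude set that tone's loudness; the pitch choices and sample rate below are arbitrary:

```python
import math

def sonify(diff, fs=8000, duration=0.1):
    """Mix one pure tone per channel, with loudness proportional to that
    channel's difference magnitude -- a toy 'difference signal as music'."""
    pitches = [220.0 * (2 ** (i / 4)) for i in range(len(diff))]  # spread tones
    n = int(fs * duration)
    return [
        sum(abs(d) * math.sin(2 * math.pi * p * t / fs)
            for d, p in zip(diff, pitches))
        for t in range(n)
    ]

tone = sonify([0.5, 0.0, 0.0, 0.5])          # audible: two channels disagree
silence = sonify([0.0, 0.0, 0.0, 0.0])       # silent: maps agree everywhere
print(max(abs(s) for s in silence))          # -> 0.0
```

The appealing property is that agreement is literally silence: the replicant hears how far off each part of the match is, and in which dimensions.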

Would it be possible to use that feedback process to train the replicant to duplicate the neuronal map of the agent, by carrying out the same activity? (There are some difficult details here on how you change the signal to encourage changes toward, or discourage changes away from, the agent’s map, but those seem to be technical, rather than conceptual, problems.) Would it be possible to do this not just for motor activity but for perceptual activity? For feelings? When the feedback signal has been zeroed, are the replicant’s thoughts the same as the agent’s?
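The logic of the loop, at least, is easy to caricature. In the thought experiment the replicant's own brain does the adapting; in the sketch below a vector stands in for the replicant's state and a proportional nudge stands in for learning, just to show the difference signal being driven to zero (the rate and round count are arbitrary):

```python
def train(agent, replicant, rate=0.5, rounds=20):
    """Toy feedback loop: each round, nudge the replicant's state toward the
    agent's in proportion to the current difference signal."""
    state = list(replicant)
    for _ in range(rounds):
        diff = [a - s for a, s in zip(agent, state)]
        state = [s + rate * d for s, d in zip(state, diff)]
    return state

agent = [1.0, 0.25, 0.5]
final = train(agent, [0.0, 0.0, 0.0])
print(all(abs(a - f) < 1e-5 for a, f in zip(agent, final)))  # -> True
```

The conceptually hard part is exactly the one flagged in the parenthesis above: a brain is not a vector you can nudge directly, so everything hinges on whether the feedback signal can steer the replicant's own learning.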

Philosophers and cognitive scientists assure us that the description of a mental state is not the same thing as having the mental state itself. But what if the description of the mental state is a perfect replication of the mental state?

As I said at the outset, there are obvious technical difficulties in implementing these “mind-sharing” caps. But it ought to be feasible to try out elements of this process with existing measurement devices. You could pipe the signal from an agent in one MRI machine out to a lab in another hospital where the replicant is in a second MRI machine. The rest is just software. It would be an interesting test, I think.
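And the "just software" part really is modest. Here is a toy version of the pipe, with a local socket pair standing in for the hospital-to-hospital link and made-up measurement frames standing in for scanner output:

```python
import json
import socket

# Each frame of measurements is serialized as a line of JSON and streamed
# over a socket; socketpair() stands in for the real network connection.
sender, receiver = socket.socketpair()
frames = [[0.5, 0.25, 0.75], [0.5, 0.5, 0.5]]
for frame in frames:
    sender.sendall((json.dumps(frame) + "\n").encode())
sender.close()  # EOF tells the receiving site the scan is over

received = [json.loads(line) for line in receiver.makefile()]
receiver.close()
print(received == frames)  # -> True
```

Getting frames out of a scanner in real time is the genuinely awkward step; once the data is flowing, the comparison and feedback machinery is ordinary programming.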

