We are notoriously bad at simultaneously doing different tasks with the same bit of brain. For instance, in his book Seeing Voices, Oliver Sacks points out that it's known to be extremely difficult to sign in American Sign Language while speaking in English, which has a completely different syntax and grammar. Likewise, it is thought to be impossible for someone to write Chinese while speaking English. So, if the tongue and the skin are providing imagery to the brain (as discussed in the last post), it may be important that it's complementary to that coming from the eyes: enriching the visual stream rather than feeding in some completely new kind of information.
However, this may not be so easy, as the US military are finding out. A new program called the Multi-Spectral Adaptive Networked Tactical Imaging System (MANTIS) is being developed by Raytheon, Sarnoff, and Rockwell Collins Optoelectronics. The project involves providing different kinds of images to each eye: this could be high resolution for one, a larger field of view for the other; a natural image for one, a processed image with a target picked out for the other. That kind of thing. The idea is simply to feed as much information into the soldier as possible, so he can see the window where the sniper is and the whole building at the same time.
While the technology companies have been focusing on, well, the technology, David Curry, Lawrence Hamilton and Darryl Hopper at the Air Force Research Lab (Wright-Patterson Air Force Base, Ohio) have been trying to figure out how well the new system is likely to work once you add a human user. They have discovered that, though the brain should be able to fuse the images—they are, after all, of the same thing—it has trouble. With poor design, they say, this 'dichoptic' technology may degrade depth perception and contrast sensitivity, and slow reaction time. Not something you're looking to do to a fighter.
For those who are trying to design new systems intended to sense the environment and feed information into the brain, this points to one area that will have to be studied and understood: how much information is too much to take in.
Our brains can't process everything and have to decide what to pay attention to, based on whether they judge information to be reliable and/or useful. A recent study from the University of Pennsylvania suggests that the retina transmits pulses to the brain at about 10Mb/s (about the same as an Ethernet connection), but the amount of information these pulses carry is likely to be much higher, because of the complex temporal coding of the spikes. Even without the other senses, that's a lot of information to process in real time, and we cannot consciously take all this in at once.
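The 10Mb/s figure comes from multiplying a cell count by a per-cell rate. As a back-of-envelope sketch (the round numbers here—roughly a million retinal ganglion cells, each carrying on the order of 10 bits per second—are illustrative assumptions, not the study's exact methodology):

```python
# Back-of-envelope estimate of retinal output rate.
# Assumed round numbers for illustration only:
GANGLION_CELLS = 1_000_000   # ~1 million output neurons per retina (approximate)
BITS_PER_CELL = 10           # rough per-cell information rate, bits/s (assumption)

retina_rate_mbps = GANGLION_CELLS * BITS_PER_CELL / 1e6
print(f"Estimated retinal output: ~{retina_rate_mbps:.0f} Mb/s per eye")

# Compare with classic 10BASE-T Ethernet, which runs at 10 Mb/s
ethernet_mbps = 10
print(f"Comparable to Ethernet: {retina_rate_mbps / ethernet_mbps:.1f}x")
```

Whatever the exact numbers, the point stands: the raw feed is far more than conscious attention can handle.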
So our brain filters out the things that are not relevant (such as background images or sounds that stay the same) and makes us concentrate on things that are new or changing (like moving objects or loud noises), or specifically relevant to what we're doing. We can only pay attention to a limited amount of information, both overall and within any particular sense.
Engineers, military and otherwise, will have to figure out how to balance between overload and omniscience.
Figure: The concept of dichoptic displays, from Curry et al., Proc. SPIE 6224.
Originally posted on Books on Brains and Machines.