A glimpse at vision: First impressions count

Human beings far outpace computers in their ability to recognize faces and other objects, handling with ease variations in size, color, orientation, lighting conditions and other factors.

But how our brains handle this visual processing isn't known in much detail. Researchers at Children's Hospital Boston, taking advantage of brain mapping in patients about to undergo surgery for epilepsy, have demonstrated for the first time that the brain, at a very early processing stage, can recognize objects rapidly and under a variety of conditions. The findings were published in the journal Neuron on April 30.

Visual information flows from the retina of the eye up through a hierarchy of visual areas in the brain, finally reaching the temporal lobe. The temporal lobe, which is ultimately responsible for our visual recognition capacity and our visual perceptions, also signals back to earlier processing areas. This cross-talk solidifies visual perception.

"What hasn't been entirely clear is the relative contribution of these "feed-forward" and "feed-back" signals," says Gabriel Kreiman, PhD, of the Department of Ophthalmology at Children's Hospital Boston and the study's senior investigator. "Some people think that if you don't have feedback, you don't have vision. But we've shown that there is an initial wave of activity that gives a quick initial impression that's already very powerful."

Although feedback from higher brain areas may occur later and is often important, very fast visual processing would have an evolutionary advantage in critical situations, such as encountering a predator, Kreiman adds.

Previous human studies have relied on noninvasive brain monitoring, either with electrodes placed on the surface of the head or with imaging techniques, and have captured brain activity at intervals of seconds – lagging considerably behind the brain's actual processing speeds. Moreover, these techniques gather data from fairly general brain locations. By placing electrodes directly on the brain, the Children's researchers were able to obtain data at extremely high temporal resolution – picking up signals as soon as 100 milliseconds (thousandths of a second) after presentation of a visual stimulus – and to monitor activity in very discrete, specific locations.

Kreiman collaborated with Children's neurosurgeon Joseph Madsen, MD, who was already doing brain mapping in patients with epilepsy, a procedure that ensures that surgery to remove damaged brain tissue will not harm essential brain functions. The team implanted electrodes in the brains of 11 adolescents and young adults with epilepsy (anywhere from 48 to 126 electrodes per patient), in the areas where their seizures were believed to originate. While the electrodes recorded brain activity, the patients were shown a series of images from five categories – animals, chairs, human faces, fruits and vehicles – at different sizes and degrees of rotation.

The recordings demonstrated that certain areas of the brain's visual cortex selectively recognize certain categories of objects, responding so strongly and consistently that the researchers could use mathematical algorithms to determine what patients were viewing, just by examining their pattern of neural responses. Moreover, these responses occurred regardless of the object's scale or degree of rotation. And recognition was evident within as little as 100 milliseconds, too fast for information to be relayed from the visual cortex to the temporal lobe and back again.
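To give a sense of what such a decoding analysis involves (the article does not specify the team's actual algorithms or data format), here is a minimal sketch in Python: a linear classifier is trained on a hypothetical matrix of per-trial electrode responses and asked to predict which of the five object categories was on screen. Decoding accuracy well above the 20% chance level would indicate that the recorded activity carries category information.

```python
# Minimal decoding sketch (illustrative only; not the study's actual pipeline).
# `responses` is a hypothetical (n_trials, n_electrodes) matrix of intracranial
# signal features; `labels` names the object category shown on each trial.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_electrodes = 500, 64                        # placeholder sizes
categories = ["animal", "chair", "face", "fruit", "vehicle"]
labels = rng.choice(categories, size=n_trials)          # placeholder category labels
responses = rng.normal(size=(n_trials, n_electrodes))   # placeholder neural features

# Cross-validated linear classifier: accuracy well above the 20% chance level
# (five categories) would imply the recorded population response is selective.
decoder = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(decoder, responses, labels, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.20)")
```

With the random placeholder data above, accuracy hovers around chance; the point of the sketch is only the shape of the analysis, not its result.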

Kreiman and Madsen are now extending these studies by showing patients movies, which more closely resemble how we see images in real life. Since each patient is allowed to choose his or her own movie, Kreiman's team must analyze its visual content frame by frame and then link that data to the patient's brain activity.
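As a rough illustration of what that linkage involves (the team's actual pipeline is not described here), the sketch below assumes a movie with a known frame rate and a neural recording with a known sampling rate, and averages the recording over each frame's time window so that per-frame visual features can later be compared against a per-frame neural response.

```python
# Hypothetical alignment of movie frames to a neural recording (illustrative only).
import numpy as np

fs = 1000            # assumed neural sampling rate, samples per second
frame_rate = 24      # assumed movie frame rate, frames per second
n_frames = 2400      # e.g., 100 seconds of film
n_channels = 64      # assumed number of electrodes

rng = np.random.default_rng(1)
neural = rng.normal(size=(n_frames * fs // frame_rate, n_channels))  # placeholder recording
frame_features = rng.normal(size=(n_frames, 128))                    # placeholder per-frame features

# Average the neural signal over the interval during which each frame was on
# screen, giving one neural vector per frame to relate to that frame's content.
samples_per_frame = fs // frame_rate
aligned = neural[: n_frames * samples_per_frame]
aligned = aligned.reshape(n_frames, samples_per_frame, n_channels).mean(axis=1)
print(aligned.shape, frame_features.shape)  # (2400, 64) (2400, 128)
```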

Why is it important to tease apart visual processing in this way? Kreiman envisions using the vision algorithms discovered in humans to teach computers to see as well as people do, so that they could help in real-life applications such as spotting terrorists in airports, helping drivers avoid collisions with hard-to-see pedestrians, or analyzing hundreds of tumor samples for malignancy. A more futuristic application would be the design of brain-computer interfaces that would allow people with visual impairment to have at least partial visual perception.

Over the last decade, Kreiman and Itzhak Fried, MD, PhD, of UCLA have studied the hippocampus, which is involved in memory, and found individual brain cells that responded consistently when people were shown specific images such as pictures of Jennifer Aniston and Bill Clinton. Kreiman is interested in further exploring the relation between visual processing and memory and incorporating the physiological knowledge into computational algorithms.
