
Brain imaging distinguishes between seeing and imagining

With your eyes closed, picture a familiar face -- Bill Clinton, for instance -- or a place -- the Grand Canyon, maybe, or Fenway Park. By looking at data from your brain at work, Associate Professor Nancy Kanwisher can read your mind.

The MIT brain expert and a colleague now at a Canadian research center can tell with 85 percent accuracy (at least in their best subjects) whether you are thinking of a face or a place.

And while it may seem that there is a big difference between looking at a real face or place and merely imagining one, the difference is small as far as our brains are concerned. In fact, the two activities produce very similar patterns of response.

This research, published November 1 in the Journal of Cognitive Neuroscience, shows that "we use some of the same brain machinery when we actively see and when we simply imagine," said Professor Nancy Kanwisher of the Department of Brain and Cognitive Sciences.

She and Kathleen O'Craven of the Rotman Research Institute in Toronto used a brain imaging technique called functional magnetic resonance imaging (fMRI) to demonstrate that a particular part of the cortex is used when we see or think about faces, and a different cortical region is used during the perception and imagery of places.

"These findings strengthen evidence that imagery and perception share common processing mechanisms and demonstrate that the specific brain regions activated during mental imagery depend on the content of the visual image," the researchers wrote.

"Our ability to use fMRI to see clear and recognizable signatures of single cognitive events opens up a broad new landscape of future work exploring the neural correlates of thought," Professor Kanwisher said.

While past studies have suggested that "seeing with the mind's eye" engages many of the same mechanisms involved in visual perception, they have not proved it conclusively. One problem: can researchers ever be sure that their subjects are not forming a mental image of something?

One approach has been to ask subjects to rest before the imaging part of the experiment. "But what do you do when you rest? You mentally [form an] image," Professor Kanwisher said.

The researchers avoided this problem by asking three questions: whether particular kinds of mental images engage specific parts of the cortex, whether a visual perception creates a stronger fMRI response than a mental image, and whether fMRI signals during mental imagery are clear enough to let researchers categorize the kind of mental image from the brain scan alone.

Places and faces

Several years ago, Professor Kanwisher and colleagues identified a part of the brain called the parahippocampal place area (PPA), which responds strongly to images of indoor and outdoor scenes depicting the layout of space, but does not respond at all to faces. Another region called the fusiform face area (FFA) responds strongly when subjects view photographs of faces but weakly to other images.

Using this information, the researchers examined brain scans taken while subjects viewed photographs of places or faces and then, with their eyes closed, formed mental images of the same faces and scenes.

The resulting data "reveal a striking similarity between regions activated during (mental) imagery and those activated during (visual) perception," the authors wrote.

They then compared the magnitude of the response to visually presented images and mentally pictured images. In every case, the results were stronger for visual perception than for mental images.

In the final experiment, by looking at peaks in the graphed data, the researchers could tell whether a subject had heard the name of a famous person or a familiar place. The category of the stimulus was identified correctly on 85 percent of the trials.
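The logic behind that classification can be pictured with a simple sketch. This is not the authors' actual analysis pipeline; it only illustrates the idea that whichever region (FFA or PPA) shows the stronger peak response indicates the category of the imagined stimulus. The function name and the time-course data below are hypothetical.

```python
def classify_trial(ffa_signal, ppa_signal):
    """Label a trial 'face' if the fusiform face area (FFA) response
    peaks higher than the parahippocampal place area (PPA) response,
    and 'place' otherwise."""
    return "face" if max(ffa_signal) > max(ppa_signal) else "place"

# Hypothetical percent-signal-change time courses for one trial:
ffa = [0.1, 0.4, 0.9, 0.6, 0.2]   # strong FFA response
ppa = [0.1, 0.2, 0.3, 0.2, 0.1]   # weak PPA response
print(classify_trial(ffa, ppa))   # -> face
```

In practice the real analysis had to contend with noise and overlapping responses, which is why accuracy was 85 percent rather than perfect, but the comparison of region-specific peaks is the core of the approach the article describes.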

This study was supported by the Bunting Institute at Radcliffe College and grants from the National Institute of Mental Health, the Human Frontiers Science Program and the Dana Foundation.

A version of this article appeared in MIT Tech Talk on November 8, 2000.
