June 21, 2017
People who suffer a stroke often undergo a brain scan at the hospital, allowing doctors to determine the location and extent of the damage. Researchers who study the effects of strokes would love to be able to analyze these images, but the resolution is often too low for many analyses.
To help scientists take advantage of this untapped wealth of data from hospital scans, a team of MIT researchers, working with doctors at Massachusetts General Hospital and many other institutions, has devised a way to boost the quality of these scans so they can be used for large-scale studies of how strokes affect different people and how they respond to treatment.
"These images are quite unique because they are acquired in routine clinical practice when a patient comes in with a stroke," says Polina Golland, an MIT professor of electrical engineering and computer science. "You couldn't stage a study like that."
Using these scans, researchers could study how genetic factors influence stroke survival or how people respond to different treatments. They could also use this approach to study other disorders such as Alzheimer's disease.
Golland is the senior author of the paper, which will be presented at the Information Processing in Medical Imaging conference during the week of June 25. The paper's lead author is Adrian Dalca, a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory. Other authors are Katie Bouman, an MIT graduate student; William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering at MIT; Natalia Rost, director of the acute stroke service at MGH; and Mert Sabuncu, an assistant professor of electrical and computer engineering at Cornell University.
Filling in data
Scanning the brain with magnetic resonance imaging (MRI) produces many 2-D "slices" that can be combined to form a 3-D representation of the brain.
For clinical scans of patients who have had a stroke, images are taken rapidly due to limited scanning time. As a result, the scans are very sparse, meaning that the image slices are taken about 5-7 millimeters apart. (The in-slice resolution is 1 millimeter.)
For scientific studies, researchers usually obtain much higher-resolution images, with slices only 1 millimeter apart, which requires keeping subjects in the scanner for a much longer period of time. Scientists have developed specialized computer algorithms to analyze these images, but these algorithms don't work well on the much more plentiful but lower-quality patient scans taken in hospitals.
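As a rough back-of-the-envelope illustration (assuming about 140 millimeters of brain coverage along the slice axis and a 256-by-256-millimeter in-plane field of view at 1-millimeter resolution; these numbers are not from the study), the snippet below compares how many slices the two kinds of scans yield:

```python
# Illustrative comparison of clinical vs. research scan density.
# Assumes ~140 mm of coverage along the slice axis and a
# 256 x 256 mm in-plane field of view at 1 mm resolution.
coverage_mm = 140
in_plane = (256, 256)

clinical_spacing_mm = 6   # clinical slices roughly 5-7 mm apart
research_spacing_mm = 1   # research slices about 1 mm apart

clinical_slices = coverage_mm // clinical_spacing_mm   # ~23 slices
research_slices = coverage_mm // research_spacing_mm   # 140 slices

print(f"clinical volume: {in_plane[0]} x {in_plane[1]} x {clinical_slices}")
print(f"research volume: {in_plane[0]} x {in_plane[1]} x {research_slices}")
# The research-style volume has roughly six times as many slices,
# which is the gap the new technique has to fill in.
```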
The MIT researchers, along with their collaborators at MGH and other hospitals, were interested in taking advantage of the vast numbers of patient scans, which would allow them to learn much more than can be gleaned from smaller studies that produce higher-quality scans.
"These research studies are very small because you need volunteers, but hospitals have hundreds of thousands of images. Our motivation was to take advantage of this huge set of data," Dalca says.
The new approach essentially fills in the data that is missing from each patient scan. This can be done by drawing on information from the entire collection of scans and using it to recreate the anatomical features missing from any individual scan.
"The key idea is to generate an image that is anatomically plausible, and to an algorithm looks like one of those research scans, and is completely consistent with clinical images that were acquired," Golland says. "Once you have that, you can apply every state-of-the-art algorithm that was developed for the beautiful research images and run the same analysis, and get the results as if these were the research images."
Once these research-quality images are generated, researchers can then run a set of algorithms designed to help with analyzing anatomical features. These include the alignment of slices and a process called skull-stripping that eliminates everything but the brain from the images.
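These steps are commonly run with standard neuroimaging tools. The sketch below assumes FSL's command-line utilities are installed (`bet` for skull-stripping, `flirt` for affine alignment) and uses placeholder file names; the article does not say which software the team uses, so this is illustrative only.

```python
import subprocess

def preprocess(volume_nii, template_nii, out_prefix):
    """Run typical downstream steps on a reconstructed, research-quality volume.

    Assumes FSL is installed; file names are placeholders.
    """
    stripped = f"{out_prefix}_brain.nii.gz"
    aligned = f"{out_prefix}_aligned.nii.gz"

    # Skull-stripping: remove everything but the brain from the image.
    subprocess.run(["bet", volume_nii, stripped], check=True)

    # Alignment: register the skull-stripped volume to a common template
    # so scans from different patients can be compared voxel for voxel.
    subprocess.run(["flirt", "-in", stripped, "-ref", template_nii,
                    "-out", aligned], check=True)
    return aligned
```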
Throughout this process, the algorithm keeps track of which pixels came from the original scans and which were filled in afterward, so that analyses done later, such as measuring the extent of brain damage, can be performed only on information from the original scans.
"In a sense, this is a scaffold that allows us to bring the image into the collection as if it were a high-resolution image, and then make measurements only on the pixels where we have the information," Golland says.
Higher quality
Now that the MIT team has developed this technique for enhancing low-quality images, they plan to apply it to a large set of stroke images obtained by the MGH-led consortium, which includes about 4,000 scans from 12 hospitals.
"Understanding spatial patterns of the damage that is done to the white matter promises to help us understand in more detail how the disease interacts with cognitive abilities of the person, with their ability to recover from stroke, and so on," Golland says.
The researchers also hope to apply this technique to scans of patients with other brain disorders.
"It opens up lots of interesting directions," Golland says. "Images acquired in routine medical practice can give anatomical insight, because we lift them up to that quality that the algorithms can analyze."