Peering into the body and visualizing its molecular secrets one step closer to reality

Peering into the body and visualizing its molecular secrets, once the stuff of science fiction, is one step closer to reality with a study from researchers at the Stanford University School of Medicine and the University of California, San Diego School of Medicine.

The research team is reporting that by looking at images from radiology scans - such as the CT scans a cancer patient routinely gets - radiologists can discern most of the genetic activity of a tumor. Such information could lead to diagnosing and treating patients individually, based on the unique characteristics of their disease. The study will be published May 21 in the advance online edition of Nature Biotechnology.

"Potentially in the future one can use imaging to directly reveal multiple features of diseases that will make it much easier to carry out personalized medicine, where you are making diagnoses and treatment decisions based exactly on what is happening in a person," said co-senior author Howard Chang, MD, PhD, assistant professor of dermatology at Stanford, who led the genomics arm of the study.

The study's other senior author is Michael Kuo, MD, assistant professor of interventional radiology at UCSD, who said their work will help doctors obtain the molecular details of a specific tumor or disease without having to remove body tissue for a biopsy. "Ideally, we would have personalized medicine achieved in a noninvasive manner," said Kuo, who spearheaded the project in 2001 while he was a radiology resident at Stanford.

In some ways, the work brings to mind a device that science fiction fans may recall from the TV series, "Star Trek." "In almost every episode of 'Star Trek,' there is a device called a tricorder, which they used noninvasively to scan living or nonliving matter to determine its molecular makeup," said Chang. "Something like that would be very, very useful."

In real life, this approach would avoid the pain and risk of infection and bleeding from a biopsy and would not destroy tissue, so the same site could be tested again and again.

At the time the project started at Stanford in 2001, the medical school was ground-zero for studies of DNA microarrays - the lab tools that can screen thousands of genes at a time, developed by biochemistry professor Patrick Brown, MD, PhD. Microarrays have proven to be extremely useful for identifying groups of genes that are more active or less active in a disease such as cancer, compared with normal tissue.

"Radiology - while making great technological advances towards capturing more and more information - seemed to be largely oblivious to a fundamental shift in medicine towards genomic, personalized medicine that was beginning to take place," said Kuo, who is also the director of the Center for Translational Medical Systems at UCSD. "Being there at Stanford, I was aware of that shift and I was trying to think of what are the ways that we as radiologists could merge and integrate that data so we could take advantage of it."

A problem with using biopsied material for microarrays is that the tissue is destroyed in the process. Thus, there is no opportunity to re-test the same tissue after, say, a course of chemotherapy. Imaging through MRI or CT, however, leaves all organs intact and functioning.

To increase the research team's expertise in the areas of genomics and computational biology, Kuo brought in Chang and the paper's lead author, Eran Segal, PhD, in 2004. Chang had been using the gene activity patterns of microarrays to predict cancer outcome. Segal developed algorithms during his doctoral studies at Stanford that played a critical role in the analysis of the massive amounts of data encompassed in the study.

"When we look at noninvasive images, there are lots and lots of different patterns that had no known meaning," said Chang. "We thought that maybe we could come up with a way to systematically connect the gene activity seen with microarrays to imaging patterns, to translate meaning into three different types of languages, from genes to images and then to outcome of the disease process."

Their method was similar to what archeologists might do to recover a lost language. Chang compared their process - translating genetic activity patterns into medical imaging terminology - to the breakthrough that occurred when the Rosetta Stone was uncovered in 1799. The stone bears the same text written in three scripts: Egyptian hieroglyphics, Demotic and ancient Greek. Every time certain letters showed up in the Greek, a certain set of symbols would show up in the hieroglyphics. That correspondence allowed previously undecipherable hieroglyphic writing to be understood.

The first step for the researchers was the equivalent of finding words for the hieroglyphics: to define the language of radiology. Kuo and his radiology colleagues initially defined mutually agreeable terminology for more than 100 features that appeared on scans. As their work progressed, they found they only needed 28 of them to capture maximal information.
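As a concrete illustration of what such a shared vocabulary might look like in practice, here is a minimal sketch in which each tumor is scored against a fixed list of named imaging traits. The trait names, the scoring scale and the class design are assumptions made for this example; they are not the 28 traits the researchers actually settled on.

```python
# Illustrative sketch only: hypothetical trait names and scoring scale,
# not the study's actual 28-trait vocabulary.
from dataclasses import dataclass, field

# A small, made-up subset of imaging traits a radiologist might score.
IMAGING_TRAITS = (
    "internal_arteries",
    "hypodense_halo",
    "texture_heterogeneity",
    "margin_sharpness",
)

@dataclass
class TumorScan:
    patient_id: str
    # Each trait is scored on an agreed ordinal scale (here 0 = absent, 1 = present).
    trait_scores: dict = field(default_factory=dict)

    def score(self, trait: str, value: int) -> None:
        if trait not in IMAGING_TRAITS:
            raise ValueError(f"unknown trait: {trait}")
        self.trait_scores[trait] = value

scan = TumorScan(patient_id="case_001")
scan.score("internal_arteries", 1)
scan.score("hypodense_halo", 0)
```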

They then matched those imaging features with a vast stockpile of microarray data generated from human liver cancer samples. They also could compare their data with how the cancer patient fared.

What they found is that two very different aspects of cancer - how it looks on imaging and how it behaves at the molecular level - are strongly connected. Of the 5,000 or more genes whose activity differs in cancerous tissue, the researchers could reconstruct the expression of 80 percent just by looking at the standard CT scans the patients had already undergone.
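To give a rough, concrete sense of what "reconstructing gene expression from images" involves, the following is a minimal sketch, assuming synthetic data and a simple per-gene regularized linear model rather than the authors' actual module-based analysis. It asks, gene by gene, whether the scored imaging traits can predict expression under cross-validation, and reports the fraction of genes that clear an arbitrary threshold; every size, name and cutoff here is an assumption for illustration only.

```python
# Minimal sketch with synthetic data; NOT the study's pipeline, only an
# illustration of predicting gene expression from scored imaging traits.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)

# Sizes chosen for illustration; the study considered more than 5,000 genes.
n_patients, n_traits, n_genes = 60, 28, 500

# Imaging traits scored per patient (e.g. on a 0/1/2 ordinal scale).
traits = rng.integers(0, 3, size=(n_patients, n_traits)).astype(float)

# Synthetic microarray expression: most genes depend on the traits, plus noise.
weights = rng.normal(size=(n_traits, n_genes)) * (rng.random(n_genes) < 0.8)
expression = traits @ weights + rng.normal(scale=2.0, size=(n_patients, n_genes))

reconstructed = 0
for g in range(n_genes):
    # Cross-validated prediction of this gene's activity from imaging alone.
    pred = cross_val_predict(Ridge(alpha=1.0), traits, expression[:, g], cv=5)
    if np.corrcoef(pred, expression[:, g])[0, 1] > 0.5:   # arbitrary cutoff
        reconstructed += 1

print(f"{reconstructed / n_genes:.0%} of genes predicted from imaging traits")
```

In the published work, of course, the connection was established on real liver cancer scans and their matched microarray profiles rather than simulated data.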

"Clearly, we are very far from clinical applications of these tools that we developed," said Segal, who is now a computational biologist at the Weizmann Institute of Science in Rehovot, Israel. "But the fact that we saw strong connections between the imaging features and the molecular gene activity data suggests that this could be a promising and fruitful research direction."

Much as a wine taster can pick out a wine's aromas once the lexicon of wine-tasting has been learned, radiologists - already experts at recognizing the visual differences between normal and pathological tissues - simply need to know what to look for and what it means.

"They already have the skills, so it's not a quantum leap by any stretch - if this were to be validated ultimately on a large-scale-for this to be implemented," said Kuo.
