As the COVID-19 pandemic has swept the world, researchers have published hundreds of papers each week reporting their findings - many of which have not undergone a thorough peer review process to gauge their reliability.
In some cases, poorly validated research has heavily influenced public policy, as when a French team reported that COVID-19 patients were cured by a combination of hydroxychloroquine and azithromycin.
The claim was widely publicized, and U.S. patients were soon prescribed these drugs under an emergency use authorization. Subsequent research involving larger numbers of patients, however, has cast serious doubt on those claims.
With so much COVID-related information being released each week, how can researchers, clinicians and policymakers keep up?
In a commentary published this week in Nature Biotechnology, University of New Mexico scientist Tudor Oprea, MD, PhD, and his colleagues, many of whom work at artificial intelligence (AI) companies, make the case that AI and machine learning have the potential to help researchers separate the wheat from the chaff.
Oprea, professor of Medicine and Pharmaceutical Sciences and chief of the UNM Division of Translational Informatics, notes that the sense of urgency to develop a vaccine and devise effective treatments for the coronavirus has led many scientists to bypass the traditional peer review process by publishing "preprints" - preliminary versions of their work - online.
While that enables rapid dissemination of new findings, "The problem comes when claims about certain drugs that have not been experimentally validated appear in the preprint world," Oprea says. Among other things, bad information may lead scientists and clinicians to waste time and money chasing false leads.
AI and machine learning can harness massive computing power to check many of the claims being made in a research paper, suggest the authors, a group of public and private-sector researchers from the U.S., Sweden, Denmark, Israel, France, the United Kingdom, Hong Kong, Italy and China, led by Jeremy Levin, chair of the Biotechnology Innovation Organization, and Alex Zhavoronkov, CEO of InSilico Medicine.
"I think there is tremendous potential there. I think we are on the cusp of developing tools that will assist with the peer review process."
Tudor Oprea, MD, PhD, Scientist, University of New Mexico
Although the tools are not fully developed, "We're getting really, really close to enabling automated systems to digest tons of publications and look for discrepancies," he says. "I am not aware of any such system that is currently in place, but we're suggesting with adequate funding this can become available."
Text mining, in which a computer combs through millions of pages of text looking for specified patterns, has already been "tremendously helpful," Oprea says. "We're making progress in that."
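As a rough illustration of the kind of pattern-matching Oprea describes (this sketch is not from the paper; the corpus, patterns and flagging rule are all invented for illustration), a simple text-mining pass might flag efficacy claims that appear without common markers of rigorous validation:

    import re

    # Toy corpus standing in for preprint abstracts (invented examples).
    abstracts = [
        "Hydroxychloroquine and azithromycin cured 90% of patients in our open-label study.",
        "In a randomized, placebo-controlled trial, drug X shortened recovery time.",
    ]

    # Patterns for strong efficacy claims and for markers of rigorous validation.
    CLAIM = re.compile(r"\b(cured?|effective|efficacy)\b", re.IGNORECASE)
    EVIDENCE = re.compile(r"\b(randomized|controlled|double-blind|placebo)\b", re.IGNORECASE)

    # Flag abstracts that assert efficacy but show no validation markers.
    for text in abstracts:
        if CLAIM.search(text) and not EVIDENCE.search(text):
            print("Flag for scrutiny:", text)

A production system would of course go well beyond keyword matching, drawing on the natural-language and machine-learning methods the authors describe.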
Since the COVID-19 pandemic took hold, Oprea himself has used advanced computational methods to help identify existing drugs with potential antiviral activity, culled from a library of thousands of candidates.
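His published screening work is far more sophisticated, but the basic shape of such computational triage - scoring a large candidate library and short-listing the top hits for experimental follow-up - can be sketched as follows (all drug names, scores and the cutoff here are hypothetical):

    # Hypothetical candidate library with predicted antiviral-activity scores.
    # Real virtual screening would derive scores from docking, activity data
    # or machine-learned models; these numbers are invented for illustration.
    candidates = {"drug_A": 0.91, "drug_B": 0.34, "drug_C": 0.78, "drug_D": 0.12}

    THRESHOLD = 0.75  # arbitrary cutoff for experimental follow-up

    # Rank the library and keep only high-scoring candidates.
    shortlist = sorted(
        (name for name, score in candidates.items() if score >= THRESHOLD),
        key=lambda name: candidates[name],
        reverse=True,
    )
    print("Candidates for experimental validation:", shortlist)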
"We're not saying we have a cure for peer review deficiency, but we are saying that that a cure is within reach, and we can improve the way the system is currently implemented," he says. "As soon as next year we may be able to process a lot of these data and serve as additional resources to support the peer review process."
Journal reference:
Levin, J. M., et al. (2020). Artificial intelligence, drug repurposing and peer review. Nature Biotechnology. https://doi.org/10.1038/s41587-020-0686-x