Jul 31 2018
With the cost of drugs a critical issue in health care, health insurance companies and government payers need to understand how new and existing drugs compare in terms of benefits and risks.
But there's a problem. When drugs are first approved, they have typically been compared in clinical trials either to a placebo or to a single standard of care, an established and widely accepted treatment. There may, however, be multiple drugs already on the market that have been shown to be better than that standard. And in diseases with high unmet need, drugs may even be approved without any comparator at all.
"This," says David Cheng, a postdoctoral researcher at Harvard's T.H. Chan School of Public Health, "limits our ability to compare the effectiveness of new drugs to all the other available treatment options that are out there."
To get around this problem, people often engage in "a kind of naïve comparison," says Cheng. "They'd look, say, at the survival rate for a cancer drug at a given time point in one study and compare it to the rate in another, even though the two studies would not be directly comparable. The patients might have more late-stage disease in one study and more early-stage disease in the other, or differ in some other significant characteristic, and this wouldn't be taken into account in the analysis. You'd end up with massive confounding."
Dealing with such confounding bias is especially challenging because analysts and researchers often have access to full individual patient-level data only for the new drug, and must rely on published data summaries for the existing drugs on the market.
To overcome the problem, analysts and researchers have turned to a method called matching-adjusted indirect comparison (MAIC). "If you have access to the individual-level data from one drug trial," says Cheng, "then you could reweight the observations or adjust the final analysis so that the patient characteristics match the summaries of another trial." Results based on MAIC have supported more than 20 successful reimbursement submissions, and the method is included in guidance on indirect comparisons issued by the National Institute for Health and Care Excellence (NICE) in the UK.
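In the formulation introduced by Signorovitch and colleagues, that reweighting is done by the method of moments: each patient in the trial with individual-level data receives a propensity-score-like weight of the form exp(x'β), with β chosen so that the weighted covariate means reproduce the published summaries of the comparator trial. The Python sketch below illustrates that one step; the covariate values, target means, and the maic_weights helper are illustrative, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

def maic_weights(X, target_means):
    """Estimate MAIC-style weights so the weighted means of the
    individual-level covariates X match published target_means.

    Weights take the form w_i = exp(x_i' beta); beta minimizes
    sum_i exp((x_i - target)' beta), a convex objective whose
    gradient is zero exactly when the weighted means match.
    """
    Xc = X - target_means                       # center at the published means
    res = minimize(
        lambda b: np.exp(Xc @ b).sum(),         # convex exponential-tilting objective
        x0=np.zeros(X.shape[1]),
        jac=lambda b: Xc.T @ np.exp(Xc @ b),
        method="BFGS",
    )
    w = np.exp(Xc @ res.x)
    return w / w.sum() * len(w)                 # rescale so weights sum to n

# Illustrative use: match a published trial's mean age (64) and
# proportion of late-stage patients (45%).
rng = np.random.default_rng(0)
X = np.column_stack([rng.normal(60, 8, 500),       # age in our trial
                     rng.binomial(1, 0.30, 500)])  # late-stage indicator
w = maic_weights(X, target_means=np.array([64.0, 0.45]))
print(np.average(X, axis=0, weights=w))            # ~ [64.0, 0.45]
```

A practical caveat: when the two trial populations overlap poorly, the weights become extreme and the effective sample size shrinks sharply, which is one way such an analysis can become unreliable.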
Despite the increasing use of MAIC to inform drug reimbursement decisions, its statistical performance has not been extensively studied or reported. Research conducted by Cheng with managing principal James Signorovitch and colleagues from Analysis Group, a global consulting firm with expertise in health economics and outcomes, is the first to identify conditions under which MAIC is valid: if applied correctly, MAIC can provide unbiased estimates of a treatment effect when the patient populations of the two trials are sufficiently similar, and when the probability that an individual is selected into one trial versus the other can be adequately modeled. The research also uses simulations to compare MAIC's potential for bias with that of other common approaches to cross-study comparison.
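The sketch below is a hypothetical illustration in the spirit of such simulations, not one of the study's actual scenarios: both drugs are equally effective, but trial B enrolls sicker patients, so the naive cross-trial difference is badly biased while the MAIC-style reweighted estimate comes out near zero.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 5000

# Hypothetical setup: the true difference between the drugs is zero, but
# trial B enrolls higher-risk patients, confounding a naive comparison.
x_a = rng.normal(0.0, 1.0, n)            # risk score, trial A (IPD available)
x_b = rng.normal(1.0, 1.0, n)            # risk score, trial B (summaries only)
y_a = 2.0 * x_a + rng.normal(size=n)     # outcome under drug A
y_b = 2.0 * x_b + rng.normal(size=n)     # outcome under drug B

naive = y_a.mean() - y_b.mean()          # ~ -2.0: population effect, not drug effect

# MAIC-style step: reweight trial A so its mean risk score matches
# trial B's published mean; only summary statistics of B are used.
xc = x_a - x_b.mean()
beta = minimize_scalar(lambda b: np.exp(b * xc).sum()).x
w = np.exp(beta * xc)
maic = np.average(y_a, weights=w) - y_b.mean()   # ~ 0.0 after balancing

print(f"naive: {naive:+.2f}   MAIC-adjusted: {maic:+.2f}")
```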
"This work can help decision-makers understand when MAIC results are reliable and when there are challenges in the data that would produce unreliable results," says Cheng. "This could, in turn, enable better decision-making and ultimately inform smarter allocation of resources to drugs that work best."