April 16, 2011
Quantitative data should not be disclosed to all patients, article argues, and a commentary calls for research into how best to present information about risk
Giving patients data about the risks and benefits of a medical intervention is not always helpful and may even lead them to irrational decisions, according to an article in the Hastings Center Report. That finding calls into question whether it is essential to disclose quantitative data to patients to help them make informed decisions. An accompanying commentary calls for experimental evidence to determine the best way to provide information to patients.
The analyses come at a time when many patient advocates and others are embracing the "quantitative imperative" - the obligation to disclose risk-related data to patients to ensure informed consent and promote shared decision-making. Because patients often do not get information about all of their options by talking to their health care providers, decision aids - pamphlets, videos, and computer programs - increasingly are being used to convey such data more comprehensively. There are more than 500 decision aids and more than 55 randomized controlled trials studying their impact. A recent review concluded that decision aids increase patient knowledge and the feeling of being informed while decreasing indecision and passivity.
However, disclosure of quantitative data can backfire. "There are important problems with it stemming from the way people understand and respond to numerical and graphical information," writes Peter H. Schwartz, a faculty investigator at the Indiana University Center for Bioethics. An accompanying commentary by Peter Ubel, professor of marketing and public policy at Duke University, agrees with Schwartz's analysis of the numeracy problem and argues that there are ways to present risk that overcome some of the problems.
One problem is that more than half of adults have significant difficulty understanding or applying probabilistic and mathematical concepts. National surveys suggest that at least 22 percent of adults have only the most basic quantitative skills, such as counting, while another 33 percent fare only slightly better and are able to do simple arithmetic.
But even people who have a good grasp of probability and math are prone to biases in how they interpret data on risks, Schwartz says, citing 30 years of psychology literature. They may give exaggerated importance to small risks or, conversely, exhibit "optimism bias" and exaggerate the chance that they will be in the "lucky" group. "Which of these biases come into play in a given situation ... depends on the individual's psychology and the way the information is presented," he writes. Either way, the bias can lead patients to make decisions about medical interventions that are not based on reason or facts.
Schwartz cites as an example the interpretation of the new mammography guidelines announced by the United States Preventive Services Task Force in 2009, which proposed that screening start at age 50 instead of 40 and be done every two years instead of annually. While mammograms for women ages 40 to 49 slightly reduce mortality from breast cancer, they also result in significantly more false positives and overtreatment. Thus a woman's decision to get mammograms while she is in this age range involves important trade-offs, and the choice depends on the individual's beliefs and values.
Schwartz argues that clinicians should not always disclose all available quantitative data to all patients. "While the data should always be available to patients who want it, the question is, how to offer it and in what form," he writes. "These issues suggest that much more empirical research and ethical analysis are required about the use of quantitative information in decision-making."
"Questions about how and when to disclose quantitative information will become ever more pressing as advances in epidemiology and genetics provide increasingly precise ways to characterize the risks that patients face and the possible impacts of preventive treatments."
In his commentary, Peter Ubel reports that his studies show that whether decision aids improve patient decisions depends on how they are constructed. For example, pictographs proved better at conveying risks than narrative or other kinds of graphic information. He argues for research into how best to present information about risk to patients so as to aid decision-making. "Those of us who care about patient autonomy and informed consent should work to find out what kind and manner of information will be most useful and least biasing to the largest number of people," he concludes.
Source: The Hastings Center