When people process information, they develop unconscious strategies – or biases – that simplify their decisions

Research into how people make decisions while under pressure could help the U.S. military improve training for its leaders and lead to better decision-support systems. Studies have shown that when people process information, they develop unconscious strategies – or biases – that simplify their decisions.

Now, research at the Georgia Tech Research Institute (GTRI) is revealing how these biases affect people when they're dealing with lots of information – and little time to form conclusions.

"The immediate application for this research is to develop training programs to improve decision-making," said Dennis Folds, a principal research scientist in GTRI's Electronic Systems Laboratory. "Yet our findings could also help design new types of decision-support systems." The research indicated that nine different kinds of biases can lead to errors in judgment when people are dealing with a lot of information. Meanwhile, the error rate was not as high as researchers expected for individuals under time pressure. Also, the study revealed that subjects who were trained to spot conditions that lead to decision-making biases were better at detecting "false-alarm opportunities."

The Army Research Institute funded Folds to conduct a series of experiments that combined a high volume of data with time pressure. The experiments simulated the changing reality of military decision-makers. Commanders today communicate more directly with field personnel, and the amount and variety of information at their disposal have escalated, with sources ranging from real-time sensors and voice communications to archived data. The result can be ambiguous, disjointed information rather than integrated, organized reports.

"This puts far greater pressure on leaders, who must make faster decisions while sifting through more data," Folds noted. In his experiments, he considered previous research on seven specific biases that affect individuals who must wrestle with large amounts of data:

  • Absence of evidence. Missing but relevant information is not properly considered.
  • Availability. Recent events or well-known conjecture provide convenient explanations.
  • Oversensitivity to consistency. People give more weight to multiple reports of information, even if the data came from the same source.
  • Persistence of discredited information. Information once deemed relevant continues to influence even after it has been discredited.
  • Randomness. People perceive a causal relationship when two or more events share some similarity, although the events aren't related.
  • Sample size. Evidence from small samples is seen as having the same significance as larger samples.
  • Vividness. When people perceive information directly, it has greater impact than information they receive secondhand – even if the secondhand information has more substance.

To test the effects of these biases, Folds had experiment subjects view an inbox on a computer screen containing a variety of text messages, maps, photographs and video and audio recordings. Subjects (the majority being Georgia Tech ROTC students) were instructed to report certain military situations, such as incidents of sniper fire or acts of suspected sabotage. They were not to report other events, such as normal accidents in an urban area unrelated to enemy activity.

To decide whether or not an event should be reported, subjects reviewed a series of messages that contained bona fide evidence as well as information created to trigger the biases that cause poor decisions. In each trial, subjects were allowed enough time to spend an average of 20 seconds per element of data, plus one additional minute for reporting; they were also asked to attach information that supported their decision.
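For concreteness, the short Python sketch below works out that time budget. The 20-seconds-per-element average and the extra reporting minute come from the description above; the inbox sizes are made-up examples rather than figures from the study.

# Hypothetical sketch of the per-trial time budget described above.
SECONDS_PER_ELEMENT = 20   # average review time allowed per data element (from the article)
REPORTING_SECONDS = 60     # one additional minute for filing the report (from the article)

def trial_time_budget(num_elements: int) -> int:
    """Total seconds allotted for a trial with the given number of inbox elements."""
    return num_elements * SECONDS_PER_ELEMENT + REPORTING_SECONDS

for n in (10, 25, 40):     # illustrative inbox sizes, not figures from the study
    print(f"{n:>2} elements -> {trial_time_budget(n)} s ({trial_time_budget(n) / 60:.1f} min)")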

In the first experiment, all seven biases appeared, with the greatest number of errors caused by vividness and oversensitivity to consistency. In addition, Folds discovered two new biases that can hinder the quality of rapid decisions:

  • Superficial similarity. Evidence is considered relevant because of some superficial attribute, such as a key word in a message title. For example, a hostage situation might have been reported earlier, and then another message shows up in the inbox with the word "hostage" in its header, although the message's actual content has nothing to do with hostages.
  • Sensationalist appeal. Items containing exaggerated claims or threats influence a decision-maker even when there is no substance to the content.

Folds said he was surprised at how well subjects performed the task while under pressure. Although he expected an accuracy rate of about 50 percent, subjects correctly reported 70 percent of incidents.

In a second experiment, researchers divided subjects into two groups, using one as a control group while training the other group how to spot conditions that spark decision-making biases. Subjects who received training were able to detect about twice as many "false-alarm opportunities" as the control group.

The biggest difference between the two groups involved "persistence of discredited information" and "small sample" biases. Forty-eight percent of trained subjects were able to recognize when a "persistence" bias existed compared to 18 percent of the control group. Fifty percent of trained subjects caught the "sample-size" traps versus 11 percent of the control group. Although training helped participants recognize when traps existed, it didn't help them identify the specific bias. "When subjects were under pressure to make decisions rapidly, the distinctiveness of the categories fell apart," Folds explained. "That's significant, because it helps us tailor training efforts."

The experiments also revealed what kind of information is meaningful to decision-makers, Folds noted. Software designed especially for the trials tracked when subjects opened a document for the first time and when they returned for a second or third look. The amount of time subjects spent reviewing data – along with the data they attached to reports – showed a decided preference for text messages over other formats. Folds' team is conducting more research: two new sets of trials are examining how decision-making errors occur in groups, while another experiment is trying to pinpoint how rapidly individuals can make good decisions.
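The article does not describe how the trial software was built, but a minimal sketch of that kind of instrumentation might log first opens, count repeat views and accumulate review time per item, along the lines of the hypothetical Python below. The class name, item identifiers and structure are illustrative assumptions, not the actual GTRI software.

# Hypothetical sketch of the kind of instrumentation described above:
# log when a subject first opens an inbox item, count repeat views,
# and accumulate review (dwell) time per item.
import time
from collections import defaultdict

class ViewLogger:
    def __init__(self):
        self.first_open = {}                      # item_id -> timestamp of first open
        self.open_counts = defaultdict(int)       # item_id -> number of times opened
        self.review_seconds = defaultdict(float)  # item_id -> total dwell time
        self._opened_at = {}                      # item_id -> timestamp of current open

    def open_item(self, item_id: str) -> None:
        now = time.time()
        if item_id not in self.first_open:
            self.first_open[item_id] = now
        self.open_counts[item_id] += 1
        self._opened_at[item_id] = now

    def close_item(self, item_id: str) -> None:
        started = self._opened_at.pop(item_id, None)
        if started is not None:
            self.review_seconds[item_id] += time.time() - started

# Example usage with a made-up item identifier:
log = ViewLogger()
log.open_item("msg-017")
time.sleep(0.1)           # stand-in for the subject reading the message
log.close_item("msg-017")
log.open_item("msg-017")  # second look
log.close_item("msg-017")
print(log.open_counts["msg-017"], round(log.review_seconds["msg-017"], 2))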
