Study suggests guidelines to enhance content of educational videos on chronic conditions

Many people with chronic health conditions turn to social media, including YouTube videos, to learn more about managing their diagnoses. But these videos differ in how well they communicate information and hold viewers' attention. A better understanding of how patients engage with medical information is important for improving the use of health care resources and the quality of care. A new study sought to understand how people engage with health information in YouTube videos about diabetes. In the study, researchers developed an approach to identify videos with differing levels of medical information and examined viewers' engagement with those videos.

The study was conducted by researchers at Carnegie Mellon University, the University of Utah, the University of Arizona, and Michigan State University. It appears in MIS Quarterly.

"Our study helps health care practitioners and policymakers understand how users engage with medical information in video format. It also contributes to enhancing current public health practices by promoting the development of guidelines for the content of educational videos that aim to help people cope with chronic conditions."

Rema Padman, professor of management science and healthcare informatics at Carnegie Mellon University's Heinz College, coauthor of the study

Few studies have looked at how videos help patients retrieve medical information to manage chronic conditions. In this study, researchers examined how users engaged with medical information in YouTube videos on diabetes; they chose diabetes because it is among the most prevalent chronic diseases in the United States. The researchers collected 19,873 unique YouTube videos using more than 200 search terms. The videos were produced by individual users as well as health care organizations, such as the Mayo Clinic, the American Diabetes Association, and the American Nutrition Association.
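The paper does not spell out the authors' collection pipeline, but a search-and-deduplicate step like the following sketch, using the public YouTube Data API v3, illustrates how a set of unique videos can be gathered across many search terms. API_KEY and SEARCH_TERMS are placeholders, not the study's actual inputs:

```python
from googleapiclient.discovery import build  # pip install google-api-python-client

API_KEY = "YOUR_API_KEY"  # placeholder
SEARCH_TERMS = ["diabetes diet", "type 2 diabetes symptoms", "insulin basics"]

youtube = build("youtube", "v3", developerKey=API_KEY)
unique_ids = set()

for term in SEARCH_TERMS:
    # Page through search results for each term, keeping only unique video IDs.
    request = youtube.search().list(q=term, part="id", type="video", maxResults=50)
    while request is not None:
        response = request.execute()
        unique_ids.update(
            item["id"]["videoId"] for item in response.get("items", [])
        )
        request = youtube.search().list_next(request, response)

print(f"collected {len(unique_ids)} unique video IDs")
```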

The researchers used a deep learning method (a machine learning technique that learns patterns from large volumes of data, as used in applications such as driverless cars) to identify medical terms in the videos, then classified the videos by how much medical information they contained. They also examined the different ways the videos presented information, including through text and images. "Applications of new deep learning methods perform better than conventional machine learning methods," explains Xiao Liu, assistant professor of operations and information systems at the University of Utah, who coauthored the study. "They also contribute to the robustness and rigor of our research."
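The study's deep learning pipeline is far more sophisticated, but the basic idea of grading a video by its medical-term content can be shown in a toy Python sketch. The mini-lexicon and thresholds below are invented for illustration only:

```python
import re

# Hypothetical mini-lexicon of diabetes-related medical terms; the study
# derived its medical vocabulary with deep learning over a far larger corpus.
MEDICAL_TERMS = {
    "insulin", "glucose", "a1c", "hyperglycemia", "hypoglycemia",
    "metformin", "neuropathy", "retinopathy", "glycemic",
}

def medical_info_level(transcript: str) -> str:
    """Bucket a video transcript into low/medium/high medical information
    based on the density of recognized medical terms."""
    words = re.findall(r"[a-z0-9]+", transcript.lower())
    if not words:
        return "low"
    density = sum(w in MEDICAL_TERMS for w in words) / len(words)
    if density < 0.01:          # illustrative cutoffs, not the study's
        return "low"
    return "medium" if density < 0.05 else "high"

print(medical_info_level(
    "Check your glucose and A1C regularly; insulin timing matters."
))  # prints "high" for this short, term-dense snippet
```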

Next, the researchers analyzed the video data to identify how viewers' engagement varied with the medical information in the videos. Rather than focusing on how each individual viewer engaged, the researchers examined how viewers collectively paid attention to the videos, according to Anjana Susarla, associate professor of accounting and information systems at Michigan State University, another coauthor. They found three patterns: some viewers were not engaged, some engaged selectively, and some engaged in a sustained manner, with attention driving engagement in the latter two cases.
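As an illustration only, collective attention patterns like these can be recovered by clustering per-video audience-retention curves. The data below are synthetic and the three archetypes are assumptions for the sketch, not the study's model:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)

# Three hypothetical archetypes of collective attention over a video's length:
# rapid drop-off (not engaged), spiky (selective), and flat (sustained).
drop_off  = np.exp(-5.0 * t)
selective = 0.5 + 0.25 * np.cos(6.0 * np.pi * t)
sustained = np.full_like(t, 0.8)

# 40 synthetic videos per archetype, each row one video's retention curve.
curves = np.vstack([
    archetype + 0.05 * rng.standard_normal((40, t.size))
    for archetype in (drop_off, selective, sustained)
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(curves)
print(np.bincount(labels))  # roughly 40 videos per engagement pattern
```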

The study found that viewers who watched YouTube videos with limited medical information (e.g., videos with unsubstantiated claims or many ads) typically did not engage with them, indicating that some medical content is needed to trigger viewers' engagement. At the same time, viewers who watched videos dense with medical terms struggled to maintain attention. Given the low levels of health literacy in the United States, the authors suggest this could be because viewers were intimidated by the information or did not understand the medical terminology used. Together, the findings suggest that health care professionals need a nuanced understanding of what drives patients' engagement with health information.

Explains coauthor Bin Zhang, assistant professor of management information systems at the University of Arizona: "Our results point to a health-literacy divide among online users, since more sophisticated users are more likely to use medical terms in their searches for videos and are likely to be engaged with videos that include relevant content."

Based on their findings, the researchers suggest developing specific guidelines for the individuals and organizations that create YouTube video content, so they can produce engaging, relevant materials for patients. The researchers recommend automated video retrieval, a method that identifies and labels videos with low versus high levels of medical content to accommodate patients' varying levels of medical understanding and engagement.
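As a rough sketch of how such labels could support retrieval, the function below filters a catalog to a viewer's comfort level with medical detail. The Video type, level names, and ordering are hypothetical, building on the toy classifier sketched earlier:

```python
from dataclasses import dataclass

@dataclass
class Video:
    video_id: str
    title: str
    info_level: str  # "low" / "medium" / "high", e.g. from a classifier

def retrieve(videos: list[Video], max_level: str) -> list[Video]:
    """Return videos at or below a viewer's chosen level of medical detail."""
    order = {"low": 0, "medium": 1, "high": 2}
    return [v for v in videos if order[v.info_level] <= order[max_level]]

catalog = [
    Video("a1", "Living well with diabetes", "low"),
    Video("b2", "Understanding your A1C", "medium"),
    Video("c3", "Insulin pharmacokinetics explained", "high"),
]
print([v.title for v in retrieve(catalog, "medium")])
# -> ['Living well with diabetes', 'Understanding your A1C']
```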

"As organizations produce health-related educational materials for patients, they should think not only about what medical information to deliver, but also how to meet the interest, information needs, and health-literacy levels of the consumers," suggests Padman. "Creators of these materials should use technology and online solutions to reach patients with complex chronic conditions with personalized, contextualized, and just-in-time content."
