While not a new problem, the spread of misinformation has exploded on almost every social media platform throughout the coronavirus disease 2019 (COVID-19) pandemic. Recently, a group of researchers analyzed conversations in public Facebook groups to explore automated misinformation about mask-wearing. The team published their findings as a research letter in JAMA Internal Medicine.
How the study was conducted
Despite the known dangers associated with the spread of misinformation online, particularly when it involves COVID-19, little research has been conducted on the software that allows counterfeit accounts, or ‘bots,’ to amplify misinformation. Typically, this software allows individuals to generate inaccurate content and share it through the bots.
In an effort to understand how automated software is used to disseminate misinformation, a group of scientists led by Dr. John W. Ayers at the University of California San Diego analyzed conversations in public Facebook groups, as Facebook is regarded as the platform most susceptible to the spread of automated misinformation. More specifically, the researchers were concerned with misinformation spread following the publication of the Danish Study to Assess Face Masks for the Protection Against COVID-19 Infection (DANMASK-19). DANMASK-19 was chosen because, as of March 2021, it was the fifth most shared research article of all time.
In their work, the researchers studied a total of 563 different public Facebook groups in which a link to the DANMASK-19 publication on the Annals of Internal Medicine website had been posted, and downloaded those posts for analysis. The study period was limited to the five days following the publication of DANMASK-19, from November 18 through November 22, 2020, as public interest in the study was greatest during this period.
The determination of whether automated software was used was based on the posting of identical links in close succession. To identify the Facebook groups most and least vulnerable to automated misinformation, the researchers calculated how frequently identical links were posted to pairs of Facebook groups and how much time elapsed between these posts for all links. In the Facebook groups most affected by automation, a mean (SD) of 4.28 (3.02) seconds elapsed between shares of identical links, compared with 4.35 (11.71) hours in the groups least affected by automation.
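The detection logic can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical reconstruction, not the researchers' actual pipeline: it assumes post records of (group ID, link URL, timestamp) and an arbitrary 10-second cutoff for “close succession,” then flags groups that received the same link within that window.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (group_id, link_url, timestamp).
# The record structure and the threshold below are illustrative
# assumptions, not details taken from the published study.
posts = [
    ("group_a", "https://example.org/danmask-19", datetime(2020, 11, 18, 9, 0, 0)),
    ("group_b", "https://example.org/danmask-19", datetime(2020, 11, 18, 9, 0, 4)),
    ("group_c", "https://example.org/danmask-19", datetime(2020, 11, 18, 14, 30, 0)),
]

AUTOMATION_THRESHOLD = timedelta(seconds=10)  # assumed cutoff for "close succession"

# Collect all shares of each identical link.
shares_by_link = defaultdict(list)
for group_id, url, ts in posts:
    shares_by_link[url].append((ts, group_id))

# Flag groups where consecutive shares of the same link fall within the threshold.
likely_automated = set()
for url, shares in shares_by_link.items():
    shares.sort()  # order shares of this link by timestamp
    for (t1, g1), (t2, g2) in zip(shares, shares[1:]):
        if t2 - t1 <= AUTOMATION_THRESHOLD:
            likely_automated.update([g1, g2])

print(likely_automated)  # {'group_a', 'group_b'} (set order may vary)
```

In this toy example, the two shares four seconds apart are flagged as likely automated, while the share hours later is not, mirroring the seconds-versus-hours contrast the researchers reported.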
What type of misinformation was spread?
A total of 712 posts linking the DANMASK-19 study within the aforementioned five-day period were shared to 563 different public Facebook groups. Of these, 279 posts (39%) were made to the groups most affected by automation. Of these 279 posts, 17 had been deleted and were unavailable for further analysis.
The misinformation spread throughout these Facebook groups fell into two main types: posts that misrepresented the primary conclusion of DANMASK-19 and posts that made conspiratorial claims about the study.
One example of a post misrepresenting the primary conclusion of DANMASK-19 claimed that mask-wearing harms the wearer. To this end, automated bots shared captions like “It appears that not only does wearing a mask not provide meaningful protection against SARS-CoV-2, but also leads to an increase in infections with other respiratory viruses.” These misrepresented claims accounted for 19.8% of the posts made to the groups most affected by automation.
Comparatively, some of the conspiratorial claims shared by these bots included ‘Corporate fact-checkers are lying to you! All this to serve their Dystopian #Agenda2030 propaganda!!’ and ‘All controlled by politicians, preferring to impose their behavior in all public spaces…[These] are scientists paid by world elites to shamelessly lie to billions of people!’ Conspiratorial claims accounted for almost 51% of all posts made to the Facebook groups most affected by automation.
Study takeaways
Notably, the current study could not determine which entities were responsible for sharing these automated misinformation posts. Furthermore, only public Facebook groups were studied, and only over a period of a few days.
The researchers recommend that federal legislators pass legislation penalizing those who use automated software to share misinformation. Furthermore, social media companies, particularly Facebook, should enforce their publication rules to prohibit the posting of automated misinformation. In addition to these two recommendations, health experts should publicly address and rebut widely shared false claims through counter-campaigns.