Malicious Bots Undermine Public Health Communication During COVID-19


Bots and Disinformation Surge Amid Pandemic Response in Finland

During the COVID-19 pandemic, Finland's public information landscape faced an unprecedented surge of disinformation. With an urgent demand for accurate health information, official sources worked to provide clarity amid a rapidly changing situation, but social media made that task considerably harder. A recent study found that malicious bots, automated programs designed to mimic human users, became particularly aggressive at critical moments such as vaccine rollouts and major health advisories. The study analyzed 1.7 million COVID-19-related tweets posted on Twitter/X in Finland over three years and found that bots contributed 22% of these messages, double the usual 11% bot presence. Of the bot accounts identified, over a third (36%) engaged in malicious activity, spreading misinformation and fueling vaccine skepticism. Approximately 460,000 of the tweets contained inaccurate information, and a similar number expressed negative views toward vaccination.
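As a rough, illustrative check on these figures, the short Python sketch below turns the reported percentages into approximate tweet counts; the derived counts are rounded assumptions based on the article's numbers, not results taken from the study itself.

```python
# Back-of-the-envelope arithmetic from the figures reported above.
# The dataset size and percentages come from the article; the derived
# counts are illustrative roundings, not values taken from the study.
total_tweets = 1_700_000      # COVID-19 tweets analyzed in Finland over three years
bot_share_pandemic = 0.22     # share of tweets posted by bots during the pandemic
bot_share_baseline = 0.11     # typical bot share reported outside crisis periods
malicious_bot_share = 0.36    # share of identified bot accounts acting maliciously
inaccurate_tweets = 460_000   # tweets reported to contain inaccurate information

bot_tweets = total_tweets * bot_share_pandemic
print(f"Bot-authored tweets: ~{bot_tweets:,.0f}")                                  # ~374,000
print(f"Bot share vs. baseline: {bot_share_pandemic / bot_share_baseline:.1f}x")   # 2.0x
print(f"Malicious share of bot accounts: {malicious_bot_share:.0%}")               # 36%
print(f"Inaccurate share of all tweets: {inaccurate_tweets / total_tweets:.0%}")   # ~27%
```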

Bots Exploit Public Health Channels to Amplify Misinformation

According to the study, malicious bots specifically targeted official communication channels, such as the Twitter account of the Finnish Institute for Health and Welfare (THL), to spread misleading information without attacking THL directly. By tagging other accounts in 94% of their tweets, these bots amplified their reach and adapted their content to evolving COVID-19 circumstances. This tactic let them infiltrate conversations and sow doubt across a range of online discussions, particularly around public health measures and vaccines. The researchers used Botometer 4.0, an advanced bot detection tool, to distinguish benign bots from those actively propagating COVID-19-related falsehoods, underscoring that traditional bot detection methods can overlook the nuances of malicious bot behavior. The study calls for a reassessment of bot detection and containment approaches, emphasizing the ongoing threat these bots pose even beyond the pandemic's peak.
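The study's own detection pipeline is not reproduced here, but as a minimal sketch of how accounts can be scored with Botometer v4, the example below uses the botometer-python client; the API credentials, account handles, and the 0.7 probability cutoff are placeholders and assumptions, not details taken from the Finnish study.

```python
# Minimal sketch: scoring accounts with Botometer v4 via the botometer-python
# client. Keys, handles, and the 0.7 cutoff are placeholders/assumptions,
# not parameters from the study described above.
import botometer

rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

accounts = ["@example_account_1", "@example_account_2"]  # hypothetical handles

for screen_name, result in bom.check_accounts_in(accounts):
    # The "universal" complete automation probability (CAP) is
    # language-independent, which matters for Finnish-language content.
    cap = result["cap"]["universal"]
    label = "likely bot" if cap >= 0.7 else "likely human"  # assumed cutoff
    print(f"{screen_name}: CAP={cap:.2f} -> {label}")
```

This sketch covers only the account-scoring step; the study described above goes further, separating benign bots from those actively spreading COVID-19 falsehoods.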

Long-term Implications and Unique Non-English Research Insights

The study further suggests that malicious bots could have lasting effects on public trust in health institutions and on the overall effectiveness of health communication. Lead researcher Tuukka Tammi from THL emphasized the need for enhanced monitoring, public education on bot activity, and collaboration with social media platforms to curb the spread of disinformation, advocating preemptive measures such as improved detection tools and strategic responses from public health agencies. The research also stands out for its focus on a non-English language, examining bot activity in the Finnish social media landscape. This setting offers insights into how language, geography, and population diversity shape public health communication and disinformation tactics. Professor Nitin Sawhney from Aalto University noted the study's significance in highlighting the dual role of bots, some of which support health efforts while others endanger public trust. The findings pave the way for future research and strategies aimed at mitigating misinformation, which remains a critical challenge in the digital era.
