COVID-19 misinformation is a public health hazard — we need to start treating it as such
Across the nation, discussions about COVID-19 vaccines are unfolding everywhere: in doctors’ offices, at the barber shop, in churches and — of course — on social media. Although vaccination significantly reduces the risk of severe illness or death, misinformation (twisted facts) and disinformation (deliberate lies) have become almost ubiquitous in conversations about the vaccines, as millions of people are exposed to messages created to confuse and sow distrust. The tactics are working: More than half of Americans say that they either believe, or aren’t sure whether to believe, some of these myths and lies.
It’s a devastating trend that causes real harm. People who believe disinformation about vaccines are less likely to get vaccinated. So, although mis- and disinformation are not the only reason for low vaccine confidence, they clearly play a key role in the stalled U.S. vaccination rollout. And with an abundance of shots available but still only 49 percent of the population fully vaccinated, the conditions were ripe for the delta variant to take hold. We would not be seeing this dramatic rise in cases and hospitalizations if more people were vaccinated. And if there were less misinformation, more people would have received their shots.
Moreover, the onslaught of vaccine misinformation is no accident. It is the result of sophisticated campaigns designed to reach into people’s minds and influence people’s behavior.
Yes, anti-vaccine groups have been around for decades, but today’s vaccine misinformation and disinformation campaigns are something else entirely. They come from a diverse set of bad-faith actors ranging from anti-vaccine groups, to foreign governments, to elected officials, to hucksters of natural remedies. They are organized, intentional and well-funded, and they aptly apply the latest science on which messages and images speak to the identity, values and concerns of specific groups. They are designed to exploit our fears — about infertility or rushed science or adverse reactions to medicines — and to purposefully target those most at risk from this virus, including pregnant women; Black Americans, who have well-founded reasons to mistrust the medical system; parents who want the best for their children; and low-wage workers who worry about getting sick from a vaccine.
It is not that certain groups are more susceptible to misinformation. It’s that they are intentionally targeted by lies. Lies that can rupture their families and cost them their lives.
So, what is the fix? As with any public health challenge, the first step is to label and measure it, and then to develop informed responses.
The problem is that researchers currently have very little visibility into how exactly platforms program their algorithms to push content in front of people, and who sees which posts. When Surgeon General Vivek Murthy warned of the health hazards of misinformation last week, some were quick to question just how widespread the problem truly was. Because platforms such as Facebook, YouTube, Instagram, TikTok and others don’t share such data, we don’t fully know. Scientists and public health experts are largely unable to assess the health hazards on a large scale.
The few examples we do have both demonstrate the prevalence of false, toxic information and show that large audiences have been exposed. Small insights into social media posting patterns reveal that “bots” (automated social media profiles) are omnipresent — particularly when it comes to sharing bad takes on dubious facts. More transparency from social media platforms would allow us to identify the scale of weaponized vaccine disinformation and its impact on people and their vaccine decisions. We need to understand not just how these campaigns are designed, but also who shares them, how they are discussed, and how they infiltrate public consciousness.
In parallel, we have to design and test interventions — not just counter campaigns, but actual programs that help us all resist the lure of lies.
On an individual level, education on the tactics of disinformation has been shown to be effective in increasing people’s ability to recognize — and stop sharing — false and misleading posts and images. But we also need to empower people to counteract misinformation themselves. We are asked almost daily, “How do I talk to my patient, my mother, my friend?” The American public deserves guidance, delivered in culturally competent ways. A program for suburban moms will be different from a program for urban youth. Both are necessary.
On a community basis, collaborations with “trusted messengers” who can answer people’s questions are one important tool for breaking through the barrage of misleading information. Our work and others’ shows that community-led communication efforts are critical to upending mistrust and disarming disinformation.
Finally, structural changes are needed. When technology companies have limited the frequency of automated postings, it has had real effects on the spread of lies about a variety of topics, including vaccines. Reworking algorithms so they do not amplify falsehoods, removing disinformation and labeling false content and making it easier for people to report misinformation are also effective strategies platforms should implement to make online spaces safer.
Social media is not all bad. We know that — when used well — it can increase resilience, wellbeing, connectedness, health knowledge and even healthy behavior. We can utilize and foster these positive uses while we investigate, combat and prevent the more harmful uses. In the end, this is not about the technology, but how we design and apply it in our community spaces.
Looking at the rapid drop in the rate of vaccinations across the country — and the corresponding increase in infections and harm in unvaccinated communities — we have a choice to make. We can continue to pretend that vaccine misinformation is inevitable or inconsequential — or we can start acknowledging that it plays a key role in vaccine confidence in this pandemic, and start treating it as the preventable public health hazard that it is.
Megan Ranney, MD, MPH, is a practicing emergency physician, director of the Brown-Lifespan Center for Digital Health, and associate dean at Brown University School of Public Health. Follow her on Twitter: @meganranney
Stefanie Friedhoff is a veteran journalist, professor of the practice in health services, policy and practice as well as strategy director at Brown University School of Public Health. Follow her on Twitter: @Stef_Friedhoff