Health workers warn social media misinformation is threatening lives
Over 100 doctors and nurses working on the front lines of the coronavirus pandemic sent a letter to the largest social media platforms this week calling on them to tackle medical misinformation more aggressively.
The letter, sent to the CEOs of Facebook, Twitter, Google and YouTube — and republished as an ad in The New York Times Thursday — warns that misleading information about the disease is threatening lives.
“Stories claiming cocaine is a cure, or that COVID-19 was developed as a biological weapon by China or the US, have spread faster than the virus itself,” the health practitioners wrote in collaboration with the nonprofit activist group Avaaz.
“Tech companies have tried to act, taking down certain content when it is flagged, and allowing the World Health Organization to run free ads. But their efforts are far from enough,” the letter says.
Conspiracy theories and unfounded claims about the virus, its origins and ways to combat it have surged on social media in step with the disease itself, causing what WHO has branded an “infodemic.”
Social media platforms have taken several steps to limit the spread of harmful content about the virus while also elevating authoritative information.
The doctors and nurses said the companies should go further and retroactively notify users when they have been exposed to health misinformation.
Facebook rolled out a similar feature last month, but it does not specify which posts contained the misinformation.
“Even as nurses, doctors, public officials and all of us work to battle COVID-19, misinformation kills,” Abdul El-Sayed, a Michigan epidemiologist and former Democratic gubernatorial candidate, said in a statement.
“We need to put an end to medical misinformation, and that starts with social media platforms doing the obvious, responsible thing and notifying users with corrections when they have been misinformed. It can save lives,” he said.
The letter also calls for the social media giants to make changes to their algorithms to remove harmful content from recommendations.