TikTok, Facebook failed to remove ads spreading election misinformation: report
TikTok and Facebook failed to remove ads spreading election misinformation researchers submitted to test leading platforms’ election-related policies, according to a report released Friday.
TikTok, which prohibits all political ads under its policy, fared the worst, approving all but two of the 20 ads tested by the research team, according to the report released by Global Witness and the Cyber Security for Democracy team at NYU.
Facebook blocked 13 of the 20 ads tested — 10 in English and 10 in Spanish — and approved a total of seven when they were posted from the U.S. When the ads were posted from accounts made in the U.K., two days before the U.S. test, just five were approved, according to the report.
When tested on YouTube, all of the ads were rejected and the dummy YouTube channel created to host the ads was banned, according to the report.
“So much of the public conversation about elections happens now on Facebook, YouTube, and TikTok. Disinformation has a major impact on our elections, core to our democratic system,” Laura Edelson, co-director of the Cyber Security for Democracy team, said in a statement.
“YouTube’s performance in our experiment demonstrates that detecting damaging election disinformation isn’t impossible. But all the platforms we studied should have gotten an ‘A’ on this assignment. We call on Facebook and TikTok to do better: stop bad information about elections before it gets to voters,” Edelson added.
The ads posted by dummy accounts in the investigation on U.S. election misinformation enforcement spread false claims including false information about where and when to vote, as well as about methods of voting. The same ads were submitted to all three platforms.
By submitting scheduled ads, the researchers were able to remove them before they went live if approved.
In response, a TikTok spokesperson pointed to the platform’s ban on political advertising but did not directly address the report’s specific findings.
“TikTok is a place for authentic and entertaining content which is why we prohibit and remove election misinformation and paid political advertising from our platform. We value feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies,” the TikTok spokesperson said in a statement.
A Meta spokesperson said the report is based on a “small sample of ads” and is “not representative given the number of political ads we review daily across the world.”
“Our ads review process has several layers of analysis and detection, both before and after an ad goes live. We invest significant resources to protect elections from our industry-leading transparency efforts to our enforcement of strict protocols on ads about social issues, elections, or politics — and we will continue to do so,” the spokesperson said.
Although YouTube did not approve the ads tested for this report, a similar experiment Global Witness conducted in August found that YouTube approved all of the submitted ads spreading misinformation about the Brazilian election. The report urges YouTube to “ensure that its efforts to prevent election disinformation are rolled out globally,” not just in the U.S.
“We know how important it is to protect our users from this type of abuse – particularly ahead of major elections like those in the United States and Brazil – and we continue to invest in and improve our enforcement systems to better detect and remove this content,” Michael Aciman, a spokesperson for YouTube parent company Google, said in a statement.