Russia’s war shines a light on social media’s inconsistencies
Russia’s invasion of Ukraine has sparked an information war that has turned social media into a key battlefield, placing a spotlight on inconsistencies in how tech platforms respond to life-or-death crises.
The handling of posts about the war has shown how content moderation policies can turn on a dime during a crisis, and has forced social media companies to take divisive positions on what speech is allowable in wartime.
Even before Russian troops moved across eastern Ukraine in late February, Russian President Vladimir Putin’s government was flooding channels with information casting Ukraine as an aggressor and peddling the narrative that the country needed to be “denazified.”
Ukraine has also taken advantage of digital communications to build up broad support.
“Social media has been absolutely instrumental for the Ukrainian government,” Emerson Brooking, resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab, told The Hill. “Their ability to draw international attention and galvanize Western action in those first days was extraordinary, and I think it contributed significantly to their ability to resist now.”
Faced with the prospect of both sides in the war seeking to control online narratives, social media companies have sprung into action.
Almost all Western platforms have taken steps to reduce the reach of Russian state-funded media such as RT and Sputnik. Those decisions were unsurprising, given both widespread calls from world leaders to deplatform Russian outlets and the platforms’ histories of labeling state media.
Other content moderation decisions have had less precedent behind them.
The decision by Meta — the newly formed parent company of Facebook, Instagram and WhatsApp — to allow some calls for violence against Russian troops has stood out.
The announcement, first reported by Reuters, drew harsh criticism. Civil rights groups chided Meta for allowing speech that could exacerbate the conflict while pointing to the platform’s role in inciting violence against Rohingya Muslims in Myanmar.
Some experts, including Brooking, have defended allowing calls for violence against Russian invaders.
“The actual Meta decision was acquiescence to reality,” he explained.
“My sense was that the policy guidance we saw was essentially writing down what has been a standard policy,” Brooking added, pointing to how Facebook handled posts during the 2020 war between Armenia and Azerbaijan.
However, the way Meta publicly touted the policy tweak and subsequently narrowed allowable posts complicated the perception of the decision.
“By announcing temporary exemptions you’re just going to raise yet more questions,” Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, told The Hill. “Maybe they should’ve tried to muddle through … and discriminated in house between plausible credible threats against a particular person and you know, ‘death to Putin.’ ”
Meta has also drawn scrutiny for allowing praise of the Ukrainian neo-Nazi military unit the Azov Battalion, discussion of which was previously banned under Facebook’s Dangerous Individuals and Organizations policy.
“For the time being, we are making a narrow exception for praise of the Azov Regiment strictly in the context of defending Ukraine, or in their role as part of the Ukraine National Guard,” a Meta spokesperson said. “But we are continuing to ban all hate speech, hate symbolism, praise of violence, generic praise, support, or representation of the Azov Regiment, and any other content that violates our community standards.”
Twitter has also made content moderation calls tied to the war that have raised eyebrows.
The platform determined that a tweet from Sen. Lindsey Graham (R-S.C.) encouraging someone in Russia to “take out” Putin does not run afoul of its rules at this time, according to a Twitter spokesperson.
The war has even compelled TikTok, which has been loath to publicly moderate content at all, to block users in Russia from posting videos in response to the country’s “fake news” law.
Russia’s invasion of Ukraine, according to Barrett, has left social media companies “struggling to keep up with the situation and at times showing a great deal of uncertainty and/or clumsiness.”
The willingness of platforms to make those calls shows just how pliable their rules can be in some crisis situations while remaining relatively inflexible in others.
Critics have argued there is a double standard in how willing some tech companies have been to take action in this conflict while allowing, for example, misinformation about the COVID-19 pandemic to spread on their sites with minimal intervention.
“The last few weeks have set, I think, dramatic new precedents for technology policy and content moderation policy,” Brooking said. “It’s unclear yet where things are going to land … my hope is that so long as we agree that it’s good that the precedent being set here might be applied in the future to shield users in other contexts.”