Twitter is pushing back on a new report from Motherboard indicating the company hesitates to crack down on white supremacist content because Republican lawmakers could be swept up in such a purge.
Motherboard reported earlier Thursday that a Twitter employee at an all-hands meeting last month said the company does not use artificial intelligence (AI) to aggressively take down neo-Nazi content because it could unintentionally remove Republican lawmakers and their supporters from the platform.
Twitter, in a statement to The Hill, said the story has "absolutely no basis in fact" and disputed the "characterization of the exchange at the meeting of March 22."
"There are no simple algorithms that find all abusive content on the Internet and we certainly wouldn't avoid turning them on for political reasons," the Twitter spokesperson said.
The report comes as Twitter users concerned about the spread of white supremacist content call on the company to "ban Nazis" and take stronger action on the spread of misinformation. Last year, the company came under fire for its decision not to ban Alex Jones, saying the conspiracy theorist and InfoWars founder did not violate its rules. It ultimately banned Jones and his affiliated accounts in September.
Critics for years have pointed out that Twitter — along with other social media platforms — has been able to harness the power of AI to proactively remove content from terrorist groups like ISIS and al Qaeda, but it has not employed the same strategy when it comes to neo-Nazis and other white supremacist content.
A Twitter spokesperson said automated tools assisted with 91 percent of the company's terrorism-related suspensions across six months in 2018.
Facebook, Microsoft, Twitter and YouTube formed the Global Internet Forum to Counter Terrorism (GIFCT) in 2017, an initiative aimed at curbing the spread of Islamic terrorist content online.
That coordinated effort resulted in a large reduction of Islamic extremist content, with companies reporting a high success rate in deleting content often before users even see it.
But Twitter, Facebook and Google-owned YouTube have not applied the same aggressive takedown strategy to white supremacist or neo-Nazi content.
And, according to Motherboard's report, the Twitter employee said the company believes a total eradication of white supremacists would take down some Republican lawmakers and their supporters.
Extremism researcher JM Berger told Motherboard that he has found “a very large number of white nationalists identify themselves as avid Trump supporters.”
“Cracking down on white nationalists will therefore involve removing a lot of people who identify to a greater or lesser extent as Trump supporters, and some people in Trump circles and pro-Trump media will certainly seize on this to complain they are being persecuted,” Berger said.
While each of the platforms says violent or hateful content violates its rules, they have not engaged in a coordinated campaign against white extremists in the same way they did against ISIS and al Qaeda.
A growing chorus of conservatives, including President Trump, claim that the world's largest social media companies, including Twitter, have an anti-conservative bias. The companies say there is little evidence for this claim. The president met this week with Twitter CEO Jack Dorsey just hours after lashing out at the company in his own tweets, calling it "very discriminatory."
"What if, and stay with me here, they sometimes used human beings to make human judgements? https://t.co/NSJr9t7Oh8" — Brian Schatz (@brianschatz), April 25, 2019