Twitter researching how to deal with white supremacists on the platform: report

Officials at Twitter are reportedly conducting in-house research on white nationalists using its platform to determine whether to ban them entirely.

Vice News reported Wednesday that external experts are working with the company to help Twitter decide whether to ban users who express racist views from the platform entirely or instead to counter hateful speech with opposing views.

The site’s head of trust and safety, legal and public policy told Vice that the company is examining whether presenting “counter-speech” is more effective as a tool of deradicalization than simply removing offending users.

“Counter-speech and conversation are a force for good, and they can act as a basis for de-radicalization, and we’ve seen that happen on other platforms, anecdotally,” Vijaya Gadde told the news outlet.

“We’re working with [external researchers] specifically on white nationalism and white supremacy and radicalization online and understanding the drivers of those things; what role can a platform like Twitter play in either making that worse or making that better?” Gadde continued.

“Is it the right approach to deplatform these individuals?” she added. “Is the right approach to try and engage with these individuals? How should we be thinking about this? What actually works?”

A Twitter spokesperson told The Hill in an email that the work with external researchers was not new and would not prevent the company from taking daily steps to curb hateful conduct.

“A research project like this isn’t new; our work with academics is ongoing and something that is a critical part of building effective policies,” the spokesperson said. “But conducting research does not prevent us from taking steps to improve our service. We’ve made great strides in creating stronger policies against hateful conduct, violent extremist groups and violent threats on Twitter.”

“By working together on these challenges, our policy, enforcement and product teams have been able to make important improvements like our shift to proactive enforcement, which now results in around 40% of Tweets actioned for abuse flagged to our team without people on Twitter having to file reports,” the spokesperson added.

“We will always have more to do, and collaboration with outside researchers is critical to helping us effectively address issues like radicalization in all its forms.”

Twitter’s consideration of different responses to white nationalist or white supremacist content comes as rival Facebook announced a total ban on such posts earlier this year in an effort to fight hate groups using the platform.

“Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism,” the company said in March.

Vice reported earlier this year that the company has struggled to use artificial intelligence to automatically screen the platform for white nationalist content because the AI software often swept up posts from Republican politicians as well.

A spokesperson for the company strongly denied those claims, writing that they had “no basis in fact.”

“The information cited from the ‘sources’ in this story has absolutely no basis in fact,” said the Twitter spokesperson. “There are no simple algorithms that find all abusive content on the Internet and we certainly wouldn’t avoid turning them on for political reasons.”
