Internet companies right to close neo-Nazi sites, but terror still too easy to find


In the aftermath of Charlottesville, several technology companies, including Cloudflare and GoDaddy, removed the white supremacist websites The Daily Stormer and Vanguard America from their servers.

This action ignited debate over how companies should tackle bigotry, hate and violence on the Internet. While these companies should be applauded for their intent, significant questions remain over the seemingly arbitrary and ad-hoc process by which they made their decisions.

The Daily Stormer and Vanguard America’s roles in the Charlottesville riots, and their endorsement of racist and anti-Semitic views, cannot be overstated. The Daily Stormer live-blogged the event and posted photos and content that may have incited violence, including quotes such as “We have an army” and “This is the beginning of a war.” Vanguard America was one of the groups involved in planning the weekend’s events. Even before the clash, both were well known for their anti-Semitic, racist and violent views.

Action against these sites should have been taken as part of regular company protocol, as opposed to being undertaken only after the tragic and troubling events at Charlottesville. 

A variety of justifications were given for taking The Daily Stormer and Vanguard America down. GoDaddy’s CEO stated that the company has a “responsibility” to take down speech that “(incites) violence.” Another company did not cite a reason, but their own website states they “don’t allow websites of known terrorist groups or genuine calls for violence.” Cloudflare’s CEO said The Daily Stormer was “vile” by “any reasonable standard.”

Considering that these same companies continue to provide services to websites supporting or acting on behalf of terrorist groups, like the Taliban and others, it appears they apply their own standards in an arbitrary manner.

Cloudflare hosts a Taliban-affiliated website. GoDaddy hosts the Al-Omgy Brothers Money Exchange, which the U.S. sanctioned for providing financial services to al-Qaeda. Another company provides website services to platforms affiliated with ISIS. These terrorist organizations, by any standard, incite violence and worse, yet they are inexplicably left online.

Tech companies do not have a systematic, transparent and well-reasoned approach to removing terror and radical content online. Absent such a system, terrorist propaganda videos and social media accounts persist. Vicious acts of terror continue to be promoted to innocent and impressionable people without rebuke. As it stands, terror organizations have free rein to disseminate their methods of hatred and violence, and these companies are providing them the platforms to do so.

Major terrorist attacks, both domestically in Orlando and Fort Hood and internationally in Nice and Paris, were all inspired by radical content found on the Internet. Orlando shooter Omar Mateen searched for content from ISIS leader Abu Bakr al-Baghdadi on Facebook. Fort Hood Army psychiatrist Nidal Malik Hasan “self-radicalized” on the Internet. Nice attacker Lahouaiej Bouhlel underwent a “rapid transformation” after he became “suddenly enthralled with extremist messages and ultra-violent images.” Paris attacker Farid Ikken “radicalized himself through the Internet” and was found with a computer containing “extremist propaganda.” The list goes on and on.

In the aftermath of such attacks, tech companies have a moral obligation to do what they claim: remove terror content found online. If we want the Internet to continue as a dynamic platform for thoughtful expression, the current arbitrary and ad-hoc decision-making process cannot stand, and services enabling murderers must end.

To help prevent future terror attacks, tech companies must institute a clear and consistently applied framework that brings transparency to content regulation. This framework needs to go beyond the boilerplate lines embedded in Terms of Service. It needs to create a systematic, even-handed and transparent process for reviewing extremist content.

We absolutely can and should take down material that incites violence and fuels hate. The need for systems that foster due process while also protecting against abuse, whether by tech giants or extremist groups, has never been greater.

Hany Farid is a senior advisor to the Counter Extremism Project (CEP) and Albert Bradley 1915 Third Century Professor and Chair of Computer Science at Dartmouth College. Farid, a leading authority on digital forensics and hashing technology, along with CEP, created eGLYPH, a new technology that detects known extremist images, video and audio files for immediate and accurate removal.

The views expressed by contributors are their own and not the views of The Hill.
