Why we should not want Facebook, or any online platform, to ‘save’ us from Alex Jones
Recently, YouTube, Apple, Facebook and Spotify removed content posted by conspiracy theorist Alex Jones and the far-right site Infowars, citing hate speech and the platforms’ “community standards” as reasoning for their decision. Many applaud this action; what could be wrong, one might ask, with censoring content or people online who violate “community standards” with hate-filled diatribe?    

Nobody is against setting community standards that are designed to protect us. Sure, there might be a few casualties along the way — just ask the conservative duo Diamond and Silk, whose content Facebook flagged as dangerous to those standards. But won’t we be much better off with Alex Jones removed from the internet?

Paternalism — the idea that we are protecting people from things we consider harmful — always seems like a good idea, but it often goes terribly wrong. Even if it seems we should be in favor of banning Alex Jones from our social media platforms, it is a bad idea, and I say this not because I support him or the ideas he spews.

Let me explain. Mark Zuckerberg can do whatever he wants with Facebook and we are free to decide whether to use the platform or not. He has no obligation to protect First Amendment rights or guarantee them to us. Nonetheless, some of us really applaud the efforts to police community standards on the social media platform. So what’s wrong with a little paternalism?

The problem is that paternalism says a lot about the people who are being protected. Protection is for the vulnerable. Protection is for those who do not sufficiently appreciate the harm they face, or fully know the interest they should pursue. Historically, encroachment on individual liberty often was justified in terms of the welfare, needs, interests or good of the persons subject to it, but it always has been the individual limited by paternalism who is being judged. As the late Harvard professor Judith Shklar points out in “The Faces of Injustice” (page 119):

“Paternalism is usually faulted for limiting our freedom by forcing us to act for our own good.  It is also, and possibly more significantly, unjust and bound to arouse a sense of injustice. Paternalistic laws may have as much consent as any other, but what makes their implementation objectionable is the refusal to explain to their purported beneficiaries why they must alter their conduct or comply with protective regulations. People are assumed to be incompetent without any proof.”

The real problem is that paternalism always is at odds with individual liberty because it substitutes the judgment of the individual with that of someone else. Consider that John Stuart Mill warned in “On Liberty” that “with respect to his own feelings and circumstances, the most ordinary man or woman has means of knowledge immeasurably surpassing those that can be possessed by anyone else.”

But in the internet age, the paternalism we are seeing is presented as benign, because it is information we’re controlling and not people. Simply remove the misinformation and police Facebook according to community standards and we’re all good, right? Not exactly, because there are judgments being made about us.  

If we ask Mark Zuckerberg to protect people from information, we are essentially removing the ability of individuals to judge for themselves, and this is more of a statement about us than it is about the content. Of course, those demanding that content be limited on social media actually see this kind of paternalism as a desirable end because some people might fall for “fake news,” Russian propaganda, or even believe the hate spewed by Alex Jones.

But what kinds of judgments are we making about those who we are trying to protect? In his book, “Antisocial Media,” for example, Siva Vaidhyanathan argues that it was African-Americans and young women who were lured into not voting for Hillary Clinton because of social media advertising supported by Facebook. Young women “fell” for ads that depicted Bill Clinton as a womanizer, while African-Americans were fed a diet of ads that depicted some of the insulting rhetoric Hillary Clinton had used to describe criminals in years past. Vaidhyanathan would like Facebook to make changes to the kind of content that is available. Implicitly, however, he is judging the people he believes were not able to make a sound judgment regarding content on Facebook.

We should not fool ourselves into thinking that paternalistic policing of content on social media is only about the content; it is also about us, or some of us, and it speaks to the unfair assumptions of incompetence that Mill warned us about when it comes to individual liberty.

Banning Alex Jones may make us feel as though we are protecting the community, but we are actually saying that some individuals should not exercise their judgment — and this is a dangerous step when it comes to a community standard that has anything to do with liberty.

Lisa S. Nelson is an associate professor at the University of Pittsburgh Graduate School of Public and International Affairs and an affiliate scholar of Pitt Cyber. The author of the forthcoming book, “Social Media and Morality: Losing our Self Control” (Cambridge University Press), she was a co-principal investigator on a National Science Foundation grant to explore the societal perceptions of biometric technology. From 2011 to 2013 she served on the Department of Homeland Security’s Data Privacy and Integrity Advisory Committee.