Who's watching Big Tech? The dangers of using algorithms to fight extremism

Who's watching Big Tech? The dangers of using algorithms to fight extremism
© Getty Images

Fifteen years ago, in the wake of the subway bombings in London, the United Kingdom was awakened to the danger of domestic terrorism and scrambled to address it. By August 2005, within a month of the attacks, a Preventing Extremism Together Task Force was convened, and by October 2006 the Preventing Violent Extremism Pathfinder Fund was created, providing the equivalent of some $8.5 million to 70 municipalities to enhance partnerships between law enforcement and faith communities. Funding escalated over the following years, so that by 2009 nearly $200 million had been allocated to the so-called “Prevent strategy.”

There was only one problem with the approach: In its early iteration it didn’t work, and it may have made matters worse.

That history counsels caution today when, in the wake of the Jan. 6 attack on the Capitol, our government has awakened belatedly to the danger of domestic extremism and has pledged tens of millions of dollars to arrest its spread. Although varying approaches are being proffered to the Biden administration, they are unified in identifying, as a principal source of the problem, the role of social media platforms’ algorithms in organizing and amplifying extremist messaging.

It is the latest unintended but inevitable consequence of the application of commercial algorithms — designed to reinforce consumer preferences for marketing purposes — to political speech.  The same algorithm that tells you that if you like toothpaste you might also like mouthwash will, in a political context, tell you that if you’re concerned with Second Amendment rights you might also be worried about race relations … and here’s a group of like-minded folks you just might want to join. And, by the way, this group loves Hawaiian shirts and here is a great selection for you to buy. 

Sound crazy? It’s exactly what happened as Facebook’s algorithms helped to organize  apocalyptic militia cults such as the Boogaloo, whose adherents would end up assassinating law enforcement officials while burning down police precincts. Twitter’s algorithms, meanwhile, were indispensable in mainstreaming QAnon, whose adherents were among those who stormed the Capitol.   

Given their unwitting complicity in the growth of the problem, what role should social media platforms play in shaping its solution?   

We have been down this road time and again as the corrosive effects of social media have become apparent. Summoned like schoolchildren to the principal’s office to testify before Congress, the CEOs invariably argue that they are doing better and really trying and that, anyway, it’s complicated. The platforms hire marketing firms or consultants, or ally with opportunistic NGOs, to announce with a splashy campaign that they are doing better. Meanwhile, the process of reform remains opaque.

Nowhere is this process more apparent than in the events surrounding Jan. 6. In effect, in the run-up to the Capitol riot, the platforms monetized and incubated threats against democracy. In its aftermath, the platforms determined which threats to deplatform and silence, and when. Notably absent from those decisions were the representative institutions of our democracy, which continue to face direct attack from the terror nurtured on social media.

Now, with the federal government poised to spend tens of millions of dollars to counter violent extremism, support seems to be coalescing around strategies that purport to exploit the same data compiled by surveillance capitalism to “redirect” people known to harbor extremist views toward cul-de-sacs of benign content. Vidhya Ramalingam, the CEO of a U.K.-based company, Moonshot CVE, which is funded by Facebook and Google, among others, describes the approach in an interview with The Hill: “Technology can actually have the power to scale up really deeply personal interactions the same way that every single advertisement we see is personalized toward me, my gender, my behavior online, my identity, where I live. It really is literally the same thing that Coca-Cola is doing to sell us more Coke.”

The appeal of such an approach to social media platforms is obvious; far from disrupting a business model that has proven both lucrative and deadly, it doubles down on it. It enables the platforms to maintain their posture as concerned corporate citizens while their algorithms continue to polarize the public as an externality of their business practice. But that’s precisely the point: The polarization driven by the engine of social media’s application of commercial algorithms to political speech leads ineluctably to domestic extremism. We should be skeptical of any approach that fails to address the underlying structural issue but in fact endorses it; its solutions may be, at best, a Band-Aid where a tourniquet is needed.    

Moreover, it may be susceptible to the same unintended consequences that have beset the platforms themselves. Researchers at the Network Contagion Research Institute — with which the center I direct at Rutgers has partnered over the past year in issuing reports on the Boogaloo, QAnon and other extremist movements — were surprised to discover that among the people to whom right-wing extremists were being “redirected” was a left-wing extremist whose views are openly anti-Semitic, who advocates violence against law enforcement, and who was convicted federally, along with several Russian nationals, of transporting migrants illegally from central and eastern Europe to Florida. This was brought to Moonshot’s attention, and people are no longer being redirected to this person.

But given previous failures of countering violent extremism and the history of the unintended consequences of social media, the administration and Congress should insist upon answers to the following questions before endorsing any strategy: Who is watching this process? Who decides who is an extremist? What biases might they have? Who determines what “re-education material” is appropriate, and how can we be certain it works? With specific respect to the left-wing extremist in question, how was he selected as a destination for redirected extremists? Was this failure an unintended consequence of the redirection algorithms? If so, what adjustments have been made to them? What steps have been taken to recontact and reassess the people who were referred to him?

Interviewed recently in Fast Company, Ms. Ramalingam stated: “As long as Nike and Coca-Cola are able to use personal data to understand how best to sell us Coke and sneakers, I’m quite comfortable using personal data to make sure that I can try and convince people not to do violent things.” The demonstrated polarizing effect of applying such commercial algorithms to political speech, however, should give us pause. As Ms. Ramalingam added: “Should that system of influence exist at all? I’m totally up for that debate.” 

Here’s hoping that Congress and the administration are up for it as well. 

John Farmer Jr. is director of the Eagleton Institute of Politics at Rutgers University. He is a former assistant U.S. attorney, counsel to the governor of New Jersey, New Jersey attorney general, senior counsel to the 9/11 Commission, dean of Rutgers Law School, and executive vice president and general counsel of Rutgers University.