The Christchurch terrorist exploited the extraordinary power of social media to broadcast his message of violence and hate across the world. As he brutally murdered 51 people and wounded 49 more in two mosques in New Zealand, his livestream on Facebook ran uninterrupted—and before it was ultimately pulled down from Facebook, it was quickly posted and shared millions of times across Twitter, Facebook, YouTube and other platforms.
This is the new age of terrorism, defined by the rise of extremist communities online and terrorists who carry out real-life violence inspired by virtual content. And every time a successful terrorist attack is broadcast online, it risks inspiring copycats attempting to unleash similar terror in their own communities.
We as a nation need to adapt to this new reality. That is why it is incumbent upon social media and tech companies to step up to the plate and take these threats seriously. Gone are the days of unicorns and The Next Big Thing—the tech companies of today have become critical institutions in our society, in the same way that banks, telecommunications companies and utilities matured into critical institutions in the last century.
Earlier this year, as chairman of the House Homeland Security Committee's Subcommittee on Intelligence and Counterterrorism, I questioned Facebook, Google, Twitter, and Microsoft (which owns LinkedIn) about the resources they're devoting to counterterrorism—because budgets reflect values, and if they're spending more on lobbyists and catering than on combating terrorism, there should be hell to pay.
Unfortunately, the companies did not fully or promptly share this critical information. The little they did provide months later proved the companies are not doing enough. When Facebook, Google, and Twitter testified before Congress last month, I told them that their anemic response to the issue of online terror content is insulting. They oversell the capabilities of artificial intelligence, they undersell the nature of the challenge, and they obscure the amount of resources they’re devoting to fighting online terror content.
They also point to the Global Internet Forum to Counter Terrorism (GIFCT), a group they created in 2017 to pool certain basic counterterrorism data. The viral spread of the Christchurch video exposed the real limitations of GIFCT's skeletal consortium—a shoestring operation with no permanent staff, no shared location, and minimal technological and policy collaboration between the companies.
This initial effort, unsurprisingly, has not lived up to its promise. It's time to start thinking about a new model of more robust self-regulation and industry-wide cooperation to deal with the metastasizing threat of terrorist content online.
This week, the big tech companies will be meeting in Silicon Valley to discuss the future of GIFCT. Ahead of that much-needed conversation, and after what I have seen in my oversight role in Congress this year, I am urging that GIFCT be transformed.
First, GIFCT must have permanent staff who serve as dedicated points of contact for the companies and law enforcement. Further, moving its operations into a shared physical location, as is done in the military and intelligence community, could help companies stay ahead of online terrorist activity. These are the wealthiest companies in the world. There's no reason they can't make serious investments in hiring more counterterrorism staff and building out an infrastructure for timely information sharing among the companies.
Second, GIFCT must develop industry standards on terrorist content removal. How long is too long for terrorist content to remain live online? What error rates are acceptable from machine learning tools targeted at taking down terrorist content? How quickly are users' reports of terrorist content handled? Having clear standards will not only help social media companies advance the ball together, but also enable lawmakers and the public to understand how well the companies are handling terrorist content.
Third, GIFCT must explore opportunities for cooperation beyond simply maintaining a collective database of digital fingerprints that help the companies identify terrorist images and videos after they've already gone live. In order to build a truly robust operation, the companies must consider sharing terrorism-related artificial intelligence training data and other technologies among themselves and with smaller, less-resourced social media companies, which must have a seat at the table as well.
Finally, GIFCT must foster a culture of transparency. Academics and other experts need appropriate access to removed terrorist content to study trends and better understand terrorist behavior. Outside experts must also have access to this information to audit the companies’ claims of success against their own standards. And the public deserves transparency into the process of content moderation—particularly when issues of public safety and national security are so deeply implicated.
The leaders of GIFCT need to think big to face the uniquely 21st-century challenge of preventing terrorist content from being spread online. The largest technology companies have the resources and brainpower to implement these very necessary changes. It is time for the social media companies to transform GIFCT into an organization positioned to address this pressing national security challenge. Lives are on the line.
Congressman Max Rose, who represents Staten Island and South Brooklyn, chairs the House Homeland Security Subcommittee on Intelligence and Counterterrorism.