3 ways government can help clean up Twitter
Twitter has earned a lot of positive press, and praise from both sides of the political aisle, for announcing that it will no longer allow political ads. Unfortunately, the announcement was just a diversionary tactic, not a commitment to fix the many ways in which the site is undermining our political discourse, safety and democratic health.
Consider: Twitter is still rife with bots spreading fake news and hate speech, the problem that took much of the country by surprise in 2016 and raised significant questions about possible manipulation of that year’s presidential election. Likewise, Twitter has taken little meaningful action to implement safety protocols and protect users against violent threats.
We should welcome the fact that Twitter has taken steps to distance itself from Facebook, its rival social media giant, which is perfectly comfortable running political ads even if what the ads say is provably false. But it’s important to understand that paid advertising is just a tiny part of the Twitterverse — we’re talking about less than $3 million in political ad revenue for a company valued at $23 billion. Meanwhile, much larger and more complex problems unique to Twitter are going unaddressed.
Those problems date back much further than 2016. When I worked on Barack Obama’s first presidential campaign, I had the privilege of registering his Twitter account. That was all the way back in March 2007, and we faced many of the same problems that plague high-profile users today: a proliferation of fake Barack Obama accounts that muddied the messages we were putting out, and an avalanche of anonymous hate speech and death threats directed at our campaign.
I spent countless hours pushing Twitter to do something about these glaring obstacles to healthy public discourse. But the company never took meaningful action. It was more worried about hitting growth targets ahead of its next round of venture capital funding. Twitter didn’t want to be seen as a media outlet, just a gateway application.
It’s been much the same story since. Twitter has spent billions of dollars fixing its database systems and expanding its bandwidth. But when it comes to putting an end to bullying bots or committing to the safety of its users, it has been an embarrassment and a failure.
Twitter will tell you it has done plenty to address these problems. It announced the removal of 70 million Russian bots in July 2018 and another nine million three months later. That certainly sounded great. But it overlooked the fact that, at the time, two-thirds of all tweeted links were being disseminated by bots. Twitter’s answer was just a drop in the bucket — a PR pitch, not a solution.
If Twitter had really been interested in a solution, it would have taken more than a piecemeal approach; it would have announced it was building algorithms to detect spam messages, troll farms, bullying swarms and other threats of violence as they arise — not after the fact. We’re still waiting.
There were similar problems with another announcement, earlier this year, that in the wake of mass slaughters in Pittsburgh (at a synagogue) and Christchurch (at two mosques) Twitter would remove tweets that targeted religious groups. The company also announced new policies for world leaders and threatened action against tweets that violate its terms of service. Again, these new policies sounded great. But the company stumbled and backtracked on them a few months later. Most glaringly, it did not take action against President Trump, even when he seemed to be stirring up hatred and inciting violence.
It doesn’t have to be this way. In 2007, when CBS News noticed that articles about Obama posted on its website were attracting racist vitriol and death threats in the comment section, it killed the comment section. There’s no reason on Earth why Facebook and Twitter couldn’t have taken a similar zero-tolerance approach long, long ago.
Instead, Twitter keeps playing whack-a-mole and making press announcements instead of putting in real fixes. Since the company is manifestly incapable of fixing itself, it is time for the government to intervene and impose regulations that clean up the public square without impinging on free speech.
What might that regulation look like? Well, first, Congress could start by amending Section 230 of the 1996 Communications Decency Act, which shields internet platforms from liability for what their users post online. We can’t let social media companies hide behind the current rules and neglect their customers’ safety.
Second, we should treat social media companies as news publishers and hold them to the same standards, with legal exposure for any material they publish that is manifestly malicious or false. CBS or The New York Times would never publish a story accusing Hillary Clinton of running a child sex ring out of a Washington, D.C., pizza parlor, because it is obviously libelous. Why should social media sites be able to disseminate the same story without consequences?
Third, we should get much more aggressive about policing bots. The California legislature took a step in this direction earlier this year with SB 1001, the Bolstering Online Transparency (B.O.T.) Act. But we need to go further — holding not just the bot developers responsible but also the platforms that disseminate their content.
Twitter is now 13 years old, and like all teenagers, it needs to be held accountable. Either the company should demonstrate a real commitment to change, as it has repeatedly promised, or it needs to be grounded.
Scott Goodstein was external online director for Barack Obama’s 2008 campaign, in charge of the campaign’s social media platforms, mobile technology and lifestyle marketing. He was a lead digital strategist on Bernie Sanders’ 2016 campaign and is the founder of CatalystCampaigns.com.