Social media bots pose threat ahead of 2020

The spread of disinformation by bots on social media platforms remains a persistent problem ahead of the 2020 election, with experts warning the fake accounts could exploit divisions in American society.

The issue is back in the spotlight after two recent hashtags gained attention on Twitter. One, #MoscowMitch, targeted Senate Majority Leader Mitch McConnell (R-Ky.) for refusing to allow a vote on election security bills. And another, #KamalaHarrisDestroyed, pushed the narrative that Sen. Kamala Harris (D-Calif.) had performed poorly during last week’s Democratic debate.

The threat from bots, which are fake or automated social media accounts, is nothing new, but the hashtags have revived a debate over how prevalent bots actually are. Some warn disinformation from fake accounts is intensifying, while others say those hashtags were promoted by regular Twitter users. The debate highlights how the issue remains a challenge as the next election nears.

At least one bot-tracking website found a jump in bot activity tied to one of the hashtags. Bot Sentinel, a free platform designed to track bot activity and untrustworthy accounts on Twitter, listed #KamalaHarrisDestroyed as one of the top 10 hashtags tweeted by bot accounts on the two days after the latest Democratic debate.

Other trending topics tweeted by bot accounts over the past week include #MAGA, referring to President Trump’s “Make America Great Again” slogan, along with #Trump2020 and #QAnon, a reference to a right-wing conspiracy group. 

The Wall Street Journal concluded that “bot-like activity pushed divisive content on race” during the Democratic debates, particularly targeting Harris.

Twitter, though, has pushed back against the idea that the anti-McConnell and anti-Harris hashtags were the result of bots. A spokesperson told The Hill the company's "initial investigations have not found evidence of bot activity amplifying either of the cited hashtags."

“These were driven by organic, authentic conversation,” Twitter said. 

Worries about bots exploiting contentious political issues, including on race, are not new. Former special counsel Robert Mueller’s report found that one so-called troll farm, the Russian Internet Research Agency, operated a network of automated Twitter accounts for a disinformation campaign “designed to provoke and amplify political and social discord in the United States.”

Experts say those actions are continuing.

Paul Barrett, the deputy director of the Center for Business and Human Rights at New York University’s Stern School of Business and the author of a recently finished report on disinformation, told The Hill that Russians and other state actors are continuing to “stir dissent and division” on social media. 

“You’ve seen that already for example with Sen. Harris … and the questioning of her racial identity,” Barrett said, referencing tweets in June that falsely claimed the senator, who is of Indian and Jamaican heritage, is not black.

“By picking at that particular issue, people who are trying to sow discord in general are touching on a very sensitive nerve and encouraging acrimony in addition to attacking Sen. Harris in particular.”

The Internet Research Agency was able to reach millions of people ahead of the 2016 vote by creating thousands of fake accounts on Facebook, Instagram and Twitter that spread false news.

In testimony before the House in 2018, Facebook CEO Mark Zuckerberg said his company estimated that around 126 million people may have seen content posted by an Internet Research Agency-backed account during the two years leading up to the 2016 election. Zuckerberg added that it was estimated that around 20 million Instagram users also viewed content from the troll farm’s accounts during that same time period.

Twitter and Facebook, which also owns Instagram, have taken steps to address the problems caused by fake accounts and disinformation. 

“Using technology and human review in concert, we proactively monitor Twitter to identify attempts at platform manipulation and mitigate them,” a Twitter spokesperson said.

Twitter has also tried to be more transparent about the problem, releasing an inventory of all the accounts and related content associated with potential disinformation campaigns the company had found since 2016. 

That included almost 4,000 Internet Research Agency-associated accounts, 770 accounts that potentially originated in Iran, and what Twitter described as “more than 10 million Tweets and more than 2 million images, GIFs, videos, and Periscope broadcasts, including the earliest on-Twitter activity from accounts connected with these campaigns, dating back to 2009.”

Facebook did not respond to a request for comment on this story, but in a blog post published in November 2018, the company touted a partnership with the Atlantic Council’s Digital Forensic Research Lab, which it said provided access to “real-time updates on emerging threats and disinformation campaigns around the world.”

But critics say that companies need to do more.

Barrett acknowledged the difficult situation for social media companies, which are reluctant to take actions seen as impinging on free speech. But he said those companies should do more to educate users and the public about bots and fake news on their sites.

“I think it would be important for the major social media platforms to more prominently and continually warn their users that some of what they are likely to see in their feeds is phony,” Barrett said. “I think there should be regular digital literacy education going on, on the sites themselves in a way that will warn people.”

Niam Yaraghi, a nonresident fellow at the Brookings Institution’s Center for Technology Innovation, has also called for providing users with more information about what they see on social media.

“Unless individuals are presented with counter arguments, falsehoods and hateful ideas will spread easily, as they have in the past when social media did not exist,” Yaraghi wrote in an article published by Brookings in April.

The pressure to do more is likely to intensify ahead of 2020 as more lawmakers and candidates find themselves on the receiving end of disinformation.

In the upper chamber, Sen. Dianne Feinstein (D-Calif.) has introduced legislation that would prevent campaigns from using bots.

Sen. Marco Rubio (R-Fla.) was the victim of what he described as a “#Putin disinformation campaign” on Twitter in June.

Rubio tweeted out a screenshot of a fake account that used his name and was tweeting that the British planned to use “deepfakes,” or doctored videos, to spy on Trump.

“The image below is fake,” Rubio tweeted. “But #Russia created this realistic looking image & then had it posted online in blogs & fringe news sites.”

And Rubio delivered a warning.

“This is minor compared to what lies ahead for us,” he said.
