As misinformation surges, coronavirus poses AI challenge


Major social media platforms are relying on artificial intelligence more than ever to moderate content as the coronavirus pandemic keeps human reviewers at home, raising new challenges for efforts to prevent misinformation online.

Companies have used AI to police their networks and aid in content moderation before, but experts say that the increased reliance on automated tools during the current public health crisis poses a new test. And they warn that the stakes are as high as ever at a time when spreading authoritative information — and curbing misinformation — can save lives.

“There’s essential information about the virus that needs to get disseminated,” Eric Goldman, a law professor at Santa Clara University, told The Hill. 

“There’s also bogus information that’s causing extraordinary harm, and the machines are not going to be able to easily sort between those two,” he said.

The pandemic has forced companies to quickly change how they oversee content on their platforms. In the past month, Facebook, YouTube and Twitter have all said that they will be decreasing their use of human moderators to review content.

Mark Zuckerberg told reporters that the nearly 15,000 contract workers employed by Facebook for content moderation would be working from home, where they would not have access to the mental health services and support necessary to review sensitive content safely.

Some of the work on that kind of content, which involves child exploitation, terrorism and self-harm, will be transferred to full-time employees, the CEO said. But the workforce issues will mean more reliance on machine moderation.

That poses the risk of errors, such as benign content being accidentally flagged or troublesome content going undetected for too long.

YouTube also announced in a blog post that it would lean more heavily on automated systems to keep employees out of offices.

The video giant said that users may see an increase in video removals, even for videos that do not violate its policies. While creators can appeal removals, YouTube warned that the appeals process will also be slower.

Twitter also said it would be taking similar steps, adding that because automated systems can make more mistakes, it will not permanently suspend any accounts based solely on the software's determinations.

It will also be prioritizing human moderators for the “most impactful content.”

The shift to automated content moderation is happening at the same time that misinformation about the novel coronavirus surges on social media platforms.

Fake treatments, faulty information about the spread of the pandemic and attempts to profiteer off public panic are abundant on the platforms, although the companies have taken steps to combat them.

The three major platforms have made efforts to promote content from authoritative sources, bury conspiracy theories and limit coronavirus-related ads.

But survey data suggests those policies are still coming up short.

American adults who use social media as their primary platform to get news are more likely to say they’ve seen misinformation about the pandemic and are less likely to answer a question about when a vaccine may become available than those who primarily use other media, according to a survey conducted by the Pew Research Center this month.

“It’s very hard to use automation when you’re in an information environment where the information is always incomplete and changing,” Robyn Caplan, a researcher at the nonprofit Data & Society, told The Hill, explaining how the shift to automated content moderation may make the problem of misinformation worse.

Caplan said the coronavirus posed a unique challenge for companies that must now rely on automated moderation with such a fluid and complex issue. “Algorithms require clear answers ... we’re starting to have more good information come, but still time will tell whether those predictions are going to bear out correctly.”

Goldman, from Santa Clara University, agreed. He said that balancing between ensuring that information got out quickly and preventing false claims would be even harder with human moderators reducing their role.

“But we need to have those interventions, otherwise people are going to die,” he said.

Another unique problem for moderators is the proliferation of hate speech related to the spread of the virus, which originated in Hubei province, China.

According to a report released last week by Israeli artificial intelligence company L1ght, anti-Chinese hate speech on Twitter has increased 900 percent during the pandemic.

“According to our data, racist abuse is being targeted most explicitly against Asians, including Asian Americans,” the report reads. “Toxic tweets are using explicit language to accuse Asians of carrying the coronavirus and blaming people of Asian origin as a collective for spreading the virus.”

The report also found a general spike in hate speech as people spend more time online while at home. Experts said that even before the pandemic, AI technology struggled to correctly identify hate speech online.

Caplan and Goldman both said machine systems often fail to grasp context, which is essential for identifying racist, sexist and homophobic attacks, especially in regions and languages where automated programs may not have as much data to rely on.

Caplan explained that automated systems are normally best when confronted with clear yes or no questions, while “gray-area issues” such as hate speech often require human intervention.

According to Facebook’s most recent transparency report, artificial intelligence often flags less than 20 percent of hate speech on the platform, relying on users to report the rest.

Video also presents challenges for all three platforms when relying more on automated moderation, largely because “speech to text is not a solved problem,” according to Arun Gandhi of Nanonets, a company that sells AI moderating tools to companies. Because there are still issues in correctly transcribing voices, videos with problematic content could be missed by AI, while innocuous content could be flagged incorrectly.

Caplan noted that YouTube uses automated moderation for determining if videos should be allowed to monetize, a process that often leaves content that deals with political subjects unable to run ads. 

The Hill reached out to Facebook, Twitter and YouTube for comment. The companies directed The Hill to their previous announcements about enlisting AI.

Gandhi said that, overall, the biggest social media platforms will be better positioned to function without human content moderators because their systems are trained on the most data and the companies have the budgets to run large-scale systems.

But with the growing reliance on AI, even a few mistakes could have significant ripple effects, Goldman cautioned.

“At the scale of Google, Facebook and Twitter, even a 1 percent error rate creates a lot of friction,” he said.

“That’s going to be a high volume of mistakes and it’s going to ... disrupt legitimate conversation.”