Deepfake videos may have unwitting ally in US media

Deepfake videos are likely to pose a grave threat to the 2020 election, unless the media adopts stringent policies to distinguish real videos from sophisticated forgeries, experts say.

“The press is going to have to resist the urge to get the scoop by talking about something that may not be true before they can validate it,” said Amy Zegart, co-director of the Center for International Security and Cooperation at Stanford University.

“That’s going to require some technical skills and it’s going to require some patience,” she continued. “And that’s a hard thing, given the pressure for the news media to be first to the story.”

Whether the press will be willing or able to do that in a competitive 24/7 news cycle that rewards breaking news is an open question.

But journalism ethics and advocacy groups say the media will have to contain their competitive jockeying as deepfakes grow more prevalent and realistic. Some warn of a repeat of the harmful disinformation campaigns from 2016 if the press and public are not cautious. 

“There’s this old adage of: It’s better to be right than to be first, but that often gets lost in the heat of a competitive moment,” said Kathleen Culver, director for the Center for Journalism Ethics at the University of Wisconsin-Madison. “[But] you know that these technologies exist to make sophisticated deepfakes and you know that people are motivated to get those scripts out there to pollute our information environments, so there’s no better time than now to slow down.” 

Detecting forgeries is not a new challenge for newsrooms. Throughout the internet age, journalists have scrutinized images and videos to determine whether they are authentic. But deepfakes will pose a harder problem, because artificial intelligence can produce videos that are indistinguishable from genuine footage to the human eye, making forgeries far more difficult to detect.

“All news outlets need to be aware of just exactly how convincing deepfakes are. We’ve been wrestling with forged photos for quite some time, images that have been manipulated in one way or another,” Culver said. “I don’t think a lot of people were prepared for how quickly that was going to change over to video.”

It is unclear whether newsrooms are prepared to confront the technological advances in deceptive content ahead of 2020. CNN, Fox News and MSNBC did not respond to requests for comment about their processes for determining whether a video is a deepfake.

Some experts doubt the media is prepared, pointing to the rapid advancements in deepfake technology, the uncharted legal terrain for determining whether a video falls under protected speech and satire or is disinformation, and technology companies’ uncertainty over how to police the issue on their platforms.

“I think they are not at all prepared to deal with this for a whole variety of reasons,” said Zegart, who is also a senior fellow at the Hoover Institution. “These policy teams are having to grapple with where the boundaries are and what the policy should be in real time, so they’re making monumental decisions that affect our political life.”

Industry experts have proposed a range of possible solutions, including newsrooms improving their in-house technical skills to determine whether a video is real or fake or having companies subscribe to a service that specializes in verifying videos.

“I could see either a for-profit or not-for-profit group that … might sell their services, sort of like an Associated Press news service or wire service to multiple newsrooms, and there actually are already some things like that,” said J. Alex Tarquinio, national president of the Society of Professional Journalists.

Tarquinio emphasized that she “doesn’t want to discount the newsroom solution” of having in-house experts, as many outlets already do, but noted newsrooms will need to adapt their skills to match the rapidly changing technology. 

And while experts agree newsrooms both large and small will be increasingly confronted with the problem of deepfakes, they also say the onus doesn’t fall on journalists alone to combat disinformation.

“There’s always been deception in politics, not least of all by some of the candidates themselves. But using deceptive videos on the internet to cast a politician in an unflattering light, that is something new. And it’s something that the news media, the social media platforms, and viewers need to guard against,” Tarquinio told The Hill. “They need to use common sense.”

In late May, President Trump shared a heavily edited video of House Speaker Nancy Pelosi (Calif.) that appeared to show the Democratic leader slurring her words, prompting backlash from critics who said the president had spread a clearly misleading video to millions of followers on Twitter.

The warnings about spreading disinformation also come as experts remain skeptical on whether the news media and the American public have learned from the 2016 presidential election, in which Russia used the U.S. media as an unwitting ally in its efforts to interfere and sow discord.

“I’m concerned that the press has not done a reflective lessons-learned exercise about its role in amplifying all sorts of false messages, whether it’s deepfakes or just plain falsehoods,” said Zegart. “That amplification role can be very, very powerful.”

According to former special counsel Robert Mueller’s report, some outlets reported on false claims pushed by Russian intelligence officers operating under the DCLeaks moniker, which they used to disseminate materials stolen in the Kremlin’s hack of the Clinton campaign.

In particular, some journalists reported on false information that Russian officers intentionally peddled to try to divert scrutiny from the Democratic National Committee hack.

At the time, some newsrooms grappled with whether to cover the hacked materials once they became public, but as then-candidate Trump touted their contents, media outlets began to cover them.

Some news outlets also included tweets from Kremlin-linked Twitter accounts in their stories as indicative of public sentiment on politically divisive topics, which they “attributed … to the reactions of real U.S. persons,” according to the report.

“There was evidence presented in the Mueller report about how often news organizations had quoted fake Russian Twitter accounts in news stories as if they were actual sources,” Culver told The Hill. “They got faked, they got faked hard. And so we need to now be looking at the kinds of processes and what kinds of partnerships they could develop so that they’re not going to get faked again.”

Experts say a single viral deepfake video could have much more impact in 2020.

“Deepfakes, and misinformation in general, will become increasingly more difficult to control because of the role of social media in seeding fake news, followed by fast-moving media outlets that then promote content without proper vetting, and finally powerful voices amplifying this content,” said Hany Farid, a digital forensics expert at the University of California, Berkeley. “Enabled by technology, driven by profits, and motivated by personal ideology, fake news can go from 0 to 100 in a matter of hours.”

The edited Pelosi video served as a warning of what could come.

While the video was not a deepfake, it was edited in a way that aimed to hurt Pelosi’s reputation by making her appear drunk or sluggish. And while the news media correctly covered the video as being edited, Facebook refused to take the post down, arguing that being “false” isn’t enough for removal.

The issue of deepfakes has gained increased attention in the press and on Capitol Hill, particularly after top U.S. intelligence officials including outgoing Director of National Intelligence Dan Coats testified before Congress earlier this year that hostile foreign actors are expected to weaponize deepfakes to sow discord.

The House Intelligence Committee held the first hearing focusing specifically on the threat of deepfakes and other developing technologies in June.

Since then, several bills have surfaced to combat the problem, including bipartisan legislation that would require the Department of Homeland Security to conduct an annual study of deepfakes and propose changes or new regulations around the AI technologies used to create the videos.

Technology giants are also examining how to address the problem. Earlier this month, Facebook, Google and Twitter indicated they are considering writing policies specifically about deepfake videos after the Pelosi video surfaced.

And the press, too, has stepped up its coverage of deepfakes, sounding the alarm about their potential threat. An international group of press freedom leaders pledged in April to commit themselves to serve as watchdogs and fight against disinformation in the 2020 election.

But it is too early to tell whether efforts to combat misleading deepfakes will be able to outpace the rapidly advancing technology, particularly if malicious actors seek to weaponize them.

“Right now, the Russians or any foreign government can reach deep inside the populace of another country and wage massive deception with video that convinces millions of people that a leader is saying one thing as opposed to another,” Zegart warned.

She expects such technology to proliferate ahead of 2020. 

“It is sort of a mass propaganda weapon in a way that technology has never been used.”
