Social media companies consider deepfake policy changes after Pelosi video


Facebook, Google and Twitter all indicated this month that they are considering writing policies specifically about deepfake videos after lawmakers voiced concerns over a doctored video of Speaker Nancy Pelosi (D-Calif.) that was left online.

The trio of top social media companies told Rep. Adam Schiff (D-Calif.), the chairman of the House Intelligence Committee, that they are looking into whether it's necessary to tweak their policies to account for the rise of deepfakes — videos or images manipulated by AI to make it appear as though people are doing or saying certain things.


The issue vaulted into the spotlight earlier this year when Facebook declined to take down a user-posted video of Pelosi that was slowed down and edited to make it appear as though she was slurring her words. The hundreds of comments on the video indicated viewers thought the video showed Pelosi in real time. Shortly after, President Trump shared a video edited to make it seem like Pelosi was stumbling over her words.

The videos of Pelosi were not deepfakes, as they did not edit the content of her remarks, but they reignited a larger conversation about how the social media companies deal with manipulated footage, something that is expected to play a growing role in the upcoming presidential election.

Schiff pressed the companies over the issue in letters on June 15. The responses from the companies are dated July 31.

Twitter's director of public policy, Carlos Monje, wrote the company is "carefully investigating how to handle manipulated media and agree that manipulated media can pose serious threats in certain circumstances."

"The solutions we develop will need to protect the rights of people to engage in parody, satire and political commentary," he added.

He said Twitter has identified two edited videos of Pelosi circulating on its platform.

"The first video, which appears to have been slowed down, received a total of nine Retweets, 17 Likes, and 797 video views as of July 31, 2019," Monje wrote, likely referencing the video that drew thousands of views on Facebook.

"A second video, shared by the account of President Trump, appears to be an edited collection of clips of Speaker Pelosi," he added. "This Tweet received approximately 31,107 Retweets, nearly 96,000 Likes, and approximately 6.37 million video views as of July 31, 2019."

The letters signal the platforms are still weighing how to deal with the issue of videos that have been selectively edited and promoted for political purposes.

"We are always looking into new potential threats related to personal or societal harm arising from new technologies, including this one, and may further update our policies in the future if we identify gaps that are not currently covered by our existing rules or systems," Karan Bhatia, Google's vice president of government affairs and public policy, wrote. He noted that Google-owned YouTube has been working to combat manipulated media since its "early days," pointing to the platform's "deceptive practices" policies as an effort to address the issue.

And Facebook noted that its CEO Mark Zuckerberg has publicly said the company may work up a new policy for deepfakes specifically.

"We have recently engaged with more than 50 global experts with technical, policy, media, legal and academic backgrounds to inform our policy development process," Kevin Martin, Facebook's vice president of U.S. public policy, wrote.

Congress has held multiple hearings into the issue of deepfakes and misinformation online this year, and last week, a Senate panel approved bipartisan legislation that would direct the Department of Homeland Security to conduct an annual study of deepfakes and similar content.

Schiff said the letters from Facebook, Google and Twitter "make clear that the platforms have begun thinking seriously about the challenges posed by machine-manipulated media, or deepfakes, but that there is much more work to be done if they are to be prepared for the disruptive effect of this technology in the next election."

"It is incumbent on companies in this space to design policies and technologies that prevent their platforms from being weaponized by malicious actors, whether foreign or domestic, and it's clear they are far from ready to accomplish that," he said.