Social media companies consider deepfake policy changes after Pelosi video


Facebook, Google and Twitter all indicated this month that they are considering writing policies specifically about deepfake videos after lawmakers voiced concerns over a doctored video of Speaker Nancy Pelosi (D-Calif.) that was left online.

The trio of top social media companies told Rep. Adam Schiff (D-Calif.), the chairman of the House Intelligence Committee, that they are looking into whether it's necessary to tweak their policies to account for the rise of deepfakes – videos or images manipulated by AI to make it appear as though people are doing or saying certain things.


The issue burst into the spotlight earlier this year when Facebook declined to take down a user-posted video of Pelosi that was slowed down and edited to make it appear as though she was slurring her words. The hundreds of comments on the video indicated viewers thought the video showed Pelosi in real time. Shortly after, President Trump shared a video edited to make it seem like Pelosi was stumbling over her words.

The videos of Pelosi were not deepfakes, as they did not alter the content of her remarks, but they reignited a larger conversation about how the social media companies deal with manipulated footage, an issue expected to play a growing role in the upcoming presidential election.

Schiff pressed the companies over the issue in letters on June 15. The responses from the companies are dated July 31.

Twitter's director of public policy, Carlos Monje, wrote the company is "carefully investigating how to handle manipulated media and agree that manipulated media can pose serious threats in certain circumstances."

"The solutions we develop will need to protect the rights of people to engage in parody, satire and political commentary," he added.

He said Twitter has identified two edited videos of Pelosi circulating on its platform.

"The first video, which appears to have been slowed down, received a total of nine Retweets, 17 Likes, and 797 video views as of July 31, 2019," Monje wrote, likely referencing the video that drew thousands of views on Facebook.

"A second video, shared by the account of President Trump, appears to be an edited collection of clips of Speaker Pelosi," he added. "This Tweet received approximately 31,107 Retweets, nearly 96,000 Likes, and approximately 6.37 million video views as of July 31, 2019."

The letters signal the platforms are still weighing how to deal with the issue of videos that have been selectively edited and promoted for political purposes.

"We are always looking into new potential threats related to personal or societal harm arising from new technologies, including this one, and may further update our policies in the future if we identify gaps that are not currently covered by our existing rules or systems," Karan Bhatia, Google's vice president of government affairs and public policy, wrote. He noted that Google-owned YouTube has been working to combat manipulated media since its "early days," pointing to the platform's "deceptive practices" policies as an effort to address the issue.

And Facebook noted that its CEO Mark Zuckerberg has publicly said the company may draw up a new policy for deepfakes specifically.

"We have recently engaged with more than 50 global experts with technical, policy, media, legal and academic backgrounds to inform our policy development process," Kevin Martin, Facebook's vice president of U.S. public policy, wrote.

Congress has held multiple hearings into the issue of deepfakes and misinformation online this year, and last week, a Senate panel approved bipartisan legislation that would direct the Department of Homeland Security to conduct an annual study of deepfakes and similar content.

Schiff said the letters from Facebook, Google and Twitter "make clear that the platforms have begun thinking seriously about the challenges posed by machine-manipulated media, or deepfakes, but that there is much more work to be done if they are to be prepared for the disruptive effect of this technology in the next election."

"It is incumbent on companies in this space to design policies and technologies that prevent their platforms from being weaponized by malicious actors, whether foreign or domestic, and it's clear they are far from ready to accomplish that," he said.