Facebook, Google and Twitter all indicated this month that they are considering writing policies specifically about deepfake videos after lawmakers voiced concerns over a doctored video of Speaker Nancy Pelosi (D-Calif.) that was left online.
The trio of top social media companies told Rep. Adam Schiff (D-Calif.), the chairman of the House Intelligence Committee, that they are looking into whether it's necessary to tweak their policies to account for the rise of deepfakes – videos or images manipulated by AI to make it appear as though people are doing or saying certain things.
The issue vaulted into the spotlight earlier this year when Facebook declined to take down a user-posted video of Pelosi that was slowed down and edited to make it appear as though she was slurring her words. The hundreds of comments on the video indicated viewers thought the video showed Pelosi in real time. Shortly after, President Trump shared a video edited to make it seem like Pelosi was stumbling over her words.
The videos of Pelosi were not deepfakes, as they were crudely edited rather than generated by AI, but they reignited a larger conversation about how social media companies deal with manipulated footage, something that is expected to play a growing role in the upcoming presidential election.
Schiff pressed the companies over the issue in letters on June 15. The responses from the companies are dated July 31.
Twitter's director of public policy, Carlos Monje, wrote that the company is "carefully investigating how to handle manipulated media and agree that manipulated media can pose serious threats in certain circumstances."
"The solutions we develop will need to protect the rights of people to engage in parody, satire and political commentary," he added.
He said Twitter has identified two edited videos of Pelosi circulating on its platform.
"The first video, which appears to have been slowed down, received a total of nine Retweets, 17 Likes, and 797 video views as of July 31, 2019," Monje wrote, likely referencing the video that drew thousands of views on Facebook.
"A second video, shared by the account of President Trump, appears to be an edited collection of clips of Speaker Pelosi," he added. "This Tweet received approximately 31,107 Retweets, nearly 96,000 Likes, and approximately 6.37 million video views as of July 31, 2019."
The letters signal the platforms are still weighing how to deal with the issue of videos that have been selectively edited and promoted for political purposes.
"We are always looking into new potential threats related to personal or societal harm arising from new technologies, including this one, and may further update our policies in the future if we identify gaps that are not currently covered by our existing rules or systems," Karan Bhatia, Google's vice president of government affairs and public policy, wrote. He noted that Google-owned YouTube has been working to combat manipulated media since its "early days," pointing to the platform's "deceptive practices" policies as an effort to address the issue.
And Facebook noted that its CEO Mark Zuckerberg has publicly said the company may work up a new policy for deepfakes specifically.
"We have recently engaged with more than 50 global experts with technical, policy, media, legal and academic backgrounds to inform our policy development process," Kevin Martin, Facebook's vice president of U.S. public policy, wrote.
Congress has held multiple hearings into the issue of deepfakes and misinformation online this year, and last week, a Senate panel approved bipartisan legislation that would direct the Department of Homeland Security to conduct an annual study of deepfakes and similar content.
Schiff said the letters from Facebook, Google and Twitter "make clear that the platforms have begun thinking seriously about the challenges posed by machine-manipulated media, or deepfakes, but that there is much more work to be done if they are to be prepared for the disruptive effect of this technology in the next election."
"It is incumbent on companies in this space to design policies and technologies that prevent their platforms from being weaponized by malicious actors, whether foreign or domestic, and it's clear they are far from ready to accomplish that," he said.