Senators urge social media companies to take action against 'deepfake' videos

Sens. Mark Warner (D-Va.) and Marco Rubio (R-Fla.) on Wednesday urged major social media platforms to create policies and standards to combat the spread of “deepfake” videos, citing the potential threat to democracy. 

Warner and Rubio sent identical letters detailing their concerns to Facebook, Twitter, TikTok, YouTube, Reddit, LinkedIn, Tumblr, Snapchat, Imgur, Pinterest and Twitch. Deepfake videos are created using artificial intelligence to alter or fabricate footage, manipulating the video's apparent meaning. 
“Given your company’s role as an online media platform, it will be on the front lines in detecting deepfakes, and determining how to handle the publicity surrounding them,” the senators wrote. “We believe it is vital that your organization have plans in place to address the attempted use of these technologies. We urge you to develop industry standards for sharing, removing, archiving, and confronting the sharing of synthetic content.”

The senators criticized the companies for only making “limited progress” in establishing policies around deepfake videos “despite numerous conversations, meetings, and public testimony acknowledging your responsibilities to the public.”

They also cited concerns around deepfakes being “corrosive” to public trust. 

“As concerning as deepfakes and other multimedia manipulation techniques are for the subjects whose actions are falsely portrayed, deepfakes pose an especially grave threat to the public’s trust in the information it consumes; particularly images, and video and audio recordings posted online,” the senators wrote. “If the public can no longer trust recorded events or images, it will have a corrosive impact on our democracy.”

The senators asked the companies to answer questions about their current policies regarding deepfakes, whether they have the technology needed to identify deepfakes, and how they plan to notify victims of deepfake videos. 

Facebook in September published a blog post detailing its steps to identify deepfake videos, including investing $10 million to further research the issue. 

This is not the first time concerns around deepfakes have been raised by Congress. In June, the House Intelligence Committee examined the issue, but members struggled to reach a consensus over how to combat the videos. 

Last week, the House Science, Space and Technology Committee also examined the threat deepfakes pose to online information, and separately approved legislation to increase research into developing tools to combat them. 

Deepfake videos were in the spotlight in May when a video that was slowed down to make House Speaker Nancy Pelosi (D-Calif.) appear to be intoxicated went viral on social media. Facebook refused to take down the video, but did not recommend it on its news feed.