Lawmakers voice skepticism over Facebook’s deepfake ban
Facebook’s global policy chief faced tough questions before a House panel on Wednesday as lawmakers voiced skepticism over the company’s efforts to crack down on manipulated videos known as deepfakes ahead of the 2020 elections.
The social media platform unveiled plans to ban such videos late Monday night, but critics quickly condemned the policy for not going far enough.
Under Facebook’s new guidelines, the video of Speaker Nancy Pelosi (D-Calif.) edited to make her appear intoxicated that went viral last year and the video of former Vice President Joe Biden cut to show him touting white nationalist views would not be taken down.
Rep. Jan Schakowsky (D-Ill.) opened the hearing of the House Energy and Commerce Subcommittee on Consumer Protection and Commerce by slamming Facebook’s new policy.
“Big Tech failed to respond to the grave threats posed by deepfakes, as evidenced by Facebook scrambling to announce a new policy that strikes me as wholly inadequate,” Schakowsky, the subcommittee chairwoman, said, noting that the video of Pelosi has already been viewed millions of times.
Facebook’s new policy bans videos that have been “edited or synthesized” by technology like artificial intelligence in a way that is not “apparent to an average person.”
The social media giant’s vice president of global policy management, Monika Bickert, stressed during Wednesday’s testimony that the new rule is an addition to a broad set of existing community standards intended to target disinformation.
Schakowsky pressed Bickert on whether the new policy would cover the edited Pelosi video.
“It would not fall under that policy, but it would still be subject to our other policies that address misinformation,” Bickert explained.
Democrats on the committee suggested that more stringent regulations would be necessary to deal with deepfakes, but some of their colleagues across the aisle expressed concern about stifling innovation and expression with new regulations.
“Deepfakes and disinformation can be handled with innovation and empowering people with more information,” subcommittee ranking member Cathy McMorris Rodgers (R-Wash.) said in her opening remarks. “It makes far more productive outcomes when people can make the best decisions for themselves, rather than relying on the government to make decisions for them.”
Lawmakers, though, pressed Bickert to better clarify the rules, with one Republican, Rep. Larry Bucshon (Ind.), asking the Facebook executive how the company identifies an “average person” under its guidelines for detecting deepfakes.
Bickert said the company was working with experts to detail the best approach.
“Congressman, these are exactly the questions we’ve been discussing with more than 50 experts as we’ve tried to write this policy,” Bickert replied, adding that Facebook is focused on making more information available to the public.
Critics have also called Facebook’s new policy vague.
“The notion of misleading an average person … in order to evaluate this policy as good, bad or pretty much impotent — which is what I see it as — you would need to know what the process of adjudication is here and how they’re considering these issues,” Britt Paris, an expert on audiovisual manipulation at Rutgers University, told The Hill. “It’s pretty toothless and incompetent language.”
A spokesperson for Facebook declined to clarify the term further to The Hill.
Facebook is one of many tech giants struggling to deal with disinformation on their platforms, but the company has been a particular target for critics over its refusal to take down the edited videos of Pelosi and Biden.
Spokesmen for both Pelosi and Biden were quick to blast Facebook’s policies as being inadequate.
“Facebook’s announcement today is not a policy meant to fix the very real problem of disinformation that is undermining faith in our electoral process, but is instead an illusion of progress,” Biden campaign spokesman Bill Russo told The Hill.
“Facebook’s policy does not get to the core issue of how their platform is being used to spread disinformation, but rather how professionally that disinformation is created. Banning deepfakes should be an incredibly low floor in combating disinformation. Today’s announcement falls short of the mark again,” he said.
Pelosi spokesman Drew Hammill tweeted on Tuesday that the “real problem is Facebook’s refusal to stop the spread of disinformation.”
Rep. Darren Soto (D-Fla.) pressed Facebook about its approach to manipulated videos that don’t qualify as deepfakes during the hearing.
“Why wouldn’t Facebook simply take down the fake Pelosi video?” he asked.
“Our approach is to give people more information, so that if something’s going to be in the public discourse they will know how to assess it, how to contextualize it,” Bickert responded.
“[The Pelosi video] was labeled false at the time, we think we could have gotten that to fact-checkers quicker and we think the label could have been clearer. We now have the label for something that has been rated false [so] you have to click through it, it actually obscures the image.”
The emergence of deepfakes, which use technologies like artificial intelligence and machine learning to manipulate videos, has raised serious concerns from lawmakers.
The technology has not advanced far enough for forged videos to be indistinguishable from real ones, but experts warn that moment is rapidly approaching.
“We’re in a technological arms race, as detection technology improves, so does the deceptive technology,” Rep. Frank Pallone Jr. (D-N.J.), the chairman of the full Energy and Commerce Committee, said during Wednesday’s hearing.
Many experts have warned that along with the increasing quality of deepfakes, the proliferation of artificial intelligence technology may soon allow any online user to create deepfakes.
Amid worries about the future of deepfakes, experts at the hearing said social media platforms should first focus on stopping simpler forms of deceptive content manipulation, such as the Pelosi and Biden videos.
Cheapfakes — like the slowing down or selective cropping used in the Pelosi and Biden videos — “are a wider threat,” Joan Donovan, research director of the Technology and Social Change Project at the Harvard Kennedy School, said at the hearing.
“The world online is the real world, and this crisis of counterfeits threatens to disrupt the way Americans live our real lives,” she continued.
“Right now, malicious actors jeopardize how we make informed decisions about who to vote for and what causes we support, while platform companies’ own products facilitate this manipulation, placing our democracy and economy at significant risk.”
Updated at 4:02 p.m.