Twitter to label, remove manipulated media

Twitter updated its policy on manipulated media Tuesday, saying it may now label or remove posts containing that kind of content.

The social media platform first circulated the new policy in November to solicit feedback and received more than 6,500 responses, according to the company.

Under the policy announced Tuesday, a three-step test will be used to determine if media violates Twitter rules and how it will be treated. 

“First, we look at whether the media are synthetic or manipulated,” Yoel Roth, Twitter’s head of site integrity, told reporters during a call discussing the policy. 

“Second, we assess whether the media are being shared on Twitter in a deceptive manner. This looks at the behavior around the sharing," he continued.

"And finally, we evaluate whether the content is likely to cause serious harm to people,” Roth added.

Based on those three tests, Twitter will determine whether to label or remove content, according to a table provided by the company.

Twitter may also show a warning to people attempting to share a tweet containing manipulated media, reduce its visibility or provide additional context. The company says it will take all those steps “in most cases” of tweets it labels.

The social media platform will start placing the labels on March 5.

"Deceptive" sharing of posts containing manipulated media will also be banned.

Twitter will evaluate content flagged by users for potential misinformation under the new policy. A spokesperson for the company told The Hill on Tuesday that the platform will not actively locate manipulated media itself.

Twitter’s new policy comes amid growing concern that manipulated photos and videos — dubbed deepfakes when that manipulation is done using artificial intelligence — could be used to distort and misdirect political processes.

Two cases of manipulated media on social media platforms have grabbed headlines this year: a video of House Speaker Nancy Pelosi (D-Calif.) slowed down to make her appear intoxicated, and a video of former Vice President Joe Biden cut to make him appear to be espousing white nationalist talking points.

In both cases, Facebook and Twitter — the platforms that contained the Pelosi and Biden content, respectively — did not remove the videos despite outcry.

Under Twitter’s new rules, the video of Pelosi would be labeled at a minimum, Roth told reporters.

“Since the video is significantly and deceptively altered, we would label it under this policy,” he said. 

“Depending on what the tweet sharing that video says, we might choose to remove specific tweets. So if the tweet is deliberately misleading, and it's framed in a way that is likely to cause harm, we'll remove it. But the baseline for any instance of sharing the video would be that we provide a label.”

The Biden video, which was not altered beyond being cut to appear out of context, would also qualify as manipulated.

“Selective editing is a form of media manipulation that we would take action on under this policy,” Roth said.

Twitter’s new rules come a month after Facebook announced it would ban deepfake videos on the platform.

That policy focuses on videos that have been “edited or synthesized” by technology like artificial intelligence in a way that is not "apparent to an average person,” meaning the two videos discussed above would not be removed under Facebook's deepfake rule.

Updated: 5:23 p.m.