Facebook says it removed millions of posts over hate speech, child exploitation violations


Facebook on Wednesday announced it has pulled down millions of posts over the past three months for violating its policies against hate speech and child exploitation, marking an increase in the number of posts it took action against amid heightened scrutiny of how the company polices its enormous social networks. 

Facebook’s latest transparency report, which was released on Wednesday, explains in more granular detail than ever before which posts the company is removing from its main social network as well as its popular image-sharing platform Instagram. 

According to the report, between July and September, Facebook took action against 11.6 million posts, images or videos for depicting child sexual exploitation, 7 million for promoting hate speech, 3.2 million for bullying or harassment violations and 5.2 million for sharing terrorist propaganda. Facebook has more than 2.4 billion users.

On Instagram during that time period, the company took action against 754,000 posts depicting child exploitation and 133,000 posts for violating its policies against terrorist content. Facebook’s transparency report did not detail the number of posts removed from Instagram for violating its policies against hate speech or harassment. Instagram has approximately 1 billion users.

On a press call, Facebook CEO Mark Zuckerberg took an implicit shot at rivals including Twitter and Google’s YouTube, which do not offer transparency reports with the same level of detail. 

“Some folks look at the numbers that we’re putting out … and come to the conclusion that because we’re reporting big numbers, that must mean so much more harmful content is happening on our services than others,” Zuckerberg said. “I don’t think that’s what this says at all.” 

He added that he believes the enormous numbers reported show Facebook is “working harder” than other companies to identify, take down and offer details on such content decisions. 

For the first time, the transparency report offered statistics on how many posts related to suicide and self-harm it has taken down or limited. It removed 2.5 million posts on Facebook for depicting or encouraging suicide or self-harm, while it removed about 845,000 posts from Instagram. 

Facebook officials on the call emphasized that the company is walking a fine line, seeking to keep up posts about mental health from people going through recovery while removing those that could trigger other users to hurt themselves. 

Zuckerberg said the company is investing billions of dollars and dedicating over 35,000 employees to dealing with “safety” on the platform.

“This is some of the worst content that’s out there,” he said.

For years, critics have hammered Facebook over the deluge of disturbing and sometimes violent content that spills across the platform, even as the company continues to tighten its content-moderation policies.

The latest report comes as lawmakers on Capitol Hill have increasingly raised the specter of making Facebook legally liable for the content posted on its platforms. Right now, social media companies like Facebook and Twitter are protected from most lawsuits related to the posts and images circulating on their networks thanks to a provision in a 1996 law. But lawmakers have warned that they might tweak that provision, known as Section 230, to ensure social media companies have a legal responsibility to remove posts from terrorists, pedophiles or criminals.

Facebook uses a mixture of human review and artificial intelligence to identify content violations, and it says those automated systems are improving; the company identifies most of the content it removes using software, before users report it.  

“While we are pleased with this progress, these technologies are not perfect and we know that mistakes can still happen,” Facebook’s vice president of integrity, Guy Rosen, wrote in a post on Wednesday.

—Last updated at 3:35 p.m.
