Facebook took action against tens of millions of posts, photos, and videos over the past six months for violating its rules prohibiting hate speech, harassment, and child sexual exploitation, illustrating the vast scale of the tech giant’s task in ridding its services of harm and abuse.
It also said it removed 3.2 billion fake accounts from its service from April to September, up slightly from 3 billion in the previous six months.
Nearly all of the bogus accounts were caught before they had a chance to become “active” users of the social network, so they are not counted in the user figures the company reports regularly. Facebook estimates that about 5 percent of its 2.45 billion user accounts are fake.
The world’s biggest social network also disclosed for the first time how many posts it removed from its popular photo-sharing app Instagram, which disinformation researchers have identified as a growing area of concern for fake news.
Proactive detection of violating content was lower across all categories on Instagram than on Facebook’s flagship app, where the company initially implemented many of its detection tools, the company said in its fourth content moderation report. The company said it proactively detected content affiliated with terrorist organizations 98.5 percent of the time on Facebook and 92.2 percent of the time on Instagram. It removed more than 11.6 million pieces of content depicting child nudity and sexual exploitation of children on Facebook and 754,000 pieces on Instagram during the third quarter.
Facebook also included data for the first time on actions it took against content involving self-harm. It said it removed about 2.5 million posts in the third quarter that depicted or encouraged suicide or self-injury. The company also removed about 4.4 million pieces of content involving drug sales during the quarter, it said in a blog post.
The company revealed the data as part of its latest transparency report, which Facebook said reflected its still-improving efforts to use artificial intelligence to spot harmful content before users ever see it and outwit those who try to evade its censors.