
Facebook Uses AI to Handle Moderation Duties on its Platform
15 Nov, 2020 / 01:21 pm / Omnes Media


Facebook has started using artificial intelligence to handle moderation duties on its platform. Posts thought to violate the company’s rules, which cover everything from spam to hate speech and violent content, are flagged either by users or by machine learning filters. Some very clear-cut cases are dealt with automatically, while the rest go into a queue for review by human moderators.
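
That triage step can be pictured as a simple decision: if the system is nearly certain a post violates policy, act automatically; otherwise, hand it to a person. The Python sketch below illustrates the idea; the confidence threshold, field names, and queue are illustrative assumptions, not Facebook’s actual code.

```python
# A minimal sketch of the triage flow described above, not Facebook's
# pipeline; the 0.99 cutoff and all names are illustrative assumptions.
from dataclasses import dataclass
from queue import Queue

AUTO_ACTION_THRESHOLD = 0.99  # hypothetical cutoff for "very clear-cut" cases

@dataclass
class FlaggedPost:
    post_id: str
    violation_score: float  # classifier confidence that the post breaks a rule

review_queue = Queue()  # posts awaiting human moderators

def triage(post: FlaggedPost) -> str:
    """Act automatically on near-certain violations; queue everything else."""
    if post.violation_score >= AUTO_ACTION_THRESHOLD:
        return "removed automatically"
    review_queue.put(post)
    return "queued for human review"

print(triage(FlaggedPost("p1", 0.999)))  # removed automatically
print(triage(FlaggedPost("p2", 0.60)))   # queued for human review
```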

Facebook employs about 15,000 of these moderators around the world and has been criticized in the past for not giving them enough support, employing them in conditions that can lead to trauma. Their job is to sort through flagged posts and decide whether or not they violate the company’s various policies.

In the past, moderators reviewed posts more or less chronologically, dealing with them in the order they were reported. Now, Facebook says it wants to make sure the most important posts are seen first, and it is using machine learning to help. In the future, a combination of machine learning algorithms will sort this queue, prioritizing posts based on three criteria: their virality, their severity, and the likelihood they are breaking the rules.
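
Ranking a review queue by a combined score is a textbook use of a priority queue. The sketch below shows one way such a ranking could work; the multiplicative combination of the three criteria and every name in it are assumptions, since Facebook has not published its formula.

```python
# A hedged sketch of the prioritization idea: combine the three reported
# criteria into one score and review the highest-scoring post first.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class QueuedPost:
    neg_priority: float  # negated so heapq (a min-heap) pops the worst post first
    post_id: str = field(compare=False)

def priority_score(virality: float, severity: float, violation_likelihood: float) -> float:
    """Each signal assumed normalized to [0, 1]; a viral, severe, likely violation ranks highest."""
    return virality * severity * violation_likelihood

queue: list[QueuedPost] = []
heapq.heappush(queue, QueuedPost(-priority_score(0.9, 0.8, 0.95), "viral-threat"))
heapq.heappush(queue, QueuedPost(-priority_score(0.2, 0.3, 0.60), "low-reach-spam"))
print(heapq.heappop(queue).post_id)  # "viral-threat" is reviewed first
```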

“All content violations will still receive some substantial human review, but we’ll be using this system to better prioritize [that process],” Ryan Barnes, a product manager with Facebook’s community integrity team, told reporters during a press briefing.

Facebook has previously shared some details on how its machine learning filters analyze posts. These systems include a model named “WPIE,” which stands for “whole post integrity embeddings” and takes what Facebook calls a “holistic” approach to assessing content.
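
Facebook has not published WPIE’s implementation, but the “holistic” idea of representing every component of a post as a vector and scoring them jointly, rather than one piece at a time, can be sketched loosely as follows. The hash-based encoder, mean fusion, and cosine scorer here are invented stand-ins for illustration only.

```python
# A loose sketch of a "whole post" embedding: encode each component of a
# post, fuse them into one vector, and score that vector jointly. Every
# function below is a stand-in assumption, not WPIE's actual design.
import hashlib
import numpy as np

DIM = 8  # toy embedding size

def embed(component: str) -> np.ndarray:
    """Stand-in encoder: a deterministic pseudo-embedding per string."""
    seed = int(hashlib.md5(component.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).standard_normal(DIM)

def whole_post_embedding(text: str, image_caption: str, comments: list[str]) -> np.ndarray:
    """Fuse every component of a post into a single vector (here, by averaging)."""
    parts = [embed(text), embed(image_caption)] + [embed(c) for c in comments]
    return np.mean(parts, axis=0)

def violation_score(post_vec: np.ndarray, policy_vec: np.ndarray) -> float:
    """Toy scorer: cosine similarity against a hypothetical 'violation' direction."""
    return float(post_vec @ policy_vec /
                 (np.linalg.norm(post_vec) * np.linalg.norm(policy_vec)))

post = whole_post_embedding("post text", "image caption", ["first comment"])
print(round(violation_score(post, embed("hypothetical policy direction")), 3))
```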

Facebook’s use of AI to moderate its platforms has come under scrutiny in the past, with critics noting that artificial intelligence lacks a human’s capacity to judge the context of much online communication. With topics like misinformation, bullying, and harassment in particular, it can be nearly impossible for a computer to know what it is looking at.

Facebook’s Chris Palow, a software engineer in the company’s interaction integrity team, agreed that AI had its limits, but told reporters that the technology could still play a role in removing unwanted content. “The system is about marrying AI and human reviewers to make less total mistakes,” said Palow. “The AI is never going to be perfect.”

Source: The Verge

Country: U.S.