Google Will Hire Thousands Of Moderators To Control Abusive Content
6 Dec, 2017 / 12:11 PM / OMNES News

Source: https://www.theguardian.com

Google is hiring thousands of new moderators after facing widespread criticism for allowing child abuse videos and other violent and offensive content to flourish on YouTube.

YouTube’s owner announced on Monday that next year it would expand its total workforce to more than 10,000 people responsible for reviewing content that could violate its policies. The news from YouTube’s CEO, Susan Wojcicki, followed a steady stream of negative press surrounding the site’s role in spreading harassing videos, misinformation, hate speech and content that is harmful to children.

Wojcicki said that in addition to an increase in human moderators, YouTube is continuing to develop advanced machine-learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and block hate speech on the site were modeled after the company’s ongoing work to fight violent extremist content.

“Human reviewers remain essential to both removing content and training machine learning systems because human judgment is critical to making contextualized decisions on content,” the CEO wrote in a blogpost, saying that moderators have manually reviewed nearly 2m videos for violent extremist content since June, helping train machine-learning systems to identify similar footage in the future.

In recent weeks, YouTube has used machine-learning technology to help human moderators find and shut down hundreds of accounts and hundreds of thousands of comments, according to Wojcicki.

YouTube faced heightened scrutiny last month in the wake of reports that it was allowing violent content to slip past the YouTube Kids filter, which is supposed to block any content that is not appropriate for young users. Some parents recently discovered that YouTube Kids was allowing children to see videos with familiar characters in violent or lewd scenarios, along with nursery rhymes mixed with disturbing imagery, according to the New York Times.

Other reports uncovered “verified” channels featuring child exploitation videos, including viral footage of screaming children being mock-tortured and webcam footage of young girls in revealing clothing.

YouTube has also repeatedly sparked outrage for its role in perpetuating misinformation and harassing videos in the wake of mass shootings and other national tragedies. The Guardian found that survivors and the relatives of victims of numerous shootings have been subject to a wide range of online abuse and threats, some tied to conspiracy theories featured prominently on YouTube.

Some parents of people killed in high-profile shootings have spent countless hours trying to report abusive videos about their deceased children and have repeatedly called on Google to hire more moderators and to better enforce its policies. It’s unclear, however, how the expansion of moderators announced on Monday might affect this kind of content, since YouTube said it was focused on hate speech and child safety.

Although the recent scandals have illustrated the limits of its algorithms in detecting and removing content that violates its policies, Wojcicki made clear that YouTube would continue to rely heavily on machine learning, which she argued is necessary given the scale of the problem.

YouTube said machine learning was helping its human moderators remove nearly five times as many videos as they were previously, and that 98% of videos removed for violent extremism are now flagged by algorithms. Wojcicki claimed that advances in the technology allowed the site to take down nearly 70% of violent extremist content within eight hours of it being uploaded.

The statement also said YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers. Last month, a number of high-profile brands suspended YouTube and Google advertising after reports revealed that their ads were placed alongside videos filled with exploitative and sexually explicit comments about children.

In March, a number of corporations also pulled their YouTube ads after learning that they were linked to videos with hate speech and extremist content.