YouTube is tightening requirements for content creators to quell advertiser concerns following the Logan Paul controversy. Under the new rules, channels need more than 1,000 subscribers and 4,000 hours of watch time before they can earn money from ads, while videos in Google Preferred, the program that covers the top 5 percent of most-viewed channels, will be reviewed by humans, not algorithms, before they are monetized.
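As a rough sketch of the new thresholds in practice, here is a minimal eligibility check; the field names are hypothetical, and the two thresholds are treated as the only criteria, which YouTube's actual program does not claim:

```python
from dataclasses import dataclass

# Thresholds from YouTube's announced rules: more than 1,000 subscribers
# and 4,000 hours of watch time before a channel can earn money from ads.
MIN_SUBSCRIBERS = 1_000
MIN_WATCH_HOURS = 4_000

@dataclass
class Channel:
    name: str
    subscribers: int
    watch_hours: float

def is_monetization_eligible(channel: Channel) -> bool:
    """Return True if the channel clears both announced thresholds.

    Illustrative only: the real program applies further review criteria
    that are not modeled here.
    """
    return (channel.subscribers > MIN_SUBSCRIBERS
            and channel.watch_hours >= MIN_WATCH_HOURS)

# Example: a channel that clears the watch-time bar but not the subscriber bar.
print(is_monetization_eligible(Channel("demo", subscribers=850, watch_hours=5_200)))  # False
```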
As the debate around brand safety matures, advertisers fall into one of two camps: those who think Google has taken positive steps to tackle the issue over the past 12 months, and those who believe Google won't exert more control over controversial content for fear of losing money. Advertisers publicly welcomed the latest overhaul but still have lingering questions:
Will advertisers have to pay more for brand safety?
YouTube is rolling out a three-tier “suitability system” that lets advertisers choose how much risk they will accept in the content their ads run against. By taking these steps, Google may reduce YouTube’s reach for advertisers but ensure a safer environment for ads, said Norm Johnston, chief digital officer at Mindshare Worldwide. Chasing broad reach at any cost is what led to brand-safety problems in the first place, he said. But advertisers wonder whether this approach means they will end up paying even more for brand safety.
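For context, a rough sketch of how a three-tier setting could work on the buy side; the tier names, the risk scores and the mapping between them are illustrative assumptions, since the article does not describe how YouTube labels or scores content:

```python
from enum import Enum

class SuitabilityTier(Enum):
    # Hypothetical tier names; YouTube has not published its labels here.
    EXPANDED = 1   # widest reach, highest tolerated content risk
    STANDARD = 2   # default balance of reach and risk
    LIMITED = 3    # narrowest reach, most conservative

def allowed_risk(tier: SuitabilityTier) -> float:
    """Map each tier to a maximum tolerated content-risk score (0 to 1)."""
    return {SuitabilityTier.EXPANDED: 0.8,
            SuitabilityTier.STANDARD: 0.5,
            SuitabilityTier.LIMITED: 0.2}[tier]

# Example: a video scored 0.4 would be eligible under EXPANDED and STANDARD,
# but excluded under LIMITED, trading reach for safety.
video_risk = 0.4
for tier in SuitabilityTier:
    print(tier.name, video_risk <= allowed_risk(tier))
```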
Is AI the solution for better brand safety?
Google has said it will use artificial intelligence to sift through every video uploaded to Google Preferred channels before a second check by employees. Google execs have told agencies that the algorithm has a 99.9 percent success rate in filtering out inappropriate content, according to two agency bosses who spoke on condition of anonymity. However accurate YouTube’s AI sounds, advertisers want to see evidence of it working before they pump money back into Google Preferred. As Peter Wallace, U.K. commercial director at GumGum, said: “There will be questions about the scalability of Google’s proposals given the sheer volume of content being uploaded to YouTube.”
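The flow described, an automated filter first and a human check second, can be sketched as a simple two-stage pipeline; the function names and the stand-in classifiers are illustrative assumptions, not YouTube's actual system:

```python
from typing import Callable, Iterable

def review_pipeline(videos: Iterable[str],
                    ai_flags_inappropriate: Callable[[str], bool],
                    human_approves: Callable[[str], bool]) -> list[str]:
    """Two-stage check as described: an automated filter runs over every
    upload, and only videos that pass it go on to a human reviewer.
    Returns the videos cleared for monetization."""
    cleared = []
    for video in videos:
        if ai_flags_inappropriate(video):   # stage 1: automated filter
            continue
        if human_approves(video):           # stage 2: manual review
            cleared.append(video)
    return cleared

# Toy example with stand-in classifiers.
uploads = ["vlog-001", "prank-002", "tutorial-003"]
print(review_pipeline(uploads,
                      ai_flags_inappropriate=lambda v: "prank" in v,
                      human_approves=lambda v: True))
# ['vlog-001', 'tutorial-003']
```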
How big is the brand-safety risk for viral content?
Google has told advertisers that shielding ads from inappropriate videos is tough because so much of the content it hosts is time-sensitive, attracting most of its views before reviews can catch up. Advertisers want evidence of how big the issue is; they have asked YouTube to disclose what percentage of views come from time-sensitive videos versus those that build an audience over time.
Why won’t YouTube protect brands from inappropriate comments on videos?
Google decided not to turn off comments on children’s and news videos despite pressure from advertisers to do so. The U.K. agency trade body, the Institute of Practitioners in Advertising, had proposed the move when the two sides met behind closed doors last month. Google responded that such a move “wasn’t appropriate,” according to an executive who was there and spoke to Digiday on condition of anonymity.
How detailed will YouTube’s brand-safety reporting be?
YouTube has promised to provide regular transparency reports on brand safety. Advertisers and agencies wonder how granular those insights will be. Dan Larden, global strategic partnerships director at Infectious Media, said the agency is booking campaigns directly with content creators due to its brand-safety concerns on YouTube. “But this comes at a higher price and gives us less reach,” he said.