How Advertisers Can Fight Fake News Instead Of Funding It
1 Sep, 2017 / 05:00 pm / OMNES News

Source: http://www.thedrum.com

“Facebook has a moral duty to prioritise veracity over virality,” according to Jon Snow.

The Channel 4 News anchor took aim at the social network’s unwillingness to tackle the problem of fake news during a speech at the Edinburgh TV Festival last week.

Citing a bogus news report that claimed the Pope had endorsed Donald Trump for the presidency and engaged a million users when it was shared on Facebook, Snow suggested the platform had “prioritised fakery on a massive scale,” and that its reluctance to take action against fake news posed a real threat to democracy.

But it is not just Facebook that must act against fictitious content. Through the mechanism of programmatic advertising, advertisers are unwittingly fuelling the propagation of fake news. They must take steps to reverse this trend, cutting off funding for false reporting at the source and ensuring the safety and reputation of their brand are not compromised by deceptive content.

So, how is the digital ecosystem supporting fake news and how can advertisers ensure they don’t end up paying the price for appearing next to inappropriate content?

Facebook and other social networks are instrumental in the rise of fake news because they enable the rapid dissemination of content to large audiences, at nominal cost and with minimal regulation.

The more controversial the content, the more attention it receives, and in the aftermath of the U.S. presidential election BuzzFeed revealed that fake news on Facebook significantly outperformed genuine news for user engagement. When mass social audiences follow links to websites containing fake content, the programmatic ads served on those sites generate revenue for the publisher, which – aside from occasional political motivations – is the main purpose of fake news.

Facebook could argue it is already taking action, most recently by blocking pages that repeatedly share fake news from advertising on the platform, in addition to disallowing ads that link to fake news, and displaying publisher logos next to shared links. Now advertisers need to play their part in keeping their programmatic ads away from spurious content.

When ads appear alongside fake news, the brand is harmed in two ways: the advertiser is funding the creation of false content, and the placement damages brand perception by giving the impression that the advertiser believes in, and endorses, the story.

A BrightRoll survey reveals 96% of advertisers are worried about fake news in programmatic advertising, but this concern is not great enough for them to give up on programmatic and lose the benefits it provides in reach, efficiency, and precise audience targeting. When brands choose programmatic, they inevitably sacrifice a degree of control over ad placement, and they must implement extra measures to avoid their ads appearing alongside undesirable content.

Emerging solutions to fake news

As issues of ad misplacement become commonplace, an abundance of brand safety providers has emerged, all promising to protect brands from associations with damaging content.

Unfortunately, the majority of these providers rely on outdated and ineffective brand safety techniques such as keyword filtering, which offers no guarantee of brand safety. For example, a fake news story reporting that actor Scott Baio had been killed in a plane crash included keywords such as ‘plane’, ‘golf’, and ‘Mar-a-Lago’, which would have travel brands opening their wallets, only to find themselves advertising next to a sick hoax.
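To see why, consider a deliberately simplified keyword matcher of the kind such providers rely on. The keyword lists and page text below are invented for illustration, not any vendor’s actual rules:

```python
# Illustrative sketch only: naive keyword matching as described above.
# The keyword lists and the sample page text are invented for illustration.

TRAVEL_TARGET_KEYWORDS = {"plane", "golf", "resort", "mar-a-lago"}
BLOCKED_KEYWORDS = {"killed", "crash", "terror"}  # a typical crude blocklist

def keyword_verdict(page_text: str) -> str:
    words = {w.strip(".,!?'\"").lower() for w in page_text.split()}
    if words & BLOCKED_KEYWORDS:
        return "block"
    if words & TRAVEL_TARGET_KEYWORDS:
        return "target"
    return "ignore"

# A hoax page phrased around travel terms is actively targeted, not blocked,
# because the matcher has no sense of what the page as a whole actually says.
hoax = "Actor spotted at Mar-a-Lago golf resort before boarding private plane, insiders say he has passed away"
print(keyword_verdict(hoax))  # "target"
```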

Similarly, techniques such as domain-level blacklisting are only effective if the solution is comprehensive enough to weed out all ambiguous sites. Some blacklisting ‘solutions’ work on the assumption that all fake news lives on low-quality websites such as gossip forums, and that blocking these types of sites will keep brands safe, but this is not necessarily the case. Sometimes fake news is reported in good faith by reputable news sites.

For instance, the news that the late founder of Corona beer had left €2m to each resident of his home village was reported by The Independent and The Metro – among others – before it was revealed to be false. In this instance, a fairly harmless story got out of hand, but the situation illustrates that fake news is not confined to less-than-desirable websites.
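A minimal sketch makes the limitation plain: domain-level blacklisting judges the site, not the story, so a false article syndicated by a reputable outlet passes every check. The blocklist and URL below are hypothetical:

```python
# Illustrative only: domain-level blacklisting. The blocked domains and the
# example URL are hypothetical and stand in for the scenario described above.
from urllib.parse import urlparse

BLACKLISTED_DOMAINS = {"celeb-gossip-example.com", "clickbait-example.net"}

def domain_allowed(url: str) -> bool:
    return urlparse(url).netloc.lower() not in BLACKLISTED_DOMAINS

# A fabricated story republished in good faith by a reputable site sails
# through, because the check never looks at the content of the page itself.
print(domain_allowed("https://www.reputable-news-example.co.uk/corona-founder-story"))  # True
```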

Even where fake news is published on low quality sites, techniques such as domain spoofing – which was used to great effect during the Methbot fraud – can trick advertisers into thinking their ads are being placed on premium publisher websites.

Machine learning takes charge

Fortunately, the industry is now seeing the emergence of more advanced tools that use intelligent machine learning and semantic targeting to deal with this issue.

There is naturally some scepticism about the use of algorithms to identify fake news. After all, if humans are taken in by it, how are machines supposed to tell it apart from genuine content?

And with so many different definitions of fake news – from entirely fabricated content right through to genuine news reported in a biased way to serve a particular agenda – how can machines possibly distinguish fact from fiction?

The truth is machines have a huge advantage over humans in spotting fake news. Because of the sheer volume of content they can analyse – far more than any human could read in their lifetime – they can identify patterns and word groupings associated with fake news.

These solutions use machine learning algorithms to interpret content at page level, determining its true meaning and context. They identify which elements of the text are relevant and analyse the relationships between individual words and phrases, using a semantic algorithm trained on independent data to produce a contextually focused classification.
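The article does not name a specific algorithm, but a toy stand-in gives a feel for the pipeline: represent each page’s text numerically, then train a classifier on labelled examples so new pages can be scored at page level. The training data here is invented and far too small to be useful in practice:

```python
# Toy sketch of page-level content classification; real brand-safety systems
# use much richer semantic models and vastly more training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented, labelled training examples: page text paired with a class label.
pages = [
    "Pope shocks world by endorsing presidential candidate, sources claim",
    "Central bank holds interest rates steady after quarterly review",
    "Actor killed in plane crash at luxury golf resort, insiders reveal",
    "Local council approves budget for new cycling infrastructure",
]
labels = ["fake", "genuine", "fake", "genuine"]

page_classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
page_classifier.fit(pages, labels)

new_page = "Celebrity secretly buys entire island, anonymous insider reveals"
print(page_classifier.predict([new_page])[0])  # scored before any ad is served
```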

Semantic classifications can be used in conjunction with keyword identification models to increase accuracy, and can incorporate standards provided by the Interactive Advertising Bureau (IAB) or other anti-media-bias institutions. This content analysis takes place at a pre-bid level, so a page’s context is understood before any programmatic bid is placed.
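Put together, a pre-bid check of the kind described might look roughly like the sketch below, combining the semantic classification with a keyword pass before any bid is submitted. The function names, category labels, and placeholder bidder are illustrative, not any exchange’s real API:

```python
# Hypothetical pre-bid brand-safety check: the structure is illustrative and
# does not represent any particular vendor's implementation.

def safe_to_bid(page_text: str, classifier, blocked_keywords: set,
                avoided_categories: set) -> bool:
    """Return True only if the page clears both keyword and semantic checks."""
    words = {w.strip(".,!?'\"").lower() for w in page_text.split()}
    if words & blocked_keywords:                     # cheap keyword pass first
        return False
    category = classifier.predict([page_text])[0]    # page-level semantic class
    return category not in avoided_categories        # e.g. {"fake", "controversial"}

# Only impressions that pass this check go forward to the programmatic auction:
# if safe_to_bid(page_text, page_classifier, {"killed", "crash"}, {"fake"}):
#     submit_bid(impression)   # submit_bid is a placeholder for the buyer's bidder
```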

Using these tools, brands can target or steer clear of highly specific categories. While brands want to stay current and buy impressions around trending topics, most also want to avoid highly controversial content – whether fake or genuine – to avoid alienating large sections of their audiences. In addition to general and vertical-specific categories, brands can create customised categories in response to current events that are likely to generate fake news.

Responsibility for eliminating fake news does not lie solely with advertisers; tackling the issue requires a coordinated effort from multiple groups, including governments and platforms such as Facebook. In the meantime, advertisers can take a stand against it and other dubious content by implementing pre-bid machine learning technologies to understand content at page level. This will allow them to enjoy the many benefits of programmatic without risking brand safety or supporting the global epidemic of fake news.