Source: https://www.theguardian.com
By: John Naughton
Jeremy Paxman, who once served as Newsnight’s answer to the pit-bull terrier, famously outlined his philosophy when interviewing prominent politicians thus: “Why is this lying bastard lying to me?” This was unduly harsh: not all of Paxman’s interviewees were outright liars; many were merely practitioners of the art of being “economical with the truth”. Still, it served as a useful heuristic for a busy interviewer.
Maybe the time has come to apply the same heuristic to Facebook’s public statements. An informative case study is provided by the company’s revelations last week that in the first three months of this year it had discovered – and disabled – 583m fake accounts. This was in addition to “the millions of fake accounts we prevent daily from registering with Facebook”. In the same period, the company also took down 21m pieces of adult nudity and sexual activity, 3.1m pieces of violent content and 2.5m pieces of hate speech.
Facebook divides fake accounts into two categories: misclassified accounts, created by users who set up profiles for businesses, organisations or even pets rather than a proper personal profile; and “undesirable accounts” – user accounts created for antisocial purposes such as spamming, fake news, hate speech and so on. The problem with the figures released last week is that they don’t tell us what proportion of the 583m fell into the “undesirable” category.
Why is this interesting? Answer: it seems to suggest either a dramatic change in external circumstances or that the company – wilfully or unknowingly – underestimated the problem of fake accounts in the past. An analysis of Facebook’s SEC filings, for example, suggests that in 2016 the number of fake accounts was about 18.6m. By 2017, the figure had grown to somewhere in the region of 64m-85m. But now it’s more than half a billion.
Of course, there may be an innocent explanation for all this. Candidates that come to mind include: a significant uptick in political polarisation; the war in Syria; Isis’s pivot from land warfare to cyber; the forthcoming Irish abortion referendum; and Russia’s growing sophistication in weaponising social media. But if Facebook fails to provide more detail the temptation will be to assume that it’s just being economical with the actualité.
Faced with the torrent of hate, filth and propaganda that floods on to its platform, Facebook is betting the ranch on artificial intelligence (AI) as the solution to the problem. The figures released last week suggest that AI works well against some pestilences – detecting spam (nearly 100% success), terrorist propaganda (99.5%), fake accounts (99.5%) and adult nudity and sexual activity (95.8%). But it struggles with hate speech, of which AI caught only 38% – no surprise either to the company or to the rest of us.
All of these numbers apply only to what happens on the Facebook platform as its owner, belatedly, struggles to police it. As it gets a grip on the problem, however, it will run into what is known in the real world as “the CCTV effect”: while surveillance cameras reduce the level of street crime in the spaces they monitor, criminals simply move to places where there are no cameras. And this is exactly what is happening to Facebook: as its main public space becomes better policed, the bad actors are moving to WhatsApp, which Facebook also owns but which is much harder to police, because everything on WhatsApp is encrypted.
Cue the fiercely contested election in the Indian state of Karnataka, in which Narendra Modi’s BJP tried to oust the Indian Congress party from power. The end result, announced last week, was a hung assembly and the likelihood that a coalition of the Congress party and the Janata Dal (Secular) party will form a government.
What was significant about this campaign was the extent to which it was fought not on Facebook but on WhatsApp. Politics in Karnataka, which is predominantly Hindu like most of India, traditionally involves pitting Hindus against the Muslim minority, and various Hindu castes against one another. A perfect environment for weaponising social media, in other words.
Which, as the New York Times reported, is exactly what happened. Rightwing Hindu groups used WhatsApp to disseminate a grisly video that was described as an attack on a Hindu woman by a Muslim mob but was in fact a lynching in Guatemala. One audio recording circulated on the service by an anonymous sender urged all Muslims in the state to vote for the Congress party “for the safety of our women and children”. And of course there were the usual staples of populist propaganda, including fake polls – one, purportedly from the BBC, predicting a BJP landslide.
So even as Facebook catches up with what has been going on under its corporate nose, we find a new can of (encrypted) worms opening. When Facebook offers its next “explanation”, let us – in the spirit of Mr Paxman – ask: why are these tech bastards lying to us?