When Adam Hildreth started Dubit, an early social network for children during the last tech boom, he never imagined that he would end up at the forefront of fighting fake news influencing elections.
But the teen entrepreneur of 1999 has grown up to become the chief executive of Crisp, trying to apply what he learnt moderating cyberbullying to stop the spread of terrorist content online — and now experimenting with how to combat misinformation campaigns.
Crisp helps brands protect their reputation on social media, using an algorithm to crawl the internet — including the dark web — to understand who is distributing content online.
“It is the opposite of a search engine; it graphs all the places you don’t want to visit, all the places advertisers don’t want to be seen,” says Mr Hildreth.
Crisp is one of a group of companies starting to see an opportunity in either helping the social platforms tackle the problem or working for the victims of fake news, be they companies or governments.
While large internet companies such as Facebook, Google and Twitter struggle to find ways to stop the spread of misinformation online without abandoning their algorithms or business models, smaller start-ups are looking for ways to help clients willing to pay for extra help fighting fake news.
Some, such as Crisp and New Knowledge, started out fighting terrorism. Others, such as Cisco and Digital Shadows, see parallels with cyber security, using tactics developed to defend against hackers to battle fake news.
Crisp, based in Leeds, has 120 employees and 300 contractors who help train its technology. It has been working with social platforms to try to make moderation more efficient. Under political pressure to show they are taking misinformation campaigns seriously, both Facebook and Google have announced significant expansions of their moderation teams in recent months.
“The big challenge is that so much is uploaded every minute,” Mr Hildreth says.
Crisp also helps brands look for anything that could damage their reputation, including real or fake news.
Jonathon Morgan, chief executive of New Knowledge, a start-up based in Austin, Texas, was a professional blogger who became an expert on Isis’s use of social media. Now he is trying to help companies, political campaigns and social justice organisations understand how online communities can be manipulated.
New Knowledge has seen revenues double in the six months since it started focusing on misinformation. It uses machine learning to identify bots and break conversations down into topics, spotting when the language used to discuss a topic shifts, a sign that a community may be changing its beliefs.
Mr Morgan says that if organisations spot the misinformation early enough, they can take action.
“Let’s say we detect early on that people are working together to push a narrative that Beyoncé is a Russian spy,” he says. “That’s ridiculous. So if we see that early enough, before it is trending on Twitter or on InfoWars or Fox, we can come up with an alternative: Beyoncé is an American patriot.”
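New Knowledge has not published its models, but the underlying idea can be sketched simply: compare the vocabulary a community uses around a topic across two time windows and flag a sharp divergence as a possible narrative shift. The posts, tokenisation and threshold below are invented for illustration.

```python
# Illustrative sketch (not New Knowledge's actual system): flag a possible
# narrative shift by comparing word distributions across two time windows.
from collections import Counter
import math

def word_dist(posts):
    """Normalised word-frequency distribution over a list of posts."""
    counts = Counter(w.lower() for p in posts for w in p.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence (in bits) between two word distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a):
        return sum(a[w] * math.log2(a[w] / m[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

# Hypothetical data: posts about the same topic from two consecutive weeks.
last_week = ["beyonce releases new album", "beyonce tour dates announced"]
this_week = ["beyonce russian spy claims", "spy rumours about beyonce spread"]

score = js_divergence(word_dist(last_week), word_dist(this_week))
if score > 0.5:  # threshold is an arbitrary illustration
    print(f"possible narrative shift (JSD={score:.2f})")
```

In practice a signal like this would be combined with bot detection and network analysis rather than relied on alone.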
Cisco, the networking equipment company with a large cyber security arm, won the Fake News Challenge, a competition to design technologies that help people detect the “stance” of a news article.
Researchers used natural language processing, whereby a computer is taught to understand the nuances of human language, to detect whether a headline is related to the body of the text, because many fake news stories copy the clickbait model to lure people to a website. The team won by combining neural networks, machine learning techniques inspired by biological processes, with decision trees, a predictive modelling approach.
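The winning system itself was far more elaborate; as a much-simplified sketch of the stance-detection idea, one could feed a single headline-body similarity feature to a gradient-boosted decision-tree classifier, as below. The labelled pairs are made up for illustration.

```python
# Much-simplified stance-detection sketch (not Cisco's winning system):
# one TF-IDF similarity feature fed to a decision-tree ensemble.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical labelled pairs: 1 = headline related to body, 0 = unrelated.
pairs = [
    ("markets rally on rate cut", "stocks rose sharply after the rate cut", 1),
    ("markets rally on rate cut", "the recipe calls for two eggs and flour", 0),
    ("new vaccine approved", "regulators approved the vaccine on friday", 1),
    ("new vaccine approved", "the football match ended in a draw", 0),
]

vec = TfidfVectorizer().fit([h for h, b, _ in pairs] + [b for h, b, _ in pairs])

def features(headline, body):
    """One feature: TF-IDF cosine similarity between headline and body."""
    h, b = vec.transform([headline]), vec.transform([body])
    return [cosine_similarity(h, b)[0, 0]]

X = [features(h, b) for h, b, _ in pairs]
y = [label for _, _, label in pairs]

clf = GradientBoostingClassifier().fit(X, y)
print(clf.predict([features("markets rally on rate cut",
                            "shares climbed after the cut")]))
```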
Digital Shadows, a San Francisco and London-based cyber security company, specialises in understanding what hackers are doing on the dark web. Clients often turn to it to monitor whether, for example, large databases of their customer data are for sale, evidence that they have suffered a security breach. It combines technology with threat intelligence analysts, some of whom have military backgrounds and many of whom speak foreign languages.
Alastair Paterson, chief executive of Digital Shadows, says the fake news spread during the US election used techniques similar to those of hacking groups.
“There’s an interesting crossover between social media and cyber security right now more than ever before,” he says. “Social networks have so far been very impotent in doing anything about it.”
Digital Shadows counts broadcasters among its clients. For some of the largest organisations, it has identified and issued takedown notices for fake websites and social media accounts in more than 100 separate incidents.
For other companies, it also finds fake domains and social media profiles. It once found an entire phoney arm of a Dutch company that had been set up online.
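Digital Shadows has not described its matching logic in detail; a common starting point for spotting phoney domains, sketched below, is to flag newly seen names that sit within a small edit distance of a protected brand. The brand name, domain feed and threshold here are hypothetical.

```python
# Illustrative typosquat check (not Digital Shadows' actual method):
# flag domains within a small edit distance of a protected brand name.
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

BRAND = "example"  # hypothetical protected brand name

# Hypothetical feed of newly registered domain names (TLD stripped).
new_domains = ["examp1e", "exarnple", "shopping-deals", "example-support"]

for name in new_domains:
    if edit_distance(name.replace("-", ""), BRAND) <= 2 or BRAND in name:
        print(f"possible impersonation: {name}")
```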
Distil Networks is another cyber security company whose skills could help solve the problems faced by social media platforms. It specialises in detecting bots, which are often used to amplify a message in the hope that it trends online.
Edward Roberts, director of product marketing at Distil Networks, says bots are becoming increasingly clever as they learn to evade detection by mimicking human behaviour. “They are pausing on pages for random periods of time, they are clicking through at different rates, they are moving their mouse in less automated ways.”
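Distil’s detectors are proprietary, but the timing signal Mr Roberts describes can be illustrated with a toy heuristic: naive bots tend to request pages at near-constant intervals, while humans pause irregularly. The sessions and cut-off below are invented.

```python
# Toy illustration of one timing signal (not Distil's detector): naive bots
# often request pages at near-constant intervals; humans pause irregularly.
import statistics

def looks_automated(timestamps, cv_cutoff=0.1):
    """Flag a session whose inter-request gaps are suspiciously regular.

    cv_cutoff is the coefficient of variation (stdev/mean) below which
    gaps count as too uniform; the value is an assumption.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # too few requests to judge
    mean = statistics.mean(gaps)
    return mean > 0 and statistics.stdev(gaps) / mean < cv_cutoff

# Hypothetical sessions: request times in seconds since session start.
bot_session = [0.0, 2.0, 4.01, 6.0, 8.02]      # metronome-like clicks
human_session = [0.0, 3.4, 11.8, 13.1, 41.5]   # irregular pauses

print(looks_automated(bot_session))    # True
print(looks_automated(human_session))  # False
```

A real detector would weigh many such signals together, including the click-rate and mouse-movement cues Mr Roberts mentions.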
But, he says, it is good that social media platforms have realised they have a problem, because they can find ways to identify bots in the same way rogue messages and emails are tagged as spam.
“Now today, we rarely see spam, it all goes to the spam folder,” he says. “It is probably not an existential threat they are dealing with.”