
Social Media is Giving Us Trypophobia
28 Jan, 2018 / 11:01 AM / OMNES News

Source: https://techcrunch.com


by Natasha Lomas

Something is rotten in the state of technology

But amid all the hand-wringing over fake news, the cries of election-deforming Kremlin disinformation plots, the calls from political podia for tech giants to locate a social conscience, a knottier realization is taking shape.

Fake news and disinformation are just a few of the symptoms of what’s wrong and what’s rotten. The problem with platform giants is something far more fundamental.

The problem is these vastly powerful algorithmic engines are blackboxes. And, at the business end of the operation, each individual user only sees what each individual user sees.

The great lie of social media has been to claim it shows us the world. And their follow-on deception: That their technology products bring us closer together.

In truth, social media is not a telescopic lens — as the telephone actually was — but an opinion-fracturing prism that shatters social cohesion by replacing a shared public sphere and its dynamically overlapping discourse with a wall of increasingly concentrated filter bubbles.

Social media is not connective tissue but engineered segmentation that treats each pair of human eyeballs as a discrete unit to be plucked out and separated off from its fellows.

Think about it: it’s a trypophobe’s nightmare.

Or the panopticon in reverse — each user bricked into an individual cell that’s surveilled from the platform controller’s tinted glass tower.

Little wonder lies spread and inflate so quickly via products that are not only hyper-accelerating the rate at which information can travel but deliberately pickling people inside a stew of their own prejudices.

First it panders, then it polarizes, then it pushes us apart.

We aren’t so much seeing through a lens darkly when we log onto Facebook or peer at personalized search results on Google, we’re being individually strapped into a custom-moulded headset that’s continuously screening a bespoke movie — in the dark, in a single-seater theatre, without any windows or doors.

Are you feeling claustrophobic yet?

It’s a movie that the algorithmic engine believes you’ll like. Because it’s figured out your favorite actors. It knows what genre you skew to. The nightmares that keep you up at night. The first thing you think about in the morning.

It knows your politics, who your friends are, where you go. It watches you ceaselessly and packages this intelligence into a bespoke, tailor-made, ever-iterating, emotion-tugging product just for you.

Its secret recipe is an infinite blend of your personal likes and dislikes, scraped off the Internet where you unwittingly scatter them. (Your offline habits aren’t safe from its harvest either — it pays data brokers to snitch on those too.)

No one else will ever get to see this movie. Or even know it exists. There are no adverts announcing it’s screening. Why bother putting up billboards for a movie made just for you? Anyway, the personalized content is all but guaranteed to strap you in your seat.

If social media platforms were sausage factories we could at least intercept the delivery lorry on its way out of the gate to probe the chemistry of the flesh-colored substance inside each packet — and find out if it’s really as palatable as they claim.

Of course we’d still have to do that thousands of times to get meaningful data on what was being piped inside each custom sachet. But it could be done.

Alas, platforms involve no such physical product, and leave no such physical trace for us to investigate.

Smoke and mirrors
Understanding platforms’ information-shaping processes would require access to their algorithmic blackboxes. But those are locked up inside corporate HQs — behind big signs marked: ‘Proprietary! No visitors! Commercially sensitive IP!’

Only engineers and owners get to peer in. And even they don’t necessarily always understand the decisions their machines are making.

But how sustainable is this asymmetry? If we, the wider society on whom platforms depend for data, eyeballs, content and revenue (we are their business model), can’t see how we are being divided by what they individually drip-feed us, how can we judge what the technology is doing to us, one and all? How can we figure out how it’s systemizing and reshaping society?

How can we hope to measure its impact? Except when and where we feel its harms.

Without access to meaningful data how can we tell whether time spent here or there or on any of these prejudice-pandering advertiser platforms can ever be said to be “time well spent”?

What does it tell us about the attention-sucking power that tech giants hold over us when — just one example — a train station has to put up signs warning parents to stop looking at their smartphones and point their eyes at their children instead?

Is there a new idiot wind blowing through society all of a sudden? Or are we being unfairly robbed of our attention?

What should we think when tech CEOs confess they don’t want kids in their family anywhere near the products they’re pushing on everyone else? It sure sounds like even they think this stuff might be the new nicotine.

External researchers have been trying their best to map and analyze flows of online opinion and influence in an attempt to quantify platform giants’ societal impacts.

Yet Twitter, for one, actively degrades these efforts by playing pick and choose from its gatekeeper position — rubbishing any studies with results it doesn’t like by claiming the picture is flawed because it’s incomplete.

Why? Because external researchers don’t have access to all its information flows. Why? Because they can’t see how data is shaped by Twitter’s algorithms, or how each individual Twitter user might (or might not) have flipped a content suppression switch which can also — says Twitter — mould the sausage and determine who consumes it.

Why not? Because Twitter doesn’t give outsiders that kind of access. Sorry, didn’t you see the sign?

And when politicians press the company to provide the full picture — based on the data that only Twitter can see — they just get fed more self-selected scraps shaped by Twitter’s corporate self-interest.

(This particular game of ‘whack an awkward question’ / ‘hide the unsightly mole’ could run and run and run. Yet it also doesn’t seem, long term, to be a very politically sustainable one — however much quiz games might be suddenly back in fashion.)

And how can we trust Facebook to create robust and rigorous disclosure systems around political advertising when the company has been shown failing to uphold its existing ad standards?

Mark Zuckerberg wants us to believe we can trust him to do the right thing. Yet he is also the powerful tech CEO who studiously ignored concerns that malicious disinformation was running rampant on his platform. Who even ignored specific warnings that fake news could impact democracy — from some pretty knowledgeable political insiders and mentors too.

Biased blackboxes
Before fake news became an existential crisis for Facebook’s business, Zuckerberg’s standard line of defense to any raised content concern was deflection — that infamous claim ‘we’re not a media company; we’re a tech company’.

Turns out maybe he was right to say that. Because maybe big tech platforms really do require a new type of bespoke regulation. One that reflects the uniquely hypertargeted nature of the individualized product their factories are churning out at — trypophobics look away now! — 4BN+ eyeball scale.

In recent years there have been calls for regulators to have access to algorithmic blackboxes to lift the lids on engines that act on us yet which we (the product) are prevented from seeing (and thus overseeing).

Rising use of AI certainly makes that case stronger, with the risk of prejudices scaling as fast and far as tech platforms if they get blind-baked into commercially privileged blackboxes.

Do we think it’s right and fair to automate disadvantage? At least until the complaints get loud enough and egregious enough that someone somewhere with enough influence notices and cries foul?

Algorithmic accountability should not mean that a critical mass of human suffering is needed to reverse engineer a technological failure. We should absolutely demand proper processes and meaningful accountability. Whatever it takes to get there.

And if powerful platforms are perceived to be footdragging and truth-shaping every time they’re asked to provide answers to questions that scale far beyond their own commercial interests — answers, let me stress it again, that only they hold — then calls to crack open their blackboxes will become a clamor, because they will have full-throated public support.

Lawmakers are already alert to the phrase algorithmic accountability. It’s on their lips and in their rhetoric. Risks are being articulated. Extant harms are being weighed. Algorithmic blackboxes are losing their deflective public sheen — a decade+ into platform giants’ huge hyperpersonalization experiment.

No one would now doubt these platforms impact and shape the public discourse. But, arguably, in recent years, they’ve made the public street coarser, angrier, more outrage-prone, less constructive, as algorithms have rewarded trolls and provocateurs who best played their games.

So all it would take is for enough people — enough ‘users’ — to join the dots and realize what it is that’s been making them feel so uneasy and queasy online — and these products will wither on the vine, as others have before.

There’s no engineering workaround for that either. Even if generative AIs get so good at dreaming up content that they could substitute a significant chunk of humanity’s sweating toil, they’d still never possess the biological eyeballs required to blink forth the ad dollars the tech giants depend on. (The phrase ‘user generated content platform’ should really be bookended with the unmentioned yet entirely salient point: ‘and user consumed’.)

This week the UK prime minister, Theresa May, used a World Economic Forum speech in Davos to slam social media platforms for failing to operate with a social conscience.

And after laying into the likes of Facebook, Twitter and Google — for, as she tells it, facilitating child abuse, modern slavery and spreading terrorist and extremist content — she pointed to an Edelman survey showing a global erosion of trust in social media (and a simultaneous leap in trust for journalism).

Her subtext was clear: Where tech giants are concerned, world leaders now feel both willing and able to sharpen the knives.

Nor was she the only Davos speaker roasting social media.

“Facebook and Google have grown into ever more powerful monopolies, they have become obstacles to innovation, and they have caused a variety of problems of which we are only now beginning to become aware,” said billionaire US philanthropist George Soros, calling — out-and-out — for regulatory action to break the hold platforms have built over us.

And while politicians (and journalists — and most probably Soros too) are used to being roundly hated, tech firms most certainly are not. These companies have basked in the halo that’s perma-attached to the word “innovation” for years. ‘Mainstream backlash’ isn’t in their lexicon. Just like ‘social responsibility’ wasn’t until very recently.

You only have to look at the worry lines etched on Zuckerberg’s face to see how ill-prepared Silicon Valley’s boy kings are to deal with roiling public anger.


Guessing games
The opacity of big tech platforms has another harmful and dehumanizing impact — not just for their data-mined users but for their content creators too.

A platform like YouTube, which depends on a volunteer army of makers to keep content flowing across the countless screens that pull the billions of streams off of its platform (and stream the billions of ad dollars into Google’s coffers), nonetheless operates with an opaque screen pulled down between itself and its creators.

YouTube has a set of content policies which it says its content uploaders must abide by. But Google has not consistently enforced these policies. And a media scandal or an advertiser boycott can trigger sudden spurts of enforcement action that leave creators scrambling not to be shut out in the cold.

One creator, who originally got in touch with TechCrunch because she was given a safety strike on a satirical video about the Tide Pod Challenge, describes being managed by YouTube’s heavily automated systems as an “omnipresent headache” and a dehumanizing guessing game.

“Most of my issues on YouTube are the result of automated ratings, anonymous flags (which are abused) and anonymous, vague help from anonymous email support with limited corrective powers,” Aimee Davison told us. “It will take direct human interaction and negotiation to improve partner relations on YouTube and clear, explicit notice of consistent guidelines.”

“YouTube needs to grade its content adequately without engaging in excessive artistic censorship — and they need to humanize our account management,” she added.

Yet YouTube has not even been doing a good job of managing its most high profile content creators. Aka its ‘YouTube stars’.

But where does the blame really lie when ‘star’ YouTube creator Logan Paul — an erstwhile Preferred Partner on Google’s ad platform — uploads a video of himself making jokes beside the dead body of a suicide victim?

Paul must manage his own conscience. But blame must also scale beyond any one individual who is being algorithmically managed (read: manipulated) on a platform to produce content that literally enriches Google because people are being guided by its reward system.

In Paul’s case YouTube staff had also manually reviewed and approved his video. So even when YouTube claims it has human eyeballs reviewing content, those eyeballs don’t appear to have adequate time and tools to do the work.

And no wonder, given how massive the task is.

Google has said it will increase the headcount of staff carrying out moderation and other enforcement duties to 10,000 this year.

Yet that number is nothing next to the amount of content being uploaded to YouTube. (According to Statista, 400 hours of video were being uploaded to YouTube every minute as of July 2015; it could easily have risen to 600 or 700 hours per minute by now.)
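The scale of that mismatch is easy to quantify from the article’s own figures. A quick back-of-envelope sketch — using the July 2015 Statista number, so the real gap is likely worse:

```python
# Back-of-envelope: could 10,000 moderators keep up with YouTube uploads?
# Inputs are the figures cited above: ~400 hours of video uploaded per
# minute (Statista, July 2015) and a planned moderation headcount of 10,000.
upload_hours_per_minute = 400
moderators = 10_000

hours_uploaded_per_day = upload_hours_per_minute * 60 * 24
hours_per_moderator_per_day = hours_uploaded_per_day / moderators

print(hours_uploaded_per_day)        # 576000 hours of new video per day
print(hours_per_moderator_per_day)   # 57.6 hours each would need to watch daily
```

Even at the 2015 upload rate, every moderator would face roughly two and a half days of continuous footage per calendar day.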

The sheer size of YouTube’s free-to-upload content platform all but makes it impossible to meaningfully moderate.

And that’s an existential problem when the platform’s massive size, pervasive tracking and individualized targeting technology also gives it the power to influence and shape society at large.

The company itself says its 1BN+ users constitute one-third of the entire Internet.

Throw in Google’s preference for hands-off (read: lower cost) algorithmic management of content and some of the societal impacts flowing from the decisions its machines are making are questionable — to put it politely.

Indeed, YouTube’s algorithms have been described by its own staff as having extremist tendencies.

The platform has also been accused of essentially automating online radicalization — by pushing viewers towards increasingly extreme and hateful views. Click on a video about a populist right-wing pundit and you can end up — via algorithmic suggestion — pushed towards a neo-Nazi hate group.

And the company’s suggested fix for this AI extremism problem? Yet more AI…

Yet it’s AI-powered platforms that have been caught amplifying fakes, accelerating hate and incentivizing sociopathy.

And it’s AI-powered moderation systems that are too stupid to judge context and understand nuance like humans do. (Or at least can when they’re given enough time to think.)

Zuckerberg himself said as much a year ago, as the scale of the existential crisis facing his company was beginning to become clear. “It’s worth noting that major advances in AI are required to understand text, photos and videos to judge whether they contain hate speech, graphic violence, sexually explicit content, and more,” he wrote then. “At our current pace of research, we hope to begin handling some of these cases in 2017, but others will not be possible for many years.”

‘Many years’ is tech CEO speak for ‘actually we might not EVER be able to engineer that’.

And if you’re talking about the very hard, very editorial problem of content moderation, identifying terrorism is actually a relatively narrow challenge.

Understanding satire — or even just knowing whether a piece of content has any kind of intrinsic value at all vs being purely worthless, algorithmically groomed junk? Frankly, I wouldn’t hold my breath waiting for the robot that can do that.

Especially not when — across the spectrum — people are crying out for tech firms to show more humanity. And tech firms are still trying to force-feed us more AI.