What Can Be Done About Abuse On Social Media?
13 Dec, 2017 / 11:14 AM / OMNES News

Source: https://www.theguardian.com

By Samuel Gibbs

Internet giants such as Facebook and Twitter have limited legal obligations, but what could Britain do to make them act?

Hosts, not publishers?

Currently, social media firms including Facebook, YouTube and Twitter operate as hosts, rather than publishers, in the UK. As such, they are exempt from legal liability for user-generated content, provided they act to remove illegal material once notified.

How is content removed?

Facebook et al are obliged to remove illegal content when notified of its presence on their platforms. For the vast majority of online hate and intimidation, the current state of play is a process of report, review and remove: a user reports a piece of content, a human reviews it, and it is removed if it violates community standards.
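To make the sequence concrete, here is a toy sketch of that report, review and remove flow; the queue, function names and data shapes are all invented for illustration and stand in for far larger real systems:

```python
from enum import Enum

class Status(Enum):
    LIVE = "live"
    REPORTED = "reported"
    REMOVED = "removed"

# Hypothetical in-memory queue standing in for a real moderation system.
review_queue = []

def report(post):
    """Step 1: a user flags a piece of content for review."""
    post["status"] = Status.REPORTED
    review_queue.append(post)

def review(violates_standards):
    """Steps 2-3: a human reviewer checks each reported post and
    removes anything that breaches community standards."""
    while review_queue:
        post = review_queue.pop(0)
        post["status"] = Status.REMOVED if violates_standards(post) else Status.LIVE

post = {"text": "an abusive message", "status": Status.LIVE}
report(post)
review(lambda p: "abusive" in p["text"])
print(post["status"])  # Status.REMOVED
```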

What about automated take-downs?

Automated systems are often used as part of the review process but, in some cases, they are also used to remove content proactively. A group of the big US tech firms, the Global Internet Forum to Counter Terrorism, has created a database of 40,000 known pieces of terrorist content, which allows companies to use digital signatures to identify and remove matching uploads faster, aiming for under two hours from upload.
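A minimal sketch of how such signature matching might work, assuming a simple exact-match hash database; the names here are invented, and real systems use perceptual hashes that survive re-encoding and small edits rather than the plain SHA-256 used to keep this self-contained:

```python
import hashlib

# Hypothetical shared database of digital signatures of known content;
# this entry is the SHA-256 of the sample upload below.
known_hashes = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def signature(content: bytes) -> str:
    """Compute a digital signature for an uploaded file.

    Assumption: real systems use perceptual hashing so edited copies
    still match; SHA-256 is used here only as a stand-in.
    """
    return hashlib.sha256(content).hexdigest()

def should_block(upload: bytes) -> bool:
    """Return True if the upload matches a known piece of content."""
    return signature(upload) in known_hashes

# Example: checking an upload at ingestion time, before it goes live.
if should_block(b"test"):
    print("Matched known content: refuse or remove the upload")
else:
    print("No match: proceed to the normal review pipeline")
```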

What tools are available to protect yourself?

Most social media platforms offer the ability to block or mute individuals, filter out certain phrases or keywords, and report content and accounts for harassment.
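As a rough illustration, keyword muting can be as simple as the following sketch; the phrase list and function name are hypothetical, and real filters also normalise spelling variants and deliberately obfuscated text:

```python
# Hypothetical muted-phrase list, similar in spirit to the keyword
# muting most platforms offer.
muted_phrases = {"spoiler", "keyword to hide"}

def is_muted(message: str) -> bool:
    """Hide a message if it contains any muted phrase (case-insensitive)."""
    text = message.lower()
    return any(phrase in text for phrase in muted_phrases)

inbox = ["hello!", "big SPOILER ahead", "see you tomorrow"]
visible = [m for m in inbox if not is_muted(m)]
print(visible)  # ['hello!', 'see you tomorrow']
```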

Twitter has anti-abuse filters that block notifications from certain types of accounts, such as those not verified with a phone number or email address, and it temporarily freezes accounts where its machine-learning systems detect signals of abuse.
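A speculative sketch of that kind of notification filtering, assuming a simple account record with verification flags and a model-derived abuse score; the data model and threshold are invented for illustration, not Twitter's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    has_verified_phone: bool
    has_verified_email: bool
    abuse_score: float  # 0.0-1.0, e.g. output of an abuse-detection model

def suppress_notification(sender: Account) -> bool:
    """Mirror the heuristics described above: hide notifications from
    accounts with no verified contact details or strong abuse signals."""
    unverified = not (sender.has_verified_phone or sender.has_verified_email)
    return unverified or sender.abuse_score > 0.9  # threshold is illustrative

troll = Account("@throwaway123", False, False, 0.95)
friend = Account("@alice", True, True, 0.01)
print(suppress_notification(troll))   # True: notification filtered out
print(suppress_notification(friend))  # False: notification delivered
```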

Public figures often get stronger tools than the average user, with more advanced filtering. Twitter’s “quality filter”, available only to “verified” public-figure accounts, is a more aggressive version of those anti-abuse filters, for instance.

Can you escalate cases to the police?

Social media companies can directly report incidents to the police, but most harassment is left to the victim to report. Some companies make that easier than others. Twitter will provide a summary email of links to messages that can be forwarded to police, while Facebook has no such system in place.

Prosecution for harassment can result in up to six months’ imprisonment and a fine, and threats to kill carry a possible sentence of 10 years’ imprisonment, but attribution is difficult.

Social media platforms can be used anonymously or with fake profiles, with little in the way of verification. At the same time, harassment originating in other jurisdictions makes prosecution of offenders difficult.

What are others doing?

Germany is leading the way with legislation. Its new Network Enforcement Act, passed in June, requires social networks with 2 million or more users to remove content that is “clearly illegal” within 24 hours of being notified of it, with fines of up to €50m (£44m) for non-compliance.

The EU also recently warned tech firms that they must remove hate speech and extremist content faster, or face regulation that would require greater use of automatic detection systems.

How important is the UK to social media companies?

The scale of many social media platforms means the UK is a relatively small part of their community. Facebook has more than 2 billion monthly active users, of whom only around 2%, or 40 million, are in the UK. Of Facebook’s approximately £20bn in global revenue in 2016, the UK accounted for under £850m.

In many ways, that leaves the UK as relatively small fry on such a global scale. While tough legislation could bring fines and political pressure, experts consider pan-European action by the European Commission the only realistic route to effective regulation of social media companies.