*A workshop with practitioners was held at the LSE on 2nd March 2018. Read a summary of the discussion here.*
Platforms – or networks, as some of them, like Amazon, are better described – play an increasingly central role in our lives. We turn to them for information and entertainment, and increasingly look to them to create and sustain our relationships. They, in turn, have worked hard to create dependency among their users so as to maximise the time we spend on them.
They have changed the way we get our news, enabling new media outlets to thrive while challenging the hegemony of others. As traditional broadcast and print media decline and the smartphone has become ubiquitous in Western countries, news organisations have increasingly turned to platforms in an effort to attract readers.
In the 2000s, this strategy was based on search engine optimisation – trying to ensure that Google ranked their content among the top results for a particular search term. As Facebook and Twitter gained popularity in the 2010s, news organisations sought to make their content ‘viral’ and ‘shareable’, with popular content retweeted or shared many thousands of times. Initially much of this was lifestyle content – an approach pioneered by BuzzFeed – but during the 2016 US presidential campaign, the EU referendum and the 2017 General Election, a great deal of hyperpartisan content was shared. Some of this could be described as misinformation and some as disinformation. Much of it came from sources that would previously have struggled to gain traction among the public.
The extent to which Brexit, Trump’s election and other political contests were influenced by disinformation shared via the platforms is still under investigation. It is unclear whether Facebook profited significantly from hyperpartisan advertising, and many of the accounts sharing this kind of content on Twitter may have been bots. What is clear is that platforms have become a means of disseminating hyperpartisan content and rarely, as their creators hoped, a forum for productive political debate. Users – whether deliberately or not – see a ‘filter bubble’ of news that offers them more of the type of content they have previously liked or discussed. The lack of transparency around the algorithms networks use to show content to different users has been criticised: the Guardian suggested that YouTube’s algorithms may have been deliberately ‘gamed’ before the 2016 US presidential election to spread misinformation about Hillary Clinton.
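To make the ‘filter bubble’ mechanism concrete, here is a deliberately simplified sketch in Python of the feedback loop at work: a feed ranked by past engagement rewards whatever the user already clicks on, narrowing what they see over time. The topics, scores and update rule are all invented for illustration; real platform rankers are far more complex and are not publicly disclosed.

```python
from collections import Counter

def rank_feed(candidate_items, interest_profile):
    """Order items by how well their topic matches past engagement."""
    return sorted(candidate_items,
                  key=lambda item: interest_profile[item["topic"]],
                  reverse=True)

def simulate_feed(days=5):
    items = [
        {"id": 0, "topic": "politics"},
        {"id": 1, "topic": "sport"},
        {"id": 2, "topic": "science"},
        {"id": 3, "topic": "lifestyle"},
    ]
    # Start with only a mild preference for political content.
    profile = Counter({"politics": 2, "sport": 1, "science": 1, "lifestyle": 1})
    for day in range(days):
        feed = rank_feed(items, profile)
        clicked = feed[0]                # the user engages with the top item...
        profile[clicked["topic"]] += 1   # ...which reinforces that topic further
        print(f"day {day}: feed led with {clicked['topic']!r}, profile now {dict(profile)}")

simulate_feed()
```

Run for a few iterations, the initial mild preference compounds into a feed dominated by a single topic – the self-reinforcing dynamic the ‘filter bubble’ critique describes.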
Platforms have also been criticised for failing to remove hate speech (despite signing up to an EU code of conduct in 2016) and for failing to crack down quickly enough on both illegal content (such as child sexual abuse material or incitement to terrorism) and content considered harmful, particularly to young people. The addictive nature of social media tools has begun to come under more scrutiny, notably from the Center for Humane Technology. Some publishers, meanwhile – especially those concerned about their dependency on platforms to distribute content – have stepped up calls for regulation. Unilever, citing consumer pressure, has threatened to pull advertising from Google (which owns YouTube) and Facebook unless they crack down on illegal and extremist content.
Yet calls for the platforms to take a more proactive role in weeding out misinformation and hate speech have raised fears that they might become the ultimate arbiters of what constitutes unacceptable content. The intermediary immunity established by the US Communications Decency Act, as Jonathan Zittrain has written, has had advantages as well as disadvantages. Rebecca MacKinnon of the Ranking Digital Rights project has long pointed out the dangers of handing too much power to networks, and of expecting them to defend free speech in the face of government pressure. Germany’s Network Enforcement Act, which came into full effect in January 2018, has already been denounced by opposition parties for blurring the line between hate speech and legitimate free expression. Platforms themselves have no desire to adopt the role of arbiter, and have lobbied governments against creating such laws. They have, however, sought to fund ‘good’ journalism through initiatives such as the Google Digital News Initiative and the Facebook Journalism Project.
Platforms contend that they are neutral players in an ecosystem created by their users. Governments in Europe have begun to take a different line, passing laws that compel platforms to take down offensive content within a short time, introducing the General Data Protection Regulation to protect privacy, and launching investigations such as the Commons select committee inquiry into ‘fake news’. Facebook now deploys a large number of moderators and AI tools in an effort to control the spread of illegal content. It has experimented, largely unsuccessfully, with flagging up questionable content.
In recent months – with the aim of making time spent on Facebook ‘time well spent’, in CEO Mark Zuckerberg’s words – the platform has announced a number of changes to its News Feed that are likely to have deep ramifications for publishers:
- Users will see less news, unless they actively choose to prioritise an outlet
- Local and hyperlocal news, on the other hand, will be boosted
- Facebook will survey users about the perceived trustworthiness of different news brands, and use this ranking to choose what to show in the News Feed (see the sketch after this list)
- Groups – which are normally private – will be encouraged, because people are increasingly cautious about sharing posts with different social circles.
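The trustworthiness survey invites a simple illustration. The sketch below shows one hypothetical way a surveyed brand-trust score could be blended with predicted engagement when ranking posts. The weighting, field names and scores are all invented; Facebook has not published how it actually combines these signals.

```python
def feed_score(engagement, trust, trust_weight=0.5):
    """Blend predicted engagement with a surveyed brand-trust score (both 0-1)."""
    return (1 - trust_weight) * engagement + trust_weight * trust

# Invented example posts: a high-engagement, low-trust outlet versus
# steadier, more trusted brands.
posts = [
    {"outlet": "Hyperpartisan Daily", "engagement": 0.9, "trust": 0.2},
    {"outlet": "Local Gazette",       "engagement": 0.4, "trust": 0.8},
    {"outlet": "Broadsheet",          "engagement": 0.6, "trust": 0.7},
]

for post in sorted(posts,
                   key=lambda p: feed_score(p["engagement"], p["trust"]),
                   reverse=True):
    print(post["outlet"], round(feed_score(post["engagement"], post["trust"]), 2))
# With these made-up numbers, the low-trust outlet drops from first
# (on engagement alone) to last once trust is weighted in.
```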
Misinformation and illegal or extremist content are not the only concerns: antitrust issues are another. Google was fined €2.4bn by the EU in 2017 for prioritising its own shopping service in search results. The addictive qualities of social media sites, which exploit the basic human need for affirmation in order to capture as much of their users’ attention as possible, have also come under scrutiny.
Meanwhile, the question of who owns the information on networks and platforms – and by extension, who can profit from it – remains fundamentally unresolved. To a platform that either sells products or sells information about its users to advertisers, a user’s greatest value lies in the data trail they leave behind as they visit sites and share and interact with content.
The increasing sophistication of AI and an imminent move away from text-based search pose extra challenges. For example, voice-controlled devices, which typically return a single answer rather than a ranked list of results, make it more difficult to deliver ‘balanced’ search results and raise further antitrust issues.
Our Commission will consider:
- What would a healthy platform ecosystem look like? Would it:
- Foster creative content production?
- Provide a space for dissenting/diverse opinions?
- Promote social cohesion and democratic debate?
- Be of public value?
- Offer protection from illegal and/or harmful content?
- What is the ‘crisis’? What is the likely outcome of no intervention in the UK market? Will the market self-correct in time? If – thanks to advertiser pressure – it does, will this have a chilling effect on freedom of speech?
- What behaviours and practices should be part of platform companies’ duties and responsibilities?
- What new measures would create incentives for platform owners to behave as ‘responsible guardians’? For example:
- Better enforcement of existing legislation or standards (verification, fact-checking and trust marks)?
- New legislation or regulatory powers – through government regulation, co-regulation or self-regulation (codes of conduct)?
- A levy on (large) platform owners to support public service media, a sustainable press and/or digital/media literacy training?
- An independent body to monitor platform behaviour/practice with changes to liability?
- What lessons can we draw from legislative measures in Germany and elsewhere?
- Is it realistic to require platforms to be transparent about their algorithms and content moderation?
Regulation of intermediaries
There have been some efforts at a governmental level to address how intermediaries should deal with misinformation.
Germany’s Network Enforcement Act imposes fines on companies that do not take down hate speech promptly.
French President Emmanuel Macron wants to introduce legislation that could allow social media websites to be blocked during election periods if they are found to be distributing misinformation.
The UN, however, has warned against reactions that call for limits on freedom of expression. A joint declaration on freedom of expression, focusing on “so-called fake news, disinformation and propaganda”, was issued by the United Nations Special Rapporteur on Freedom of Opinion and Expression, David Kaye, in March 2017:
“General prohibitions on the dissemination of information based on vague and ambiguous ideas, including ‘false news’ or ‘non-objective information’, are incompatible with international standards for restrictions on freedom of expression.”
The Honest Ads Act, introduced in the US Senate in October 2017, would hold social media and other online platforms to the same political advertising transparency requirements that bind cable and broadcast systems.